arxiv_id (string, lengths 9-12) | paper (string, lengths 2.65k-90.8k) | targets (sequence, lengths 4-4) | targets_idx (sequence, lengths 4-4) | cite_corpus_id_map (string, lengths 108-31.6k)
---|---|---|---|---
2409.12635 | <|paper_start|> Title: EFA-YOLO: An Efficient Feature Attention Model for Fire and Flame Detection
Abstract: EFA-YOLO: An Efficient Feature Attention Model for Fire and Flame Detection: As a natural disaster with high suddenness and great destructiveness, fire has long posed a major threat to human society and the ecological environment. In recent years, with the rapid development of smart city and Internet of Things (IoT) technologies, fire detection systems based on deep learning have gradually become a key means to cope with fire hazards. However, existing fire detection models still face many challenges in terms of detection accuracy and real-time performance in complex contexts. To address these issues, we propose two key modules: EAConv (Efficient Attention Convolution) and EADown (Efficient Attention Downsampling). The EAConv module significantly improves the feature extraction efficiency by combining an efficient attention mechanism with depthwise separable convolution, while the EADown module enhances the accuracy and efficiency of feature downsampling by utilizing spatial and channel attention mechanisms in combination with pooling operations. Based on these two modules, we design an efficient and lightweight flame detection model, EFA-YOLO (Efficient Feature Attention YOLO). Experimental results show that EFA-YOLO has only 1.4M parameters, 4.6 GFLOPs, and an inference time per image on the CPU of only 22.19 ms. Compared with existing mainstream models (e.g., YOLOv5, YOLOv8, YOLOv9, and YOLOv10), EFA-YOLO exhibits a significant enhancement in detection accuracy (mAP) and inference speed, with the number of model parameters reduced by 94.6% and the inference speed improved by a factor of 88.
Introduction
\label{sec1}
As a kind of sudden and extremely destructive disaster <|cite_start|> (Reference: State of Wildfires 2023--2024: Abstract. Climate change contributes to the increased frequency and intensity of wildfires globally, with significant impacts on society and the environment. However, our understanding of the global distribution of extreme fires remains skewed, primarily influenced by media coverage and regionalised research efforts. This inaugural State of Wildfires report systematically analyses fire activity worldwide, identifying extreme events from the March 2023–February 2024 fire season. We assess the causes, predictability, and attribution of these events to climate change and land use and forecast future risks under different climate scenarios. During the 2023–2024 fire season, 3.9×106 km2 burned globally, slightly below the average of previous seasons, but fire carbon (C) emissions were 16 % above average, totalling 2.4 Pg C. Global fire C emissions were increased by record emissions in Canadian boreal forests (over 9 times the average) and reduced by low emissions from African savannahs. Notable events included record-breaking fire extent and emissions in Canada, the largest recorded wildfire in the European Union (Greece), drought-driven fires in western Amazonia and northern parts of South America, and deadly fires in Hawaii (100 deaths) and Chile (131 deaths). Over 232 000 people were evacuated in Canada alone, highlighting the severity of human impact. Our analyses revealed that multiple drivers were needed to cause areas of extreme fire activity. In Canada and Greece, a combination of high fire weather and an abundance of dry fuels increased the probability of fires, whereas burned area anomalies were weaker in regions with lower fuel loads and higher direct suppression, particularly in Canada. Fire weather prediction in Canada showed a mild anomalous signal 1 to 2 months in advance, whereas events in Greece and Amazonia had shorter predictability horizons. Attribution analyses indicated that modelled anomalies in burned area were up to 40 %, 18 %, and 50 % higher due to climate change in Canada, Greece, and western Amazonia during the 2023–2024 fire season, respectively. Meanwhile, the probability of extreme fire seasons of these magnitudes has increased significantly due to anthropogenic climate change, with a 2.9–3.6-fold increase in likelihood of high fire weather in Canada and a 20.0–28.5-fold increase in Amazonia. By the end of the century, events of similar magnitude to 2023 in Canada are projected to occur 6.3–10.8 times more frequently under a medium–high emission scenario (SSP370). This report represents our first annual effort to catalogue extreme wildfire events, explain their occurrence, and predict future risks. By consolidating state-of-the-art wildfire science and delivering key insights relevant to policymakers, disaster management services, firefighting agencies, and land managers, we aim to enhance society's resilience to wildfires and promote advances in preparedness, mitigation, and adaptation. New datasets presented in this work are available from https://doi.org/10.5281/zenodo.11400539 (Jones et al., 2024) and https://doi.org/10.5281/zenodo.11420742 (Kelley et al., 2024a).) 
<|cite_end|> <|cite_start|> (Reference: Wildland-Urban Interface fire exposure of rural settlements: The case of Montesinho Natural Park: ) <|cite_end|> <|cite_start|> (Reference: Co-occurrence of marine and atmospheric heatwaves with drought conditions and fire activity in the Mediterranean region: ) <|cite_end|>, fire has long posed a serious threat to human society and the natural environment. With the acceleration of urbanization and industrialization, the complexity of building structures and the density of crowds have gradually increased, making the frequency and hazard of fire rise <|cite_start|> (Reference: Integrating communities’ perspectives in understanding disaster risk: ) <|cite_end|> <|cite_start|> (Reference: Forest fire, thinning, and flood in wildland-urban interface: UAV and lidar-based estimate of natural disaster impacts: ) <|cite_end|> <|cite_start|> (Reference: Disaster Incident Analysis via Algebra Stories: ) <|cite_end|>. According to the statistics of the International Fire Protection Organization (IFPO), millions of fire accidents occur globally every year, which cause a large amount of loss of life and property and irreversible damage to the ecological environment. In forested areas, the rapid spread of fire often leads to the destruction of large areas of forest resources, which in turn exacerbates ecological problems such as soil erosion and air pollution.
The diversity and complexity of fire hazards make fire prevention and control particularly important. Fire hazards can come from a variety of sources such as building structures, electronic equipment, improper storage and handling of flammable materials, and are often hidden and sudden. Traditional fire detection techniques rely on devices such as smoke sensors and temperature alarms, and although these devices are able to detect the occurrence of fire to a certain extent, their response efficiency and accuracy are often limited in open spaces, outdoor environments, or in the early stages of a fire <|cite_start|> (Reference: A lightweight fire hazard recognition model for urban subterranean buildings suitable for resource-constrained embedded systems: ) <|cite_end|> <|cite_start|> (Reference: A theoretical framework for improved fire suppression by linking management models with smart early fire detection and suppression technologies: ) <|cite_end|>. Especially in the early stages of a fire, flames are small and not easily captured by conventional sensors, which increases the difficulty of preventing and controlling fire hazards.
Therefore, the development of novel fire detection technologies, especially intelligent fire detection systems based on image processing and deep learning <|cite_start|> (Reference: A one stream three-dimensional convolutional neural network for fire recognition based on spatio-temporal fire analysis: ) <|cite_end|> <|cite_start|> (Reference: Review of Modern Forest Fire Detection Techniques: Innovations in Image Processing and Deep Learning: Fire detection and extinguishing systems are critical for safeguarding lives and minimizing property damage. These systems are especially vital in combating forest fires. In recent years, several forest fires have set records for their size, duration, and level of destruction. Traditional fire detection methods, such as smoke and heat sensors, have limitations, prompting the development of innovative approaches using advanced technologies. Utilizing image processing, computer vision, and deep learning algorithms, we can now detect fires with exceptional accuracy and respond promptly to mitigate their impact. In this article, we conduct a comprehensive review of articles from 2013 to 2023, exploring how these technologies are applied in fire detection and extinguishing. We delve into modern techniques enabling real-time analysis of the visual data captured by cameras or satellites, facilitating the detection of smoke, flames, and other fire-related cues. Furthermore, we explore the utilization of deep learning and machine learning in training intelligent algorithms to recognize fire patterns and features. Through a comprehensive examination of current research and development, this review aims to provide insights into the potential and future directions of fire detection and extinguishing using image processing, computer vision, and deep learning.) <|cite_end|> <|cite_start|> (Reference: Fire Identification Based on Novel Dense Generative Adversarial Networks: ) <|cite_end|>, is of extreme practical importance. These technologies can utilize the visual characteristics of flames to achieve early fire warning and reduce the hazards caused by fires through precise location and fast response. With the popularization of smart city and Internet of Things (IoT) technologies, vision-based fire detection systems will provide more effective solutions for the monitoring and management of modern fire hazards.
In recent years, with the increasing demand for fire prevention and control, the research on fire detection in different scenarios has gradually deepened. Researchers have proposed a variety of improvement methods to address the limitations of the existing fire detection techniques, especially on the problems of flame detection in complex backgrounds, urban fire monitoring with high real-time requirements, and flame detection of small targets, and a variety of innovative algorithms have emerged. To solve these problems, researchers have not only optimized the structure of the detection model, but also adopted technical means such as data enhancement, multi-scale feature extraction and attention mechanism. The work and contributions of some researchers in fire detection are detailed below <|cite_start|> (Reference: YOlOv5s-ACE: Forest Fire Object Detection Algorithm Based on Improved YOLOv5s: ) <|cite_end|> <|cite_start|> (Reference: An efficient lightweight CNN model for real-time fire smoke detection: ) <|cite_end|> <|cite_start|> (Reference: Efficient forest fire detection based on an improved YOLO model: ) <|cite_end|> <|cite_start|> (Reference: Deep Learning Method for Real-Time Fire Detection System for Urban Fire Monitoring and Control: ) <|cite_end|> <|cite_start|> (Reference: A Study of Novel Initial Fire Detection Algorithm Based on Deep Learning Method: ) <|cite_end|> <|cite_start|> (Reference: A lightweight fire detection algorithm for small targets based on YOLOv5s: ) <|cite_end|> <|cite_start|> (Reference: DATFNets-dynamic adaptive assigned transformer network for fire detection: ) <|cite_end|> <|cite_start|> (Reference: Complex Scenes Fire Object Detection Based on Feature Fusion and Channel Attention: ) <|cite_end|> <|cite_start|> (Reference: A lightweight early forest fire and smoke detection method: ) <|cite_end|> <|cite_start|> (Reference: Multiscale fire image detection method based on CNN and Transformer: ) <|cite_end|> <|cite_start|> (Reference: Early Stage Fire Detection System Based on Shallow Guide Deep Network: ) <|cite_end|> <|cite_start|> (Reference: An Improved YOLOv5s Fire Detection Model: ) <|cite_end|> <|cite_start|> (Reference: FS-YOLO: a multi-scale SAR ship detection network in complex scenes: ) <|cite_end|> <|cite_start|> (Reference: Improving YOLOX network for multi-scale fire detection: ) <|cite_end|>.
Wang et al. <|cite_start|> (Reference: YOlOv5s-ACE: Forest Fire Object Detection Algorithm Based on Improved YOLOv5s: ) <|cite_end|> proposed the YOLOv5s-ACE algorithm to address the problems of low detection accuracy, slow detection and rough feature extraction in the context of complex forest fires. The algorithm first extends the small object sample set by Copy-Paste data augmentation to reduce the risk of overfitting during model training. Secondly, an atrous spatial pyramid pooling (ASPP) module is chosen to replace the SPP module in YOLOv5, which expands the receptive field and improves the accurate localization of small-target forest flames. Finally, the Convolutional Block Attention Module (CBAM) is added to further filter the key features and reduce the background interference. Sun et al. <|cite_start|> (Reference: An efficient lightweight CNN model for real-time fire smoke detection: ) <|cite_end|> proposed the AERNet model to address the high false detection rate of existing fire detectors under multi-scale variations of flames and smoke and in complex backgrounds. The model employs a lightweight backbone network, Squeeze-and-Excitation GhostNet, to reduce the number of model parameters and enhance the feature extraction capability. In addition, by constructing a multi-scale detection module, the contributions of different features are selectively emphasized both spatially and channel-wise, which improves the detection accuracy and speed. Cao et al. <|cite_start|> (Reference: Efficient forest fire detection based on an improved YOLO model: ) <|cite_end|> proposed a novel detection technique based on an improved YOLOv5 model to address the failure of existing fire detection methods to detect small or hidden fires. The efficiency of the YOLOv5 model in feature extraction is improved by incorporating a global attention mechanism and a reparameterized convolution module, and a bidirectional feature pyramid network (BiFPN) is used for feature information fusion, which improves the processing of local information. Yang et al. <|cite_start|> (Reference: Deep Learning Method for Real-Time Fire Detection System for Urban Fire Monitoring and Control: ) <|cite_end|> proposed a lightweight YOLOv5s-based detection model to meet the real-time image processing requirements of urban fire monitoring. The model introduces the Squeeze-and-Excitation module for image filtering and classification to meet the demand for rapid data screening in smart city fire monitoring systems. Yu et al. <|cite_start|> (Reference: A Study of Novel Initial Fire Detection Algorithm Based on Deep Learning Method: ) <|cite_end|> proposed a deep learning-based early fire detection algorithm to address the high false-alarm rate of traditional smoke alarms. The algorithm combines a smoke detector, a thermal imaging camera, and a YOLOv7 model, and ultimately achieves accurate detection of actual fires by eliminating the bounding boxes of non-fire reports through a deep learning model. Lv et al. <|cite_start|> (Reference: A lightweight fire detection algorithm for small targets based on YOLOv5s: ) <|cite_end|> proposed a lightweight fire detection algorithm based on YOLOv5s to address the low recognition rate of fire detection algorithms for small fire targets in complex environments.
By introducing the CoT (Contextual Transformer) structure as well as the CSP1\_CoT module into the backbone network, the number of model parameters is effectively reduced while the detection ability for small targets is improved. Wang et al. <|cite_start|> (Reference: DATFNets-dynamic adaptive assigned transformer network for fire detection: ) <|cite_end|> proposed the DATFNets framework to address the limitations of fire detection in complex contexts, optimizing network performance through a dynamic adaptive assignment strategy and a weighted loss function. Cao et al. <|cite_start|> (Reference: Complex Scenes Fire Object Detection Based on Feature Fusion and Channel Attention: ) <|cite_end|> proposed a fire detection method based on feature fusion and channel attention for the detection of small-target flames in complex scenes. The method enhances the feature extraction capability by using deformable convolution in the backbone network, and improves the localization of small fire targets through the channel attention mechanism. Chen et al. <|cite_start|> (Reference: A lightweight early forest fire and smoke detection method: ) <|cite_end|> proposed the GS-YOLOv5 model, which adopts the Super-SPPF structure and the C3Ghost module to effectively reduce the number of model parameters, and introduces the coordinate attention (CA) module to improve the detection accuracy. Wu et al. <|cite_start|> (Reference: Multiscale fire image detection method based on CNN and Transformer: ) <|cite_end|> proposed a multi-scale fire detection method combining CNN and Transformer, using a CNN module for shallow feature extraction and a Transformer module for global perception in deep feature extraction. Li et al. <|cite_start|> (Reference: Early Stage Fire Detection System Based on Shallow Guide Deep Network: ) <|cite_end|>, in response to the difficulty traditional fire detectors have in effectively detecting small fires at an early stage, proposed an early fire detection system based on a shallow guide deep network (SGDNet). The system first extracts flame features in the YCbCr color space, and then fuses the shallow and deep features through the SGD module. The model is optimized with a redesigned backbone, detection head and IoU, which enables efficient detection on embedded devices. Dou et al. <|cite_start|> (Reference: An Improved YOLOv5s Fire Detection Model: ) <|cite_end|> proposed an image-based non-contact fire detection technique to address the susceptibility of contact fire sensors to interference from non-fire particles. Their study demonstrated the advantages of YOLOv5 in mAP and FPS by comparing eight existing object detection models, and further optimized the YOLOv5s network by introducing a CBAM module, a BiFPN structure, and deconvolution, which significantly improves the detection accuracy and processing speed of the model. Wang et al. <|cite_start|> (Reference: FS-YOLO: a multi-scale SAR ship detection network in complex scenes: ) <|cite_end|> proposed the FS-YOLO model in response to the limitations of YOLOv7 in recognizing small, dense fire and smoke targets. The model reduces local feature dependency by enhancing the Swin Transformer module and introduces an efficient channel attention mechanism to reduce false alarms in fire detection.
In addition, the study developed a dual dataset containing real fire scenes and fire and smoke images to simulate complex conditions such as occlusion and lens blurring. Wang et al. <|cite_start|> (Reference: Improving YOLOX network for multi-scale fire detection: ) <|cite_end|> proposed an improved YOLOX multi-scale fire detection method to address the ineffectiveness of traditional fire detection methods when flame and smoke targets span a wide range of scales. The method reduces the information loss of high-level feature maps and enhances the feature representation capability by designing a novel feature pyramid network (HC-FPN). In addition, a small-target data augmentation strategy is used to extend the forest fire dataset so that the model is better adapted to real forest fire scenarios.
Although researchers have improved detection accuracy and speed by introducing attention mechanisms, lightweight backbone networks, and multi-scale feature extraction, problems such as large numbers of model parameters and high computational complexity remain for fire monitoring with high real-time requirements. To cope with these problems, we propose an innovative flame detection model, EFA-YOLO (Efficient Feature Attention YOLO). The model realizes efficient feature extraction and downsampling through two key modules: EAConv (Efficient Attention Convolution) and EADown (Efficient Attention Downsampling). The EAConv module combines an efficient attention mechanism with depthwise separable convolution to improve feature extraction efficiency, while the EADown module enhances the accuracy and efficiency of feature downsampling by fusing spatial and channel attention mechanisms with pooling operations.
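To make the two modules above concrete, the following is a minimal PyTorch sketch of how EAConv and EADown could be assembled from the ingredients named in the text: a depthwise separable convolution plus an attention branch for EAConv, and spatial and channel attention fused with max and average pooling for EADown. The specific attention designs, layer ordering, kernel sizes, and channel settings below are illustrative assumptions; this introduction does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SpatialAttention(nn.Module):
    """Spatial attention over pooled channel statistics (assumed design)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))


class EAConv(nn.Module):
    """Efficient Attention Convolution: depthwise separable conv + channel attention."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU(inplace=True)
        self.attn = ChannelAttention(out_ch)

    def forward(self, x):
        return self.attn(self.act(self.bn(self.pointwise(self.depthwise(x)))))


class EADown(nn.Module):
    """Efficient Attention Downsampling: max/avg pooling fused with attention."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.channel_attn = ChannelAttention(out_ch)
        self.spatial_attn = SpatialAttention()

    def forward(self, x):
        # Halve the spatial resolution with max and average pooling, then fuse.
        pooled = torch.cat([F.max_pool2d(x, 2), F.avg_pool2d(x, 2)], dim=1)
        return self.spatial_attn(self.channel_attn(self.bn(self.proj(pooled))))


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    y = EADown(32, 64)(EAConv(32, 32)(x))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

In a YOLO-style backbone, blocks of this kind would stand in for standard convolutions and strided downsampling stages, which is one way the parameter and FLOP reductions reported above could be realized.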
Overall, the contributions of this paper are as follows:
\begin{itemize}
\item \textbf{Two key modules are proposed, EAConv and EADown:} The EAConv module dramatically improves the efficiency and accuracy of feature extraction by means of an efficient attention mechanism and depthwise separable convolution. The EADown module combines spatial and channel attention mechanisms and introduces max pooling and average pooling operations, which enhance the performance of feature downsampling.
\item \textbf{Designed and implemented an efficient and lightweight flame detection model, EFA-YOLO:} The model significantly reduces the number of parameters and the computational complexity while maintaining high detection accuracy, which makes it especially suitable for real-time fire detection applications. EFA-YOLO improves detection in complex backgrounds through effective multi-scale feature fusion. Compared with mainstream models (e.g., YOLOv5, YOLOv8, YOLOv9, and YOLOv10), EFA-YOLO exhibits significant improvement in both detection accuracy (mAP) and inference speed. Experimental results show that EFA-YOLO reduces the number of model parameters by 94.6\% and improves the inference speed by a factor of 88, which greatly improves real-time detection performance.
\item \textbf{Provides an effective lightweight solution for embedded devices and smart city scenarios:} Due to its lightweight design, EFA-YOLO is able to operate efficiently on resource-constrained devices, is suitable for fire monitoring systems in smart cities, and provides a flexible and scalable model architecture for future fire detection technologies.
\end{itemize} <|paper_end|> | [
"<|reference_start|> DATFNets-dynamic adaptive assigned transformer network for fire detection: <|reference_end|>",
"<|reference_start|> Early Stage Fire Detection System Based on Shallow Guide Deep Network: <|reference_end|>",
"<|reference_start|> An Improved YOLOv5s Fire Detection Model: <|reference_end|>",
"<|reference_start|> FS-YOLO: a multi-scale SAR ship detection network in complex scenes: <|reference_end|>"
] | [
17,
21,
36,
37
] | {"<|multi_cite_1_1|>": "ss-2366092", "<|multi_cite_1_2|>": "ss-2366093", "<|multi_cite_1_3|>": "ss-2366094", "<|multi_cite_2_1|>": "ss-2366095", "<|multi_cite_2_2|>": "ss-2366096", "<|multi_cite_2_3|>": "ss-2366097", "<|multi_cite_3_2|>": "ss-2366098", "<|multi_cite_3_3|>": "ss-2366099", "<|multi_cite_4_1|>": "ss-2366100", "<|multi_cite_4_2|>": "ss-2366101", "<|multi_cite_4_3|>": "ss-2366102", "<|multi_cite_5_1|>": "ss-2366103", "<|multi_cite_5_2|>": "ss-2366104", "<|multi_cite_5_3|>": "ss-2366105", "<|multi_cite_5_4|>": "ss-2366106", "<|multi_cite_5_5|>": "ss-2366107", "<|multi_cite_5_6|>": "ss-2366108", "<|multi_cite_5_7|>": "ss-2366109", "<|multi_cite_5_8|>": "ss-2366110", "<|multi_cite_5_9|>": "ss-2366111", "<|multi_cite_5_10|>": "ss-2366112", "<|multi_cite_5_11|>": "ss-2366113", "<|multi_cite_5_12|>": "ss-2366114", "<|multi_cite_5_13|>": "ss-2366115", "<|multi_cite_5_14|>": "ss-2366116", "<|cite_6|>": "ss-2366103", "<|cite_7|>": "ss-2366104", "<|cite_8|>": "ss-2366105", "<|cite_9|>": "ss-2366106", "<|cite_10|>": "ss-2366107", "<|cite_11|>": "ss-2366108", "<|cite_12|>": "ss-2366109", "<|cite_13|>": "ss-2366110", "<|cite_14|>": "ss-2366111", "<|cite_15|>": "ss-2366112", "<|cite_16|>": "ss-2366113", "<|cite_17|>": "ss-2366114", "<|cite_18|>": "ss-2366115", "<|cite_19|>": "ss-2366116"} |
1903.11332 | <|paper_start|> Title: Speed Invariant Time Surface for Learning to Detect Corner Points with Event-Based Cameras
Abstract: Speed Invariant Time Surface for Learning to Detect Corner Points with Event-Based Cameras: We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different compared to standard intensity images, and simple extensions of corner detection methods designed for these images do not perform well on event-based data. We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras ---our implementation processes up to 1.6Mev/s on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones. Our method also naturally assigns a confidence score for the corners, which can be useful for postprocessing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state-of-the-art with extensive experiments, showing better performance.
Introduction
\label{sec:introduction}
\begin{figure}
\centering
\includegraphics[width=1\linewidth,height=0.8\linewidth]{introv_h.pdf}
\begin{tabular}{cccc}
\hspace{-0.4cm} {\small { \bf (a) Events } } &
\hspace{-0.15cm}{\small {\bf (b) evFAST <|cite_start|> (Reference: {Fast Event-based Corner Detection: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel- level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state- of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the lat- est event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of pro- cessing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.) <|cite_end|> }}&
\hspace{-0.2cm}{\small{\bf (c) evHarris <|cite_start|> (Reference: Fast Event-based Harris Corner Detection Exploiting the Advantages of Event-driven Cameras: The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.) <|cite_end|> }}&
\hspace{-0.1cm}{\small{\bf (d) Ours }} \end{tabular}
\caption{{\bf(a)} Stream of events generated by an \eb camera moving in front
of a checkerboard pattern. Black dots represent events with a negative
polarity, white dots events with a positive polarity. {\bf(b-c)} Standard \eb
corner detectors <|cite_start|> (Reference: {Fast Event-based Corner Detection: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel- level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state- of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the lat- est event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of pro- cessing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.) <|cite_end|> <|cite_start|> (Reference: Fast Event-based Harris Corner Detection Exploiting the Advantages of Event-driven Cameras: The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.) <|cite_end|> are not robust to direction changes of
the camera, and the corners cannot be reliably tracked over time without a
very complex tracking scheme. {\bf(d)} By training a classifier to detect
corners from \eb data, our method can reliably detect corners under even
abrupt changes of direction. A simple nearest neighbor tracker produces
continuous trajectories of the corners over time.}
\vspace{-3mm}
\label{fig:intro}
\end{figure}
By very efficiently capturing local illuminance changes~('events'), \eb
cameras <|cite_start|> (Reference: A QVGA 143
dB Dynamic Range Frame-Free PWM Image Sensor With
Lossless Pixel-Level Video Compression and Time-Domain
CDS: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of <;0.25% rms. SNR is >56 dB (9.3 bit) for >10 Lx illuminance.) <|cite_end|> <|cite_start|> (Reference: A 128$\,\times$ 128 1.5% Contrast Sensitivity 0.9% FPN 3 µs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers: Dynamic Vision Sensors (DVS) have recently appeared as a new paradigm for vision sensing and processing. They feature unique characteristics such as contrast coding under wide illumination variation, micro-second latency response to fast stimuli, and low output data rates (which greatly improves the efficiency of post-processing stages). They can track extremely fast objects (e.g., time resolution is better than 100 kFrames/s video) without special lighting conditions. Their availability has triggered a new range of vision applications in the fields of surveillance, motion analyses, robotics, and microscopic dynamic observations. One key DVS feature is contrast sensitivity, which has so far been reported to be in the 10-15% range. In this paper, a novel pixel photo sensing and transimpedance pre-amplification stage makes it possible to improve by one order of magnitude contrast sensitivity (down to 1.5%) and power (down to 4 mW), reduce the best reported FPN (Fixed Pattern Noise) by a factor of 2 (down to 0.9%), while maintaining the shortest reported latency (3 μs) and good Dynamic Range (120 dB), and further reducing overall area (down to 30 × 31 μm per pixel). The only penalty is the limitation of intrascene Dynamic Range to 3 decades. A 128 × 128 DVS test prototype has been fabricated in standard 0.35 μm CMOS and extensive experimental characterization results are provided.) <|cite_end|> open the door to novel very fast
and low-power computer vision algorithms able to deal with large dynamic
ranges <|cite_start|> (Reference: {Event-Based Visual Flow: This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method adequacy with high data sparseness and temporal resolution of event-based acquisition that allows the computation of motion flow with microsecond accuracy and at very low computational cost.) <|cite_end|> <|cite_start|> (Reference: Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera: ) <|cite_end|> <|cite_start|> (Reference: EVO: A geometric approach to event-based 6-dof parallel tracking and mapping in real time: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.) <|cite_end|> <|cite_start|> (Reference: A Low Power, High Throughput, Fully Event-Based Stereo
System: We introduce a stereo correspondence system implemented fully on event-based digital hardware, using a fully graph-based non von-Neumann computation model, where no frames, arrays, or any other such data-structures are used. This is the first time that an end-to-end stereo pipeline from image acquisition and rectification, multi-scale spatiotemporal stereo correspondence, winner-take-all, to disparity regularization is implemented fully on event-based hardware. Using a cluster of TrueNorth neurosynaptic processors, we demonstrate their ability to process bilateral event-based inputs streamed live by Dynamic Vision Sensors (DVS), at up to 2,000 disparity maps per second, producing high fidelity disparities which are in turn used to reconstruct, at low power, the depth of events produced from rapidly changing scenes. Experiments on real-world sequences demonstrate the ability of the system to take full advantage of the asynchronous and sparse nature of DVS sensors for low power depth reconstruction, in environments where conventional frame-based cameras connected to synchronous processors would be inefficient for rapidly moving objects. System evaluation on event-based sequences demonstrates a ~ 200 × improvement in terms of power per pixel per disparity map compared to the closest state-of-the-art, and maximum latencies of up to 11ms from spike injection to disparity map ejection.) <|cite_end|>. However, because the
events are created asynchronously, as shown in Fig.~\ref{fig:intro}~{(a)}, novel
algorithms have to be developed to perform fundamental computer vision tasks
that are typically performed on regular frame images.
One fundamental task is feature point detection, which is important
for applications with very strong dynamics such as UAV navigation, where motion
blur makes classical frame-based approaches less robust, or visual odometry in
High Dynamic Range~(HDR) conditions, among others. Inspired by the vast
literature on frame-based feature point detection, some works adapted
frame-based corner detectors to \eb data. Typically, a local spatial descriptor
is built around an event, for example by accumulating events in a given time
window <|cite_start|> (Reference: Fast Event-based Harris Corner Detection Exploiting the Advantages of Event-driven Cameras: The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.) <|cite_end|>, or by considering the times of arrival of the
events <|cite_start|> (Reference: {Fast Event-based Corner Detection: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel- level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state- of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the lat- est event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of pro- cessing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.) <|cite_end|>. Then, a classical test, such as <|cite_start|> (Reference: A combined corner and edge detector: The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.) <|cite_end|> can
be applied to this 2D spatial neighborhood. However, the resulting detectors do
not take into consideration the specific characteristics of \eb data, such as
different noise patterns, responses to changes of direction, illumination
changes, etc. Even if efforts have been made in order to design better tests
for \eb cameras <|cite_start|> (Reference: {Fast Event-based Corner Detection: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel- level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state- of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the lat- est event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of pro- cessing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.) <|cite_end|> <|cite_start|> (Reference: Asynchronous Corner Detection and Tracking for Event Cameras in Real Time: The recent emergence of bioinspired event cameras has opened up exciting new possibilities in high-frequency tracking, bringing robustness to common problems in traditional vision, such as lighting changes and motion blur. In order to leverage these attractive attributes of the event cameras, research has been focusing on understanding how to process their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event-stream essentially forming frames of events grouped according to their timestamp, we are still to exploit the power of these cameras. In this spirit, this letter proposes a new, purely event-based corner detector, and a novel corner tracker, demonstrating that it is possible to detect corners and track them directly on the event stream in real time. Evaluation on benchmarking datasets reveals a significant boost in the number of detected corners and the repeatability of such detections over the state of the art even in challenging scenarios with the proposed approach while enabling more than a 4$\times$ speed-up when compared to the most efficient algorithm in the literature. The proposed pipeline detects and tracks corners at a rate of more than 7.5 million events per second, promising great impact in high-speed applications.) <|cite_end|>, hand-crafted detectors remain
unstable and corners cannot be reliably detected over
time (Fig.~\ref{fig:intro}~{(b-c)}).
In this paper, we propose a learning approach to \eb feature detection. We
train a classifier to label individual events as generated by a moving feature
point or not. The main advantage of taking a learning approach is to obtain
more stable corners: As shown in Fig.~\ref{fig:intro}, a typical error made by
previous detectors is that they are sensitive to changes of the
apparent motion of the feature points. This is because corners in \eb cameras
are not invariant under changes of direction, in contrast to corners in
intensity images. Previous detectors also often erroneously detect points along
edges because of noisy events, while such points cannot be detected in a stable
way. Learning makes the detection more robust to motion changes and noise,
without having to manually design an \emph{ad hoc} method.
Our classification approach relies on a novel formulation of the Time
Surface <|cite_start|> (Reference: Asynchronous frameless event-based optical flow: ) <|cite_end|> <|cite_start|> (Reference: {Fast Event-based Corner Detection: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel- level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state- of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the lat- est event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of pro- cessing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.) <|cite_end|>, which is another contribution of this
work. The Time Surface is a representation that accumulates the information from events
over time, and is a common tool used in \eb vision, including to detect
corner points. In our work, we also use a Time Surface as input to the
classifier, but we show how to efficiently create a Time Surface that is
invariant to the objects' speed. Previous work <|cite_start|> (Reference: {ACE: An efficient asynchronous corner tracker for event cameras: The emergence of bio-inspired event cameras has opened up new exciting possibilities in high-frequency tracking, overcoming some of the limitations of traditional frame-based vision (e.g. motion blur during high-speed motions or saturation in scenes with high dynamic range). As a result, research has been focusing on the processing of their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event-stream into frame-like representations, we are yet to harness the true power of these cameras. In this paper, we propose the ACE tracker: a purely asynchronous framework to track corner-event features. Evaluation on benchmarking datasets reveals significant improvements in accuracy and computational efficiency in comparison to state-of-the-art event-based trackers. ACE achieves robust performance even in challenging scenarios, where traditional frame-based vision algorithms fail.) <|cite_end|> already
introduced a method for computing a time surface invariant to speed; however,
it is too slow to compute and therefore
incompatible with the high event rate of \eb
cameras. The invariance to speed of our Time Surface is important both to achieve
classification performance and also to keep the classifier small,
which makes computation fast.
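As a point of reference, a standard (non-speed-invariant) time surface can be maintained with a few array operations per event, as in the sketch below; the decay constant tau and the patch size are illustrative placeholders, and the speed-invariant formulation proposed in this paper replaces this timestamp-decay construction (its details are given later in the paper, not here).

```python
import numpy as np


def update_and_extract(ts, event, tau=50e3, r=3):
    """Maintain a per-pixel latest-timestamp map and return a decayed local patch.

    ts    : (H, W) array of the last event timestamp at each pixel (microseconds).
    event : (x, y, t) tuple for the incoming event (polarity is ignored here).
    tau   : exponential decay constant in microseconds (illustrative value).
    r     : half-width of the extracted square patch.
    """
    x, y, t = event
    ts[y, x] = t
    patch = ts[y - r:y + r + 1, x - r:x + r + 1]  # assumes the event is away from the borders
    return np.exp(-(t - patch) / tau)             # 1 at the freshest pixels, ~0 at stale ones


# Example on a 240x180 sensor: the second patch still "remembers" the first event.
surface = np.zeros((180, 240))
update_and_extract(surface, (120, 90, 1_000_000.0))
print(update_and_extract(surface, (121, 90, 1_050_000.0)).shape)  # (7, 7)
```

The dependence on tau is what makes this representation speed sensitive: fast and slow motions of the same corner produce different decay profiles, which is precisely the issue the speed-invariant formulation addresses.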
One critical aspect of our learning-based approach is indeed that classification
must be performed extremely fast, otherwise we would lose the advantage of the
capture efficiency of \eb cameras. We therefore chose to use a Random Forest,
as Random Forests are very efficient without having to use a GPU (unlike Deep
Networks), which would be counter-productive since we target low-power
applications. In fact, parallelizing computation as done by GPUs is not well adapted
to the sparse and asynchronous nature of the events.
Our current implementation processes up to $1.6\cdot10^6$ events
per second on a single CPU.
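A hypothetical sketch of the classification stage with scikit-learn is shown below: each event contributes one flattened time-surface patch, and the forest outputs a corner probability that can serve as the confidence score mentioned in the abstract. The patch size, number of trees, and tree depth are placeholder values rather than the settings used in the paper, and a tuned single-CPU implementation rather than scikit-learn would be needed to approach the reported 1.6 Mev/s.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one flattened time-surface patch per event,
# labeled 1 if the event was generated by a moving corner and 0 otherwise.
rng = np.random.default_rng(0)
patches = rng.random((10_000, 11 * 11))      # placeholder 11x11 patches
labels = rng.integers(0, 2, size=10_000)     # placeholder labels

clf = RandomForestClassifier(n_estimators=30, max_depth=12, n_jobs=-1)
clf.fit(patches, labels)

# At run time each incoming event yields one patch; the forest returns a corner
# probability that can be thresholded or kept as a per-corner confidence score.
corner_probability = clf.predict_proba(patches[:5])[:, 1]
print(corner_probability)
```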
To evaluate the quality of our detector, we also release a new high-resolution
benchmark dataset. We propose a metric which is independent of ground truth
keypoints extracted from gray level images, which are often used but would
introduce a strong bias in the evaluation. Specifically, we compare our
approach to different detectors in combination with a simple nearest neighbor
based tracking, showing that, thanks to the temporal continuity of the events, a
very simple tracking rule can lead to state-of-the-art results.
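The nearest-neighbor rule mentioned above can be as simple as the following sketch: each newly detected corner is attached to the spatially closest live track, subject to distance and time-gap thresholds, and otherwise starts a new track. The threshold values are illustrative assumptions; the paper does not give specific values in this introduction.

```python
def assign_to_track(corner, tracks, max_dist=3.0, max_dt=20e3):
    """Attach a detected corner (x, y, t) to the closest live track, or start a new one.

    tracks is a list of dicts holding the last corner of each track; the distance and
    time-gap thresholds are illustrative placeholders (pixels and microseconds).
    """
    x, y, t = corner
    best, best_d = None, max_dist
    for track in tracks:
        lx, ly, lt = track["last"]
        d = ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5
        if d <= best_d and (t - lt) <= max_dt:
            best, best_d = track, d
    if best is None:
        tracks.append({"last": (x, y, t)})
    else:
        best["last"] = (x, y, t)


tracks = []
for corner in [(10.0, 12.0, 0.0), (10.5, 12.2, 5_000.0), (40.0, 7.0, 6_000.0)]:
    assign_to_track(corner, tracks)
print(len(tracks))  # 2: the first two corners form one trajectory, the third starts another
```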
\vspace{-1mm} <|paper_end|> | [
"<|reference_start|> A QVGA 143\ndB Dynamic Range Frame-Free PWM Image Sensor With\nLossless Pixel-Level Video Compression and Time-Domain\nCDS: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of <;0.25% rms. SNR is >56 dB (9.3 bit) for >10 Lx illuminance. <|reference_end|>",
"<|reference_start|> EVO: A geometric approach to event-based 6-dof parallel tracking and mapping in real time: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras. <|reference_end|>",
"<|reference_start|> A Low Power, High Throughput, Fully Event-Based Stereo\nSystem: We introduce a stereo correspondence system implemented fully on event-based digital hardware, using a fully graph-based non von-Neumann computation model, where no frames, arrays, or any other such data-structures are used. This is the first time that an end-to-end stereo pipeline from image acquisition and rectification, multi-scale spatiotemporal stereo correspondence, winner-take-all, to disparity regularization is implemented fully on event-based hardware. Using a cluster of TrueNorth neurosynaptic processors, we demonstrate their ability to process bilateral event-based inputs streamed live by Dynamic Vision Sensors (DVS), at up to 2,000 disparity maps per second, producing high fidelity disparities which are in turn used to reconstruct, at low power, the depth of events produced from rapidly changing scenes. Experiments on real-world sequences demonstrate the ability of the system to take full advantage of the asynchronous and sparse nature of DVS sensors for low power depth reconstruction, in environments where conventional frame-based cameras connected to synchronous processors would be inefficient for rapidly moving objects. System evaluation on event-based sequences demonstrates a ~ 200 × improvement in terms of power per pixel per disparity map compared to the closest state-of-the-art, and maximum latencies of up to 11ms from spike injection to disparity map ejection. <|reference_end|>",
"<|reference_start|> A combined corner and edge detector: The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed. <|reference_end|>"
] | [
4,
8,
9,
12
] | {"<|cite_1|>": "ss-964225", "<|cite_2|>": "ss-1210878", "<|multi_cite_3_1|>": "ss-964225", "<|multi_cite_3_2|>": "ss-1210878", "<|multi_cite_4_2|>": "ss-904708", "<|multi_cite_4_3|>": "ss-842991", "<|multi_cite_5_1|>": "ss-959172", "<|multi_cite_5_2|>": "ss-737406", "<|multi_cite_5_3|>": "ss-765048", "<|multi_cite_5_4|>": "ss-891716", "<|cite_6|>": "ss-1210878", "<|cite_7|>": "ss-964225", "<|cite_8|>": "ss-1515232", "<|multi_cite_9_1|>": "ss-964225", "<|multi_cite_9_2|>": "ss-1201832", "<|multi_cite_10_1|>": "ss-1273219", "<|multi_cite_10_2|>": "ss-964225", "<|cite_11|>": "ss-767088"} |
1307.1101 | <|paper_start|> Title: Mixed-Timescale Precoding and Cache Control in Cached MIMO Interference Network
Abstract: Mixed-Timescale Precoding and Cache Control in Cached MIMO Interference Network: Consider media streaming in MIMO interference networks whereby multiple base stations (BS) simultaneously deliver media to their associated users using fixed data rates. The performance is fundamentally limited by the cross-link interference. We propose a cache-induced opportunistic cooperative MIMO (CoMP) for interference mitigation. By caching a portion of the media files, the BSs opportunistically employ CoMP to transform the cross-link interference into spatial multiplexing gain. We study a mixed-timescale optimization of MIMO precoding and cache control to minimize the transmit power under the rate constraint. The cache control is to create more CoMP opportunities and is adaptive to the long-term popularity of the media files. The precoding is to guarantee the rate requirement and is adaptive to the channel state information and cache state at the BSs. The joint stochastic optimization problem is decomposed into a short-term precoding and a long-term cache control problem. We propose a precoding algorithm which converges to a stationary point of the short-term problem. Based on this, we exploit the hidden convexity of the long-term problem and propose a low complexity and robust solution using stochastic subgradient. The solution has significant gains over various baselines and does not require explicit knowledge of the media popularity.
Introduction
Media streaming is going to be one of the major applications in wireless
networks. For example, it is envisioned that a significant portion
of the capacity demand in future wireless systems will come from media
streaming applications. In this paper, we consider media streaming
in MIMO interference networks whereby multiple BSs simultaneously
deliver media to their associated users using fixed data rates. The
performance of this system is fundamentally limited by the inter-cell
interference from the cross-links. In traditional cellular networks,
the inter-cell interference is mitigated using frequency planning techniques
such as frequency reuse or fractional frequency reuse <|cite_start|> (Reference: Fractional Frequency Reuse and Interference Suppression for OFDMA Networks: The downlink performance of cellular networks is known to be strongly limited by inter-cell interference. In order to mitigate this interference, a number of frequency reuse schemes have recently been proposed. This paper discusses a novel fractional frequency reuse (FFR) scheme combined with interference suppression for orthogonal frequency division multiple access (OFDMA) networks, which are currently being considered in LTE-A and WiMAX IEEE 802.16m standardization processes. We confine to the case of cell edge users and show that the novel FFR scheme improves the spectral efficiency by allowing one out-of-cell interference. Then the proposed subcarrier and rate allocation ensures interference exploitation by the mobile station (MS) which results in the reduction of power consumption at the base stations (BSs). Interestingly no inter-cell interference coordination but only a priori frequency planning is required in the proposed scheme.) <|cite_end|>.
To further improve the spectrum efficiency, more advanced techniques
such as cooperative MIMO (CoMP) <|cite_start|> (Reference: Cooperative multicell zero-forcing beamforming in cellular downlink channels: In this work, a multicell cooperative zero-forcing beamforming (ZFBF) scheme combined with a simple user selection procedure is considered for the Wyner cellular downlink channel. The approach is to transmit to the user with the ldquobestrdquo local channel in each cell. The performance of this suboptimal scheme is investigated in terms of the conventional sum-rate scaling law and the sum-rate offset for an increasing number of users per cell. We term this characterization of the sum-rate for large number of users as high-load regime characterization, and point out the similarity of this approach to the standard affine approximation used in the high-signal-to-noise ratio (SNR) regime. It is shown that, under an overall power constraint, the suboptimal cooperative multicell ZFBF scheme achieves the same sum-rate growth rate and slightly degraded offset law, when compared to an optimal scheme deploying joint multicell dirty-paper coding (DPC), asymptotically with the number of users per cell. Moreover, the overall power constraint is shown to ensure in probability, equal per-cell power constraints when the number of users per-cell increases.) <|cite_end|> and
coordinated MIMO <|cite_start|> (Reference: Coordinating Multiple Antenna Cellular Networks to Achieve Enormous Spectral Efficiency: Intercell interference limits the capacity of wireless networks. To mitigate this interference we explore coherently coordinated transmission (CCT) from multiple base stations to each user. To treat users fairly, we explore equal rate (ER) networks. We evaluate the downlink network efficiency of CCT as compared to serving each user with single base transmission (SBT) with a separate base uniquely assigned to each user. Efficiency of ER networks is measured as total network throughput relative to the number of network antennas at 10% user outage. Efficiency is compared relative to the baseline of single base transmission with power control, (ER-SBT), where base antenna transmissions are not coordinated and apart from power control and the assignment of 10% of the users to outage, nothing is done to mitigate interference. We control the transmit power of ER systems to maximise the common rate for ER-SBT, ER-CCT based on zero forcing, and ER-CCT employing dirty paper coding. We do so for (no. of transmit antennas per base, no. of receive antennas per user) equal to (1,1), (2,2) and (4,4). We observe that CCT mutes intercell interference enough, so that enormous spectral efficiency improvement associated with using multiple antennas in isolated communication links occurs as well for the base-to-user links in a cellular network.) <|cite_end|> have been proposed
for future wireless systems. The CoMP technique can transform the
cross-link interference into spatial multiplexing gain by sharing
both real-time channel state information (CSI) and payload data among
the concerned BSs. However, it requires high capacity backhaul for
payload exchange between BSs, which is a cost bottleneck especially
in dense small cell networks. On the other hand, the coordinated MIMO
is a more cost effective technique as it only requires the exchange
of real-time CSIs among the BSs to perform joint precoding. Many MIMO
precoding optimization algorithms have been proposed for coordinated
MIMO. For example, in <|cite_start|> (Reference: {An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel: Consider the MIMO interfering broadcast channel whereby multiple base stations in a cellular network simultaneously transmit signals to a group of users in their own cells while causing interference to the users in other cells. The basic problem is to design linear beamformers that can maximize the system throughput. In this paper we propose a linear transceiver design algorithm for weighted sum-rate maximization that is based on iterative minimization of weighted mean squared error (MSE). The proposed algorithm only needs local channel knowledge and converges to a stationary point of the weighted sum-rate maximization problem. Furthermore, we extend the algorithm to a general class of utility functions and establish its convergence. The resulting algorithm can be implemented in a distributed asynchronous manner. The effectiveness of the proposed algorithm is validated by numerical experiments.) <|cite_end|>, a WMMSE algorithm is
proposed to find a stationary point of the weighted sum-rate maximization
problem for multi-cell downlink systems. In <|cite_start|> (Reference: Duality, Polite Water-filling, and Optimization for MIMO B-MAC Interference Networks and iTree Networks: This paper gives the long sought network version of water-filling named as polite water-filling. Unlike in single-user MIMO channels, where no one uses general purpose optimization algorithms in place of the simple and optimal water-filling for transmitter optimization, the traditional water-filling is generally far from optimal in networks as simple as MIMO multiaccess channels (MAC) and broadcast channels (BC), where steepest ascent algorithms have been used except for the sum-rate optimization. This is changed by the polite water-filling that is optimal for all boundary points of the capacity regions of MAC and BC and for all boundary points of a set of achievable regions of a more general class of MIMO B-MAC interference networks, which is a combination of multiple interfering broadcast channels, from the transmitter point of view, and multiaccess channels, from the receiver point of view, including MAC, BC, interference channels, X networks, and most practical wireless networks as special case. It is polite because it strikes an optimal balance between reducing interference to others and maximizing a link's own rate. Employing it, the related optimizations can be vastly simplified by taking advantage of the structure of the problems. Deeply connected to the polite water-filling, the rate duality is extended to the forward and reverse links of the B-MAC networks. As a demonstration, weighted sum-rate maximization algorithms based on polite water-filling and duality with superior performance and low complexity are designed for B-MAC networks and are analyzed for Interference Tree (iTree) Networks, a sub-class of the B-MAC networks that possesses promising properties for further information theoretic study.) <|cite_end|> <|cite_start|> (Reference: MIMO B-MAC Interference Network Optimization under Rate Constraints by Polite Water-filling and Duality: We take two new approaches to design efficient algorithms for transmitter optimization under rate constraints, to guarantee the Quality of Service in general MIMO interference networks, which is a combination of multiple interfering broadcast channels (BC) and multiaccess channels (MAC) and is named B-MAC Networks. Two related optimization problems, maximizing the minimum of weighted rates under a sum-power constraint and minimizing the sum-power under rate constraints, are considered. The first approach takes advantage of existing efficient algorithms for SINR problems by building a bridge between rate and SINR through the design of optimal mappings between them. The approach can be applied to other optimization problems as well. The second approach employs polite water-filling, which is the optimal network version of water-filling that we recently found. It replaces most generic optimization algorithms currently used for networks and reduces the complexity while demonstrating superior performance even in non-convex cases. Both centralized and distributed algorithms are designed and the performance is analyzed in addition to numeric examples.) 
<|cite_end|> <|cite_start|> (Reference: Polite Water-Filling for Weighted Sum-Rate Maximization in MIMO B-MAC Networks Under Multiple Linear Constraints: Optimization under multiple linear constraints is important for practical systems with individual power constraints, per-antenna power constraints, and/or interference constraints as in cognitive radios. While for single-user multiple-input multiple-output (MIMO) channel transmitter optimization, no one uses general purpose convex programming because water-filling is optimal and much simpler, it is not true for MIMO multiaccess channels (MAC), broadcast channels (BC), and the nonconvex optimization of interference networks because the traditional water-filling is far from optimal for networks. We recently found the right form of water-filling, polite water-filling, for capacity or achievable regions of the general MIMO interference networks, named B-MAC networks, which include BC, MAC, interference channels, X networks, and most practical wireless networks as special cases. In this paper, we extend the polite water-filling results from a single linear constraint to multiple linear constraints and use weighted sum-rate maximization as an example to show how to design high efficiency and low complexity algorithms, which find optimal solution for convex cases and locally optimal solution for nonconvex cases. Several times faster convergence speed and orders of magnitude higher accuracy than the state-of-the-art are demonstrated by numerical examples.) <|cite_end|>,
the authors proposed \textit{polite water-filling} method for precoding
optimization in B-MAC interference networks based on the duality principle
of interference networks. Although the coordinated MIMO requires smaller
backhaul capacity, the overall performance is usually much lower than
that of CoMP. Recently, there have been some works on multi-cell
coordination that take the backhaul limitation into account. In <|cite_start|> (Reference: Distributed
multicell beamforming with limited intercell coordination: This paper studies distributed optimization schemes for multicell joint beamforming and power allocation in time-division-duplex (TDD) multicell downlink systems where only limited-capacity intercell information exchange is permitted. With an aim to maximize the worst-user signal-to-interference-and-noise ratio (SINR), we devise a hierarchical iterative algorithm to optimize downlink beamforming and intercell power allocation jointly in a distributed manner. The proposed scheme is proved to converge to the global optimum. For fast convergence and to reduce the burden of intercell parameter exchange, we further propose to exploit previous iterations adaptively. Results illustrate that the proposed scheme can achieve near-optimal performance even with a few iterations, hence providing a good tradeoff between performance and backhaul consumption. The performance under quantized parameter exchange is also examined.) <|cite_end|>,
a distributed and hierarchical solution of joint beamforming and power
allocation was proposed to maximize the worst-user SINR in time-division-duplex
(TDD) multicell downlink systems where only limited inter-cell information
exchange is permitted. In <|cite_start|> (Reference: Joint Beamforming and Power Control in Coordinated Multicell: Max-Min Duality, Effective Network and Large System Transition: This paper studies joint beamforming and power control in a coordinated multicell downlink system that serves multiple users per cell to maximize the minimum weighted signal-to-interference-plus-noise ratio. The optimal solution and distributed algorithm with geometrically fast convergence rate are derived by employing the nonlinear Perron-Frobenius theory and the multicell network duality. The iterative algorithm, though operating in a distributed manner, still requires instantaneous power update within the coordinated cluster through the backhaul. The backhaul information exchange and message passing may become prohibitive with increasing number of transmit antennas and increasing number of users. In order to derive asymptotically optimal solution, random matrix theory is leveraged to design a distributed algorithm that only requires statistical information. The advantage of our approach is that there is no instantaneous power update through backhaul. Moreover, by using nonlinear Perron-Frobenius theory and random matrix theory, an effective primal network and an effective dual network are proposed to characterize and interpret the asymptotic solution.) <|cite_end|>, random matrix
theory is leveraged to design a distributed joint beamforming and
power control algorithm that only requires statistical information.
Such design reduces the amount of control signaling over the backhaul.
An interesting question is, can we achieve the CoMP gain with reduced
backhaul bandwidth consumption? We show that this is possible for
media streaming applications by using a novel \textit{cache-induced
opportunistic CoMP} scheme proposed in this paper. Specifically, we
can opportunistically transform the interference network into a CoMP
broadcast channel by caching a portion of the media files at the BSs.
As a result, there are two transmission modes at the physical layer,
namely, the \textit{CoMP mode} and the \textit{coordinated MIMO mode},
depending on the cache state at the BSs. If the payload data accessed
by each user exists in the cache of the BSs, the BSs can engage in
CoMP and therefore, enjoy a large performance gain without consuming
the backhaul bandwidth. Otherwise, coordinated MIMO is employed at
the BSs to serve the users. Hence, there is a cache-induced topology
change in the physical layer (dynamic CoMP opportunity) of the MIMO
interference network. As such, a MIMO interference network employing
the cache-induced opportunistic CoMP is called a \textit{cached MIMO
interference network} in this paper. With high capacity caches at
the BSs and a proper caching strategy, the opportunity of CoMP in
the cached MIMO interference network can be very large and thus the
proposed solution will have a significant gain over the coordinated
MIMO scheme with even smaller backhaul consumption. Note that in the
proposed solution, the reduced backhaul consumption is due to the
reduced payload data transmission over the backhaul. The payload data
transmission consumes much more backhaul bandwidth than the exchange
of control signaling because the former needs to be done on a per-symbol
basis but the latter needs to be done on a per-frame basis. Hence,
the backhaul saving of the proposed solution is much more significant
compared to schemes that only reduce the control signaling in the backhaul <|cite_start|> (Reference: Distributed
multicell beamforming with limited intercell coordination: This paper studies distributed optimization schemes for multicell joint beamforming and power allocation in time-division-duplex (TDD) multicell downlink systems where only limited-capacity intercell information exchange is permitted. With an aim to maximize the worst-user signal-to-interference-and-noise ratio (SINR), we devise a hierarchical iterative algorithm to optimize downlink beamforming and intercell power allocation jointly in a distributed manner. The proposed scheme is proved to converge to the global optimum. For fast convergence and to reduce the burden of intercell parameter exchange, we further propose to exploit previous iterations adaptively. Results illustrate that the proposed scheme can achieve near-optimal performance even with a few iterations, hence providing a good tradeoff between performance and backhaul consumption. The performance under quantized parameter exchange is also examined.) <|cite_end|> <|cite_start|> (Reference: Joint Beamforming and Power Control in Coordinated Multicell: Max-Min Duality, Effective Network and Large System Transition: This paper studies joint beamforming and power control in a coordinated multicell downlink system that serves multiple users per cell to maximize the minimum weighted signal-to-interference-plus-noise ratio. The optimal solution and distributed algorithm with geometrically fast convergence rate are derived by employing the nonlinear Perron-Frobenius theory and the multicell network duality. The iterative algorithm, though operating in a distributed manner, still requires instantaneous power update within the coordinated cluster through the backhaul. The backhaul information exchange and message passing may become prohibitive with increasing number of transmit antennas and increasing number of users. In order to derive asymptotically optimal solution, random matrix theory is leveraged to design a distributed algorithm that only requires statistical information. The advantage of our approach is that there is no instantaneous power update through backhaul. Moreover, by using nonlinear Perron-Frobenius theory and random matrix theory, an effective primal network and an effective dual network are proposed to characterize and interpret the asymptotic solution.) <|cite_end|>. Since the cost of hard
disks is much lower than the cost of optical fiber backhaul, the proposed
solution is very cost effective.
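To make the cache-induced mode selection concrete, the following is an
illustrative sketch in our own placeholder notation (it is not necessarily
the formulation adopted later in the paper). Let $S_{u}$ denote the media
segment requested by user $u$ and let $\mathcal{C}_{b}$ denote the set of
segments cached at BS $b$. For a given frame, the transmission mode induced
by the cache state can be written as
\[
\textrm{mode}=\begin{cases}
\textrm{CoMP}, & \textrm{if }1\left(S_{u}\in\mathcal{C}_{b},\ \forall u,b\right)=1,\\
\textrm{coordinated MIMO}, & \textrm{otherwise},
\end{cases}
\]
i.e., CoMP is engaged only when every cooperating BS can reproduce all of
the requested payload from its local cache, so that no payload needs to
traverse the backhaul in that frame.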
The performance of the proposed solution depends heavily on the dynamic
caching strategy (which affects the opportunity of CoMP) and the MIMO
precoding design. We study a mixed-timescale joint optimization of
MIMO precoding and cache control in cached MIMO interference networks
to minimize the average sum transmit power subject to fixed data rate
constraints for all users. The role of cache control is to create
more CoMP opportunities and is adaptive to long-term popularity of
the media files (long-term control). The role of MIMO precoding optimization
is to exploit the CoMP opportunities (induced by the cache) to guarantee
the individual rate constraints for each user. As such, it is adaptive
to the instantaneous CSI and the \textit{cache state} at the BSs.
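For concreteness, an illustrative form of such a mixed-timescale problem
(written with our own placeholder symbols, which are not necessarily those
adopted later in the paper) is
\[
\min_{q}\ \mathbb{E}_{\mathbf{H},S}\left[\min_{\left\{ \mathbf{V}_{k}\right\} }\sum_{k}\textrm{Tr}\left(\mathbf{V}_{k}\mathbf{V}_{k}^{\dagger}\right)\right]\quad\textrm{s.t. }R_{k}\left(\left\{ \mathbf{V}_{k}\right\} ,\mathbf{H},S\right)\geq r_{k},\ \forall k,
\]
where $q$ is the long-term cache control variable, $S$ is the cache state
whose distribution depends on $q$, $\mathbf{H}$ is the instantaneous CSI,
$\mathbf{V}_{k}$ is the short-term precoder for user $k$, and $r_{k}$ is the
fixed streaming rate of user $k$. The inner minimization corresponds to the
short-term precoding problem solved in each frame, while the outer
minimization over $q$ corresponds to the long-term cache control problem.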
There are several first order technical challenges to be addressed.
\begin{itemize}
\item \textbf{Limited Cache Size}: The performance gain of the proposed
scheme depends heavily on the CoMP opportunity, which in turn depends
on the cache size and cache strategy. The BSs usually do not have
enough cache to store all the media files. As will be shown in Example
\ref{Naive-cahce-scheme}, when brute force caching is used, even
if a significant portion of the media files are cached at BSs, the
CoMP opportunity can still be very small and this is highly undesirable.
\item \textbf{Non-Convex Stochastic Optimization}: The mixed-timescale joint
optimization of MIMO precoding and cache control is a non-convex stochastic
optimization problem and the complexity of finding the optimal solution
is extremely high. For example, the short-term MIMO precoding optimization
in the interference networks is well known to be a difficult non-convex
problem. Furthermore, the objective function for long-term cache control
has no closed form expression because the short-term precoding problem
has no closed form solution and the popularity of the media files
is in general unknown.
\item \textbf{Complex Coupling between Cache Control and Precoding Optimization}:
Caching has been widely used in fixed line P2P systems <|cite_start|> (Reference: Peer assisted video streaming with supply-demand-based cache optimization: In this paper, we consider a hybrid P2P video on-demand architecture that utilizes both the server and the peer resources for efficient transmission of popular videos. In our system architecture, each peer dedicates some cache space to store a particular segment of a video file as well as some of its upload bandwidth to serve the cached segment to other peers. Peers join the system and issue a streaming request to a control server. Control server directs the peers to streaming servers or to other peers who have the desired video segments. Control server also decides which peer should cache which video segment. Our main contribution in this paper is to determine the proper caching strategies at peers such that we minimize the average load on the streaming servers. To minimize the server load, we pose the caching problem as a supply-demand-based utility optimization problem. By exploiting the inherent structure of a typical on-demand streaming application as well as the availability of a global view on the current supply-demand at the control server, we demonstrate how the system performance can be significantly improved over the brute-force caching decisions. In our analysis, we mainly consider three caching mechanisms. In the first mechanism (cache prefetching), a segment is prefetched to a given peer for caching purposes upon peer's arrival to the system regardless of whether that segment is currently demanded by that peer or not. In the second mechanism (opportunistic cache update), a peer has the option of replacing the segment that is currently in its cache with the last segment that it finished streaming. In the third mechanism, we combine both mechanisms as a hybrid caching strategy. In particular, we find that a dynamic-programming (DP)-based utility maximization solution using only the cache update method performs significantly better in reducing the server load. Furthermore, our findings suggest that even less sophisticated cache update solutions can perform almost as good as prefetching strategies in interesting regions of operation.) <|cite_end|>
and content distribution networks (CDNs) <|cite_start|> (Reference: Caching strategies in transcoding-enabled proxy systems for streaming media distribution networks: With the wide availability of high-speed network access, we are experiencing high quality streaming media delivery over the Internet. The emergence of ubiquitous computing enables mobile users to access the Internet with their laptops, PDAs, or even cell phones. When nomadic users connect to the network via wireless links or phone lines, high quality video transfer can be problematic due to long delay or size mismatch between the application display and the screen. Our proposed solution to this problem is to enable network proxies with the transcoding capability, and hence provide different, appropriate video quality to different network environment. The proxies in our transcoding-enabled caching (TeC) system perform transcoding as well as caching for efficient rich media delivery to heterogeneous network users. This design choice allows us to perform content adaptation at the network edges. We propose three different TeC caching strategies. We describe each algorithm and discuss its merits and shortcomings. We also study how the user access pattern affects the performance of TeC caching algorithms and compare them with other approaches. We evaluate TeC performance by conducting two types of simulation. Our first experiment uses synthesized traces while the other uses real traces derived from an enterprise media server logs. The results indicate that compared with the traditional network caches, with marginal transcoding load, TeC improves the cache effectiveness, decreases the user-perceived latency, and reduces the traffic between the proxy and the content origin server.) <|cite_end|>. In <|cite_start|> (Reference: FemtoCaching: Wireless Video Content Delivery through Distributed Caching Helpers: Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as "helpers"). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of $1-(1-1/d)^d$, where $d$ is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex that can be further reduced to a linear program. We present numerical results comparing the proposed schemes.) 
<|cite_end|>, a FemtoCaching scheme has also
been proposed for wireless systems. However, these schemes do not
consider cache-induced opportunistic CoMP among the BSs. Hence, the
cache control in the above works is independent of the physical layer
and is fundamentally different from our case where the cache control
and physical layer are coupled together. In our case, the cache control
will affect the physical layer dynamics seen by precoding
optimization due to different CoMP opportunities. On the other hand,
the short-term precoding strategy adopted in the physical layer will
also affect the cache control due to a different cost-reward dynamic.
\end{itemize}
To address the above challenges, we first propose a novel cache data
structure called \textit{MDS-coded random cache} which can significantly
improve the probability of CoMP. We then exploit the timescale separations
of the optimization variables to decompose the stochastic optimization
problem into a \textit{short-term precoding problem} and a \textit{long-term
stochastic cache control problem}. We generalize the WMMSE approach
in <|cite_start|> (Reference: {An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel: Consider the MIMO interfering broadcast channel whereby multiple base stations in a cellular network simultaneously transmit signals to a group of users in their own cells while causing interference to the users in other cells. The basic problem is to design linear beamformers that can maximize the system throughput. In this paper we propose a linear transceiver design algorithm for weighted sum-rate maximization that is based on iterative minimization of weighted mean squared error (MSE). The proposed algorithm only needs local channel knowledge and converges to a stationary point of the weighted sum-rate maximization problem. Furthermore, we extend the algorithm to a general class of utility functions and establish its convergence. The resulting algorithm can be implemented in a distributed asynchronous manner. The effectiveness of the proposed algorithm is validated by numerical experiments.) <|cite_end|> to find a stationary point for the short-term
precoding problem. To solve the long-term cache control problem, we
first show that despite the non-convexity in the short-term precoding
problem, there is a hidden convexity in the long-term stochastic cache
control problem. We propose a stochastic-subgradient-like iterative
solution and show that it converges to the optimal solution of this
long-term stochastic optimization problem. The proposed solution has
low complexity and does not require explicit knowledge of the popularity
of the media files. Finally, we illustrate with simulations that the
proposed solution achieves significant gains over various baselines
when the overhead in the backhaul is taken into account.
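As a rough sketch of the flavor of the stochastic-subgradient-like update
mentioned above (a generic projected stochastic subgradient step written
with our own placeholder symbols, not the exact algorithm developed later in
the paper), at frame $t$ the long-term cache control variable could be
updated as
\[
q^{\left(t+1\right)}=\Pi_{\mathcal{Q}}\left(q^{\left(t\right)}-\gamma_{t}\hat{g}^{\left(t\right)}\right),
\]
where $\hat{g}^{\left(t\right)}$ is a noisy subgradient of the long-term
objective observed from the current frame (i.e., from the realized CSI and
file request), $\gamma_{t}$ is a diminishing step size, and
$\Pi_{\mathcal{Q}}$ denotes projection onto the cache capacity constraint
set. Because the realized requests drive $\hat{g}^{\left(t\right)}$
directly, no explicit estimate of the file popularity is required.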
\textit{Notations}: The superscript $\left(\cdot\right)^{\dagger}$
denotes the Hermitian transpose. The notation $1\left(\cdot\right)$ denotes the
indicator function such that $1\left(E\right)=1$ if the event $E$
is true and $1\left(E\right)=0$ otherwise. The notation $\left[\mathbf{A}\right]_{i,j}$
represents the element at the $i$-th row and $j$-th column of a
matrix $\mathbf{A}$. For a square matrix $\mathbf{A}$, $\left|\mathbf{A}\right|$
denotes the determinant of $\mathbf{A}$ and $\mathbf{A}\succeq\mathbf{0}$
means that $\mathbf{A}$ is positive semidefinite. The notation $\left[a_{k}\right]_{k=1,...,K}$
denotes a $K\times1$ vector whose $k$-th element is $a_{k}$. <|paper_end|> | [
"<|reference_start|> Coordinating Multiple Antenna Cellular Networks to Achieve Enormous Spectral Efficiency: Intercell interference limits the capacity of wireless networks. To mitigate this interference we explore coherently coordinated transmission (CCT) from multiple base stations to each user. To treat users fairly, we explore equal rate (ER) networks. We evaluate the downlink network efficiency of CCT as compared to serving each user with single base transmission (SBT) with a separate base uniquely assigned to each user. Efficiency of ER networks is measured as total network throughput relative to the number of network antennas at 10% user outage. Efficiency is compared relative to the baseline of single base transmission with power control, (ER-SBT), where base antenna transmissions are not coordinated and apart from power control and the assignment of 10% of the users to outage, nothing is done to mitigate interference. We control the transmit power of ER systems to maximise the common rate for ER-SBT, ER-CCT based on zero forcing, and ER-CCT employing dirty paper coding. We do so for (no. of transmit antennas per base, no. of receive antennas per user) equal to (1,1), (2,2) and (4,4). We observe that CCT mutes intercell interference enough, so that enormous spectral efficiency improvement associated with using multiple antennas in isolated communication links occurs as well for the base-to-user links in a cellular network. <|reference_end|>",
"<|reference_start|> {An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel: Consider the MIMO interfering broadcast channel whereby multiple base stations in a cellular network simultaneously transmit signals to a group of users in their own cells while causing interference to the users in other cells. The basic problem is to design linear beamformers that can maximize the system throughput. In this paper we propose a linear transceiver design algorithm for weighted sum-rate maximization that is based on iterative minimization of weighted mean squared error (MSE). The proposed algorithm only needs local channel knowledge and converges to a stationary point of the weighted sum-rate maximization problem. Furthermore, we extend the algorithm to a general class of utility functions and establish its convergence. The resulting algorithm can be implemented in a distributed asynchronous manner. The effectiveness of the proposed algorithm is validated by numerical experiments. <|reference_end|>",
"<|reference_start|> Joint Beamforming and Power Control in Coordinated Multicell: Max-Min Duality, Effective Network and Large System Transition: This paper studies joint beamforming and power control in a coordinated multicell downlink system that serves multiple users per cell to maximize the minimum weighted signal-to-interference-plus-noise ratio. The optimal solution and distributed algorithm with geometrically fast convergence rate are derived by employing the nonlinear Perron-Frobenius theory and the multicell network duality. The iterative algorithm, though operating in a distributed manner, still requires instantaneous power update within the coordinated cluster through the backhaul. The backhaul information exchange and message passing may become prohibitive with increasing number of transmit antennas and increasing number of users. In order to derive asymptotically optimal solution, random matrix theory is leveraged to design a distributed algorithm that only requires statistical information. The advantage of our approach is that there is no instantaneous power update through backhaul. Moreover, by using nonlinear Perron-Frobenius theory and random matrix theory, an effective primal network and an effective dual network are proposed to characterize and interpret the asymptotic solution. <|reference_end|>",
"<|reference_start|> FemtoCaching: Wireless Video Content Delivery through Distributed Caching Helpers: Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as \"helpers\"). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of $1-(1-1/d)^d$, where $d$ is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex that can be further reduced to a linear program. We present numerical results comparing the proposed schemes. <|reference_end|>"
] | [
2,
3,
8,
13
] | {"<|cite_1|>": "ss-1029392", "<|cite_2|>": "ss-1009712", "<|cite_3|>": "ss-1027964", "<|cite_4|>": "ss-767261", "<|multi_cite_5_1|>": "arxiv-12812", "<|multi_cite_5_2|>": "arxiv-14766", "<|multi_cite_5_3|>": "ss-1690648", "<|cite_6|>": "ss-1297522", "<|cite_7|>": "arxiv-42968", "<|multi_cite_8_1|>": "ss-1297522", "<|multi_cite_8_2|>": "arxiv-42968", "<|cite_9|>": "ss-1690649", "<|cite_10|>": "ss-1690650", "<|cite_11|>": "arxiv-24739", "<|cite_12|>": "ss-767261"} |
2302.01857-1 | <|cite_start|> (Reference: A Syntactic Neural Model for General-Purpose Code Generation: We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.) <|cite_end|> <|cite_start|> (Reference: Abstract Syntax Networks for Code Generation and Semantic Parsing: Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with no task-specific engineering.) <|cite_end|> <|cite_start|> (Reference: TreeGen: A Tree-Based Transformer Architecture for Code Generation: A code generation system generates programming language code based on an input natural language description. State-of-the-art approaches rely on neural networks for code generation. However, these code generators suffer from two problems. One is the long dependency problem, where a code element often depends on another far-away code element. A variable reference, for example, depends on its definition, which may appear quite a few lines before. The other problem is structure modeling, as programs contain rich structural information. In this paper, we propose a novel tree-based neural architecture, TreeGen, for code generation. TreeGen uses the attention mechanism of Transformers to alleviate the long-dependency problem, and introduces a novel AST reader (encoder) to incorporate grammar rules and AST structures into the network. We evaluated TreeGen on a Python benchmark, HearthStone, and two semantic parsing benchmarks, ATIS and GEO. TreeGen outperformed the previous state-of-the-art approach by 4.5 percentage points on HearthStone, and achieved the best accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%). We also conducted an ablation test to better understand each component of our model.) <|cite_end|>leverages encoder-decoder architectures to generate ASTs. Different from DL-based code generation, our technique is designed for program repair and our proposed decoder is novel in architecture and domain-rule distillation. In addition to code generation, some DL-based techniques <|cite_start|> (Reference: Learning Structural Edits via Incremental Tree Transformations: While most neural generative models generate outputs in a single pass, the human creative process is usually one of iterative building and refinement. 
Recent work has proposed models of editing processes, but these mostly focus on editing sequential data and/or only model a single editing pass. In this paper, we present a generic model for incremental editing of structured data (i.e., "structural edits"). Particularly, we focus on tree-structured data, taking abstract syntax trees of computer programs as our canonical example. Our editor learns to iteratively generate tree edits (e.g., deleting or adding a subtree) and applies them to the partially edited data, thereby the entire editing process can be formulated as consecutive, incremental tree transformations. To show the unique benefits of modeling tree edits directly, we further propose a novel edit encoder for learning to represent edits, as well as an imitation learning method that allows the editor to be more robust. We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches that generate the edited program directly in one pass. Finally, we demonstrate that training our editor to imitate experts and correct its mistakes dynamically can further improve its performance.) <|cite_end|> <|cite_start|> (Reference: Learning to Represent Edits: We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.) <|cite_end|> <|cite_start|> (Reference: A Structural Model for Contextual Code Changes: We address the problem of predicting edit completions based on a learned model that was trained on past edits. Given a code snippet that is partially edited, our goal is to predict a completion of the edit for the rest of the snippet. We refer to this task as the EditCompletion task and present a novel approach for tackling it. The main idea is to directly represent structural edits. This allows us to model the likelihood of the edit itself, rather than learning the likelihood of the edited code. We represent an edit operation as a path in the program's Abstract Syntax Tree (AST), originating from the source of the edit to the target of the edit. Using this representation, we present a powerful and lightweight neural model for the EditCompletion task. We conduct a thorough evaluation, comparing our approach to a variety of representation and modeling approaches that are driven by multiple strong models such as LSTMs, Transformers, and neural CRFs. Our experiments show that our model achieves a 28% relative gain over state-of-the-art sequential models and 2x higher accuracy than syntactic models that learn to generate the edited code, as opposed to modeling the edits directly. Our code, dataset, and trained models are publicly available at https://github.com/tech-srl/c3po/ .) <|cite_end|>generate token-level or AST-level edits for program. Instead of generating edits, our approach directly generates patch code in the AST format via a novel decoder.
\rev{A recent direction of DL-based code generation is applying large language models (LLMs) trained on source code to generate code, such as CodeBert <|cite_start|> (Reference: CodeBERT: A Pre-Trained Model for Programming and Natural Languages: We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both “bimodal” data of NL-PL pairs and “unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NLPL probing.) <|cite_end|>, CodeT5 <|cite_start|> (Reference: CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation: Pre-trained models for Natural Languages (NL) like BERT and GPT have been recently shown to transfer well to Programming Languages (PL) and largely benefit a broad set of code-related tasks. Despite their success, most current methods either rely on an encoder-only (or decoder-only) pre-training that is suboptimal for generation (resp. understanding) tasks or process the code snippet in the same way as NL, neglecting the special characteristics of PL such as token types. We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code. Our code and pre-trained models are released at https: //github.com/salesforce/CodeT5 .) <|cite_end|>, CodeGen <|cite_start|> (Reference: A conversational paradigm for program synthesis: Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. 
Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called C ODE G EN , on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model C ODE G EN (with up to 16B parameters trained on TPU-v4) outperforms OpenAI’s Codex on the HumanEval benchmark. We make the training library JAX FORMER including checkpoints available as open source contribution: https:/) <|cite_end|>, InCoder <|cite_start|> (Reference: InCoder: A Generative Model for Code Infilling and Synthesis: Code is seldom written in a single left-to-right pass and is instead repeatedly edited and refined. We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) as well as editing (via infilling). InCoder is trained to generate code files from a large corpus of permissively licensed code, where regions of code have been randomly masked and moved to the end of each file, allowing code infilling with bidirectional context. Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable re-naming. We find that the ability to condition on bidirectional context substantially improves performance on these tasks, while still performing comparably on standard program synthesis benchmarks in comparison to left-to-right only models pretrained at similar scale. The InCoder models and code are publicly released. https://sites.google.com/view/incoder-code-models) <|cite_end|>, and
Codex <|cite_start|> (Reference: Evaluating Large Language Models Trained on Code: We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.) <|cite_end|>. These LLMs are generic models, while \tool is a customized model that contains a novel three-stage tree decoder and domain-knowledge distillation to fix bugs.} <|paper_end|> | [
"<|reference_start|> A Syntactic Neural Model for General-Purpose Code Generation: We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches. <|reference_end|>",
"<|reference_start|> Abstract Syntax Networks for Code Generation and Semantic Parsing: Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with no task-specific engineering. <|reference_end|>",
"<|reference_start|> CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation: Pre-trained models for Natural Languages (NL) like BERT and GPT have been recently shown to transfer well to Programming Languages (PL) and largely benefit a broad set of code-related tasks. Despite their success, most current methods either rely on an encoder-only (or decoder-only) pre-training that is suboptimal for generation (resp. understanding) tasks or process the code snippet in the same way as NL, neglecting the special characteristics of PL such as token types. We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers. Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning. Besides, we propose a novel identifier-aware pre-training task that enables the model to distinguish which code tokens are identifiers and to recover them when they are masked. Furthermore, we propose to exploit the user-written code comments with a bimodal dual generation task for better NL-PL alignment. Comprehensive experiments show that CodeT5 significantly outperforms prior methods on understanding tasks such as code defect detection and clone detection, and generation tasks across various directions including PL-NL, NL-PL, and PL-PL. Further analysis reveals that our model can better capture semantic information from code. Our code and pre-trained models are released at https: //github.com/salesforce/CodeT5 . <|reference_end|>",
"<|reference_start|> Evaluating Large Language Models Trained on Code: We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics. <|reference_end|>"
] | [
0,
1,
7,
10
] | {"<|cite_1|>": "ss-1288018", "<|multi_cite_2_1|>": "ss-685462", "<|multi_cite_2_2|>": "ss-1419852", "<|multi_cite_3_1|>": "arxiv-339875", "<|multi_cite_3_2|>": "arxiv-186744", "<|multi_cite_3_3|>": "ss-751579", "<|multi_cite_3_4|>": "ss-682591", "<|multi_cite_3_5|>": "ss-715104", "<|multi_cite_3_6|>": "arxiv-348606", "<|multi_cite_3_7|>": "arxiv-335081", "<|multi_cite_3_8|>": "arxiv-407944", "<|multi_cite_4_1|>": "ss-751579", "<|multi_cite_4_2|>": "ss-682591", "<|multi_cite_4_3|>": "arxiv-348606", "<|multi_cite_4_4|>": "arxiv-339875", "<|multi_cite_5_1|>": "ss-751579", "<|multi_cite_5_2|>": "ss-682591", "<|multi_cite_5_3|>": "arxiv-348606", "<|multi_cite_5_4|>": "arxiv-186744", "<|multi_cite_6_1|>": "ss-682591", "<|multi_cite_6_2|>": "arxiv-348606", "<|multi_cite_7_1|>": "ss-751579", "<|multi_cite_7_2|>": "ss-682591", "<|multi_cite_7_3|>": "arxiv-348606", "<|multi_cite_7_4|>": "arxiv-186744", "<|cite_8|>": "ss-728302", "<|cite_9|>": "ss-1250879", "<|multi_cite_10_1|>": "ss-685462", "<|multi_cite_10_2|>": "ss-1419852", "<|multi_cite_11_2|>": "ss-1286497", "<|multi_cite_12_1|>": "arxiv-196068", "<|multi_cite_12_2|>": "arxiv-165362", "<|multi_cite_13_1|>": "arxiv-104683", "<|multi_cite_13_2|>": "arxiv-179875", "<|multi_cite_13_3|>": "arxiv-142813", "<|multi_cite_14_1|>": "arxiv-339875", "<|multi_cite_14_2|>": "arxiv-186744", "<|multi_cite_14_3|>": "ss-751579", "<|multi_cite_14_4|>": "ss-682591", "<|multi_cite_14_5|>": "ss-715104", "<|multi_cite_14_6|>": "arxiv-348606", "<|multi_cite_14_7|>": "arxiv-342044", "<|multi_cite_15_1|>": "arxiv-186744", "<|multi_cite_15_2|>": "ss-751579", "<|multi_cite_15_3|>": "ss-682591", "<|multi_cite_16_1|>": "arxiv-232339", "<|multi_cite_16_2|>": "arxiv-348606", "<|multi_cite_16_3|>": "ss-715104", "<|multi_cite_17_1|>": "arxiv-348606", "<|multi_cite_17_2|>": "ss-715104", "<|cite_18|>": "arxiv-186744", "<|cite_19|>": "ss-751579", "<|cite_20|>": "ss-682591", "<|cite_21|>": "arxiv-339875", "<|cite_22|>": "ss-1250877", "<|cite_23|>": "arxiv-174528", "<|cite_24|>": "arxiv-339875", "<|cite_25|>": "arxiv-94386", "<|multi_cite_26_1|>": "arxiv-90034", "<|multi_cite_26_2|>": "arxiv-120978", "<|multi_cite_26_4|>": "arxiv-122498", "<|multi_cite_26_5|>": "ss-1218986", "<|multi_cite_27_1|>": "arxiv-317752", "<|multi_cite_27_2|>": "arxiv-178387", "<|multi_cite_27_3|>": "arxiv-267877", "<|cite_28|>": "ss-886618", "<|cite_29|>": "arxiv-364368", "<|cite_30|>": "ss-768849", "<|cite_31|>": "arxiv-412738", "<|cite_32|>": "arxiv-353610"} |
1604.03034 | <|paper_start|> Title: M3: Scaling Up Machine Learning via Memory Mapping
Abstract: M3: Scaling Up Machine Learning via Memory Mapping: To process data that do not fit in RAM, conventional wisdom would suggest using distributed approaches. However, recent research has demonstrated virtual memory's strong potential in scaling up graph mining algorithms on a single machine. We propose to use a similar approach for general machine learning. We contribute: (1) our latest finding that memory mapping is also a feasible technique for scaling up general machine learning algorithms like logistic regression and k-means, when data fits in or exceeds RAM (we tested datasets up to 190GB); (2) an approach, called M3, that enables existing machine learning algorithms to work with out-of-core datasets through memory mapping, achieving a speed that is significantly faster than a 4-instance Spark cluster, and comparable to an 8-instance cluster.
Introduction
Leveraging virtual memory to extend algorithms for out-of-core data has received increasing attention in data analytics communities.
Recent research demonstrated virtual memory's strong potential to scale up graph algorithms on a single PC <|cite_start|> (Reference: Scalability! But at what {COST}?: We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system's scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads.
We survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations.) <|cite_end|> <|cite_start|> (Reference: mmap: Fast billion-scale graph computation on a pc via memory mapping: Graph computation approaches such as GraphChi and TurboGraph recently demonstrated that a single PC can perform efficient computation on billion-node graphs. To achieve high speed and scalability, they often need sophisticated data structures and memory management strategies. We propose a minimalist approach that forgoes such requirements, by leveraging the fundamental memory mapping (MMap) capability found on operating systems. We contribute: (1) a new insight that MMap is a viable technique for creating fast and scalable graph algorithms that surpasses some of the best techniques; (2) the design and implementation of popular graph algorithms for billion-scale graphs with little code, thanks to memory mapping; (3) extensive experiments on real graphs, including the 6.6 billion edge Yahoo Web graph, and show that this new approach is significantly faster or comparable to the highly-optimized methods (e.g., 9.5X faster than GraphChi for computing PageRank on 1.47B edge Twitter graph). We believe our work provides a new direction in the design and development of scalable algorithms. Our packaged code is available at http://poloclub.gatech.edu/mmap/.) <|cite_end|>.
Available on almost all modern platforms, virtual-memory-based approaches are straightforward to implement and to use, and can handle graphs with as many as 6 billion edges <|cite_start|> (Reference: mmap: Fast billion-scale graph computation on a pc via memory mapping: Graph computation approaches such as GraphChi and TurboGraph recently demonstrated that a single PC can perform efficient computation on billion-node graphs. To achieve high speed and scalability, they often need sophisticated data structures and memory management strategies. We propose a minimalist approach that forgoes such requirements, by leveraging the fundamental memory mapping (MMap) capability found on operating systems. We contribute: (1) a new insight that MMap is a viable technique for creating fast and scalable graph algorithms that surpasses some of the best techniques; (2) the design and implementation of popular graph algorithms for billion-scale graphs with little code, thanks to memory mapping; (3) extensive experiments on real graphs, including the 6.6 billion edge Yahoo Web graph, and show that this new approach is significantly faster or comparable to the highly-optimized methods (e.g., 9.5X faster than GraphChi for computing PageRank on 1.47B edge Twitter graph). We believe our work provides a new direction in the design and development of scalable algorithms. Our packaged code is available at http://poloclub.gatech.edu/mmap/.) <|cite_end|>.
Some single-thread implementations on a PC can even outperform popular distributed systems like Spark (128 cores) <|cite_start|> (Reference: Scalability! But at what {COST}?: We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system's scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads.
We survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations.) <|cite_end|>.
Memory mapping a dataset into a machine's virtual memory space allows the dataset to be treated exactly as if it were an in-memory dataset.
The algorithm developer no longer needs to explicitly decide how to partition the (large) dataset, nor to manage which partitions should be loaded into or unloaded from RAM.
The OS performs these actions on the developer's behalf, paging the dataset in and out of RAM via highly optimized OS-level operations.
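As a minimal illustration of this pattern (a sketch, not the authors' M3 code; the file name, dtype, and array shape below are placeholders), an on-disk array can be exposed through NumPy's memory-mapping interface and then processed with the same batched code one would write for an in-memory array:
\begin{verbatim}
# Hypothetical sketch of out-of-core computation via memory mapping.
import numpy as np

n_rows, n_cols = 200_000, 100     # stand-in sizes; the real data may exceed RAM
path = "features.f32"             # hypothetical file name

# Create and fill a file-backed array in chunks (mode="w+" maps it read/write).
data = np.memmap(path, dtype=np.float32, mode="w+", shape=(n_rows, n_cols))
rng = np.random.default_rng(0)
chunk = 20_000
for start in range(0, n_rows, chunk):
    data[start:start + chunk] = rng.random((chunk, n_cols), dtype=np.float32)
data.flush()

# Downstream code treats `data` like an ordinary in-memory ndarray; the OS
# pages the touched regions in and out of RAM behind the scenes.
mean = np.zeros(n_cols, dtype=np.float64)
for start in range(0, n_rows, chunk):
    mean += data[start:start + chunk].sum(axis=0, dtype=np.float64)
mean /= n_rows
print(mean[:5])
\end{verbatim}
The same loop works unchanged when the mapped file is larger than RAM, since only the currently accessed pages need to be resident.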
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{compare}
\caption{a: \proj{} runtime scales linearly with data size, when data fits in or exceeds RAM.
b: \proj{}'s speed (one PC) comparable to 8-instance Spark (orange), and significantly faster than 4-instance Spark (light orange).}
\label{fig:scalability}
\end{figure} <|paper_end|> | [
"<|reference_start|> Scalability! But at what {COST}?: We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system's scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads. \n \nWe survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations. <|reference_end|>",
"<|reference_start|> mmap: Fast billion-scale graph computation on a pc via memory mapping: Graph computation approaches such as GraphChi and TurboGraph recently demonstrated that a single PC can perform efficient computation on billion-node graphs. To achieve high speed and scalability, they often need sophisticated data structures and memory management strategies. We propose a minimalist approach that forgoes such requirements, by leveraging the fundamental memory mapping (MMap) capability found on operating systems. We contribute: (1) a new insight that MMap is a viable technique for creating fast and scalable graph algorithms that surpasses some of the best techniques; (2) the design and implementation of popular graph algorithms for billion-scale graphs with little code, thanks to memory mapping; (3) extensive experiments on real graphs, including the 6.6 billion edge Yahoo Web graph, and show that this new approach is significantly faster or comparable to the highly-optimized methods (e.g., 9.5X faster than GraphChi for computing PageRank on 1.47B edge Twitter graph). We believe our work provides a new direction in the design and development of scalable algorithms. Our packaged code is available at http://poloclub.gatech.edu/mmap/. <|reference_end|>",
"<|reference_start|> mmap: Fast billion-scale graph computation on a pc via memory mapping: Graph computation approaches such as GraphChi and TurboGraph recently demonstrated that a single PC can perform efficient computation on billion-node graphs. To achieve high speed and scalability, they often need sophisticated data structures and memory management strategies. We propose a minimalist approach that forgoes such requirements, by leveraging the fundamental memory mapping (MMap) capability found on operating systems. We contribute: (1) a new insight that MMap is a viable technique for creating fast and scalable graph algorithms that surpasses some of the best techniques; (2) the design and implementation of popular graph algorithms for billion-scale graphs with little code, thanks to memory mapping; (3) extensive experiments on real graphs, including the 6.6 billion edge Yahoo Web graph, and show that this new approach is significantly faster or comparable to the highly-optimized methods (e.g., 9.5X faster than GraphChi for computing PageRank on 1.47B edge Twitter graph). We believe our work provides a new direction in the design and development of scalable algorithms. Our packaged code is available at http://poloclub.gatech.edu/mmap/. <|reference_end|>",
"<|reference_start|> Scalability! But at what {COST}?: We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system's scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads. \n \nWe survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations. <|reference_end|>"
] | [
0,
1,
2,
3
] | {"<|multi_cite_1_1|>": "ss-1066029", "<|multi_cite_1_2|>": "ss-2007566", "<|cite_2|>": "ss-2007566", "<|cite_3|>": "ss-1066029"} |
2212.04255 | <|paper_start|> Title: Fruit Quality Assessment with Densely Connected Convolutional Neural Network
Abstract: Fruit Quality Assessment with Densely Connected Convolutional Neural Network: Accurate recognition of food items along with quality assessment is of paramount importance in the agricultural industry. Such automated systems can speed up the wheel of the food processing sector and save tons of manual labor. In this connection, the recent advancement of Deep learning-based architectures has introduced a wide variety of solutions offering remarkable performance in several classification tasks. In this work, we have exploited the concept of Densely Connected Convolutional Neural Networks (DenseNets) for fruit quality assessment. The feature propagation towards the deeper layers has enabled the network to tackle the vanishing gradient problems and ensured the reuse of features to learn meaningful insights. Evaluating on a dataset of 19,526 images containing six fruits having three quality grades for each, the proposed pipeline achieved a remarkable accuracy of 99.67%. The robustness of the model was further tested for fruit classification and quality assessment tasks where the model produced a similar performance, which makes it suitable for real-life applications.
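As a rough sketch of the kind of model described above (the paper does not state which DenseNet variant or classification head is used, so densenet121 and the single linear layer below are assumptions for illustration), the 6-fruit, 3-grade setup can be posed as an 18-way classifier:
\begin{verbatim}
# Hedged sketch: DenseNet backbone with an 18-class head (6 fruits x 3 grades).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 6 * 3                       # six fruits, three quality grades each
model = models.densenet121(weights=None)  # variant is an assumption, not from the paper
                                          # (older torchvision: pretrained=False)
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

x = torch.randn(4, 3, 224, 224)           # dummy batch of RGB images
logits = model(x)
print(logits.shape)                       # torch.Size([4, 18])
\end{verbatim}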
Introduction
Fruit classification has emerged as an important aspect in the domains of agriculture, machine learning, and image classification in recent years. Fast and accurate fruit classification is a major challenge in increasing the efficiency of the farming sector <|cite_start|> (Reference: Fruits and vegetables quality evaluation using computer vision: A review: ) <|cite_end|>.
Deep learning models, which are basically based on artificial neural networks have shown formidable performances in fruit detection and classification tasks <|cite_start|> (Reference: Recent advancements in fruit detection and classification using deep learning techniques: Recent advances in computer vision have allowed broad applications in every area of life, and agriculture is not left out. For the agri-food industry, the use of advanced technology is essential. Owing to deep learning’s capability to learn robust features from images, it has witnessed enormous application in several fields. Fruit detection and classification remains challenging due to the form, color, and texture of different fruit species. While studying the impact of computer vision on fruit detection and classification, we pointed out that till 2018 many conventional machine learning methods were utilized while a few methods exploited the application of deep learning methods for fruit detection and classification. This has prompted us to pursue an extensive study on surveying and implementing deep learning models for fruit detection and classification. In this article, we intensively discussed the datasets used by many scholars, the practical descriptors, the model’s implementation, and the challenges of using deep learning to detect and categorize fruits. Lastly, we summarized the results of different deep learning methods applied in previous studies for the purpose of fruit detection and classification. This review covers the study of recently published articles that utilized deep learning models for fruit identification and classification. Additionally, we also implemented from scratch a deep learning model for fruit classification using the popular dataset “Fruit 360” to make it easier for beginner researchers in the field of agriculture to understand the role of deep learning in the agriculture domain.) <|cite_end|>.
The potential and prospect of Deep Learning (DL) were further explored using Convolutional Neural Network (CNN) based architectures <|cite_start|> (Reference: Determining the freshness of fruits in the food industry by image classification using transfer learning: ) <|cite_end|>. Albeit having substantial prospects, the main challenges that fruit identification research faces involve challenges pertaining to the irregularity of form, size, and variability in color <|cite_start|> (Reference: Recent advancements in fruit detection and classification using deep learning techniques: Recent advances in computer vision have allowed broad applications in every area of life, and agriculture is not left out. For the agri-food industry, the use of advanced technology is essential. Owing to deep learning’s capability to learn robust features from images, it has witnessed enormous application in several fields. Fruit detection and classification remains challenging due to the form, color, and texture of different fruit species. While studying the impact of computer vision on fruit detection and classification, we pointed out that till 2018 many conventional machine learning methods were utilized while a few methods exploited the application of deep learning methods for fruit detection and classification. This has prompted us to pursue an extensive study on surveying and implementing deep learning models for fruit detection and classification. In this article, we intensively discussed the datasets used by many scholars, the practical descriptors, the model’s implementation, and the challenges of using deep learning to detect and categorize fruits. Lastly, we summarized the results of different deep learning methods applied in previous studies for the purpose of fruit detection and classification. This review covers the study of recently published articles that utilized deep learning models for fruit identification and classification. Additionally, we also implemented from scratch a deep learning model for fruit classification using the popular dataset “Fruit 360” to make it easier for beginner researchers in the field of agriculture to understand the role of deep learning in the agriculture domain.) <|cite_end|>.
Classification of fresh and damaged fruits was explored through the means of CNN by Kumar \etal <|cite_start|> (Reference: A Novel Model to Detect and Classify Fresh and Damaged Fruits to Reduce
Food Waste Using a Deep Learning Technique: Due to a lack of efficient measures for dealing with food waste at many levels, including food supply chains, homes, and restaurants, the world’s food supply is shrinking at an alarming pace. In both homes and restaurants, overcooking and other factors are to be blamed for the majority of food that is wasted. Families are the primary source of food waste, and we sought to reduce this by identifying fresh and damaged food. In agriculture, the detection of rotting fruits becomes crucial. Despite the fact that people routinely classify healthy and rotten fruits, fruit growers find it ineffective. In contrast to humans, robots do not grow tired from doing the same thing again and again. Because of this, finding faults in fruits is a declared objective of the agricultural business in order to save labour, waste, manufacturing costs, and time spent on the process. An infected apple may infect a healthy one if the defects are not discovered. Food waste is more likely to occur as a consequence of this, which causes several problems. Input images are used to identify healthy and deteriorated fruits. Various fruits were employed in this study, including apples, bananas, and oranges. For classifying photographs into fresh and decaying fruits, softmax is used, while CNN obtains fruit image properties. A dataset from Kaggle was used to evaluate the suggested model’s performance, and it achieved a 97.14 percent accuracy rate. The suggested CNN model outperforms the current methods in terms of performance.) <|cite_end|>. While the work carried out by Kazi and Panda <|cite_start|> (Reference: Determining the freshness of fruits in the food industry by image classification using transfer learning: ) <|cite_end|> also explored the freshness of fruits, their approach differed by employing the power of transfer learning instead of a vanilla CNN.
Kumar's work <|cite_start|> (Reference: A Novel Model to Detect and Classify Fresh and Damaged Fruits to Reduce
Food Waste Using a Deep Learning Technique: Due to a lack of efficient measures for dealing with food waste at many levels, including food supply chains, homes, and restaurants, the world’s food supply is shrinking at an alarming pace. In both homes and restaurants, overcooking and other factors are to be blamed for the majority of food that is wasted. Families are the primary source of food waste, and we sought to reduce this by identifying fresh and damaged food. In agriculture, the detection of rotting fruits becomes crucial. Despite the fact that people routinely classify healthy and rotten fruits, fruit growers find it ineffective. In contrast to humans, robots do not grow tired from doing the same thing again and again. Because of this, finding faults in fruits is a declared objective of the agricultural business in order to save labour, waste, manufacturing costs, and time spent on the process. An infected apple may infect a healthy one if the defects are not discovered. Food waste is more likely to occur as a consequence of this, which causes several problems. Input images are used to identify healthy and deteriorated fruits. Various fruits were employed in this study, including apples, bananas, and oranges. For classifying photographs into fresh and decaying fruits, softmax is used, while CNN obtains fruit image properties. A dataset from Kaggle was used to evaluate the suggested model’s performance, and it achieved a 97.14 percent accuracy rate. The suggested CNN model outperforms the current methods in terms of performance.) <|cite_end|> showed an accuracy of 97.14\% while Kazi \etal <|cite_start|> (Reference: Determining the freshness of fruits in the food industry by image classification using transfer learning: ) <|cite_end|> reported having 99\% accuracy, which implies DL models can perform better through transfer learning and fine-tuning.
Siddiqi \etal <|cite_start|> (Reference: Automated apple defect detection using state-of-the-art object detection techniques: ) <|cite_end|> affirmed the findings reported by Valdez \etal <|cite_start|> (Reference: Apple Defect Detection Using Deep Learning Based Object Detection For Better Post Harvest Handling: The inclusion of Computer Vision and Deep Learning technologies in Agriculture aims to increase the harvest quality, and productivity of farmers. During postharvest, the export market and quality evaluation are affected by assorting of fruits and vegetables. In particular, apples are susceptible to a wide range of defects that can occur during harvesting or/and during the post-harvesting period. This paper aims to help farmers with post-harvest handling by exploring if recent computer vision and deep learning methods such as the YOLOv3 (Redmon & Farhadi (2018)) can help in detecting healthy apples from apples with defects.) <|cite_end|> and reiterated finding better results with SSD
The authors of <|cite_start|> (Reference: Fruit Freshness Grading Using Deep Learning: ) <|cite_end|> investigated several networks and proposed a CNN-YOLO induced regression network for fruit quality detection on six types of fruits that aligned with the work carried out using YOLO in <|cite_start|> (Reference: Apple Defect Detection Using Deep Learning Based Object Detection For Better Post Harvest Handling: The inclusion of Computer Vision and Deep Learning technologies in Agriculture aims to increase the harvest quality, and productivity of farmers. During postharvest, the export market and quality evaluation are affected by assorting of fruits and vegetables. In particular, apples are susceptible to a wide range of defects that can occur during harvesting or/and during the post-harvesting period. This paper aims to help farmers with post-harvest handling by exploring if recent computer vision and deep learning methods such as the YOLOv3 (Redmon & Farhadi (2018)) can help in detecting healthy apples from apples with defects.) <|cite_end|> <|cite_start|> (Reference: Automated apple defect detection using state-of-the-art object detection techniques: ) <|cite_end|>. Hussain \etal <|cite_start|> (Reference: A Simple and Efficient Deep Learning-Based Framework for Automatic Fruit
Recognition: Accurate detection and recognition of various kinds of fruits and vegetables by using the artificial intelligence (AI) approach always remain a challenging task due to similarity between various types of fruits and challenging environments such as lighting and background variations. Therefore, developing and exploring an expert system for automatic fruits' recognition is getting more and more important after many successful approaches; however, this technology is still far from being mature. The deep learning-based models have emerged as state-of-the-art techniques for image segmentation and classification and have a lot of promise in challenging domains such as agriculture, where they can deal with the large variability in data better than classical computer vision methods. In this study, we proposed a deep learning-based framework to detect and recognize fruits and vegetables automatically with difficult real-world scenarios. The proposed method might be helpful for the fruit sellers to identify and differentiate various kinds of fruits and vegetables that have similarities. The proposed method has applied deep convolutional neural network (DCNN) to the undertakings of distinguishing natural fruit images of the Gilgit-Baltistan (GB) region as this area is famous for fruits' production in Pakistan as well as in the world. The experimental outcomes demonstrate that the suggested deep learning algorithm has the effective capability of automatically recognizing the fruit with high accuracy of 96%. This high accuracy exhibits that the proposed approach can meet world application requirements.) <|cite_end|> provided a dataset worth 10,000 fruits images and proposed a Deep CNN achieving an accuracy of 96\%.
Meshram \etal <|cite_start|> (Reference: FruitNet: Indian fruits image dataset with quality for machine learning applications: ) <|cite_end|> curated a dataset containing more than 19,526 images of highly popular fruits in India with three quality labels, namely good, bad, and mixed quality. A framework named MNet was proposed in <|cite_start|> (Reference: MNet: A Framework to Reduce Fruit Image Misclassification.: ABSTRACT) <|cite_end|> for reducing fruit misclassification, where the authors curated a dataset having 12,000 images for binary classification and experimented with different state-of-the-art CNN architectures. <|paper_end|> | [
"<|reference_start|> Determining the freshness of fruits in the food industry by image classification using transfer learning: <|reference_end|>",
"<|reference_start|> Automated apple defect detection using state-of-the-art object detection techniques: <|reference_end|>",
"<|reference_start|> Automated apple defect detection using state-of-the-art object detection techniques: <|reference_end|>",
"<|reference_start|> A Simple and Efficient Deep Learning-Based Framework for Automatic Fruit\nRecognition: Accurate detection and recognition of various kinds of fruits and vegetables by using the artificial intelligence (AI) approach always remain a challenging task due to similarity between various types of fruits and challenging environments such as lighting and background variations. Therefore, developing and exploring an expert system for automatic fruits' recognition is getting more and more important after many successful approaches; however, this technology is still far from being mature. The deep learning-based models have emerged as state-of-the-art techniques for image segmentation and classification and have a lot of promise in challenging domains such as agriculture, where they can deal with the large variability in data better than classical computer vision methods. In this study, we proposed a deep learning-based framework to detect and recognize fruits and vegetables automatically with difficult real-world scenarios. The proposed method might be helpful for the fruit sellers to identify and differentiate various kinds of fruits and vegetables that have similarities. The proposed method has applied deep convolutional neural network (DCNN) to the undertakings of distinguishing natural fruit images of the Gilgit-Baltistan (GB) region as this area is famous for fruits' production in Pakistan as well as in the world. The experimental outcomes demonstrate that the suggested deep learning algorithm has the effective capability of automatically recognizing the fruit with high accuracy of 96%. This high accuracy exhibits that the proposed approach can meet world application requirements. <|reference_end|>"
] | [
7,
8,
12,
13
] | {"<|multi_cite_1_1|>": "ss-1227113", "<|cite_2|>": "ss-838970", "<|cite_3|>": "ss-1553129", "<|cite_4|>": "ss-838970", "<|cite_5|>": "ss-1553130", "<|cite_6|>": "ss-1553129", "<|cite_7|>": "ss-1553130", "<|cite_8|>": "ss-1553129", "<|cite_9|>": "ss-1553131", "<|cite_10|>": "arxiv-265101", "<|cite_11|>": "ss-1553132", "<|multi_cite_12_1|>": "arxiv-265101", "<|multi_cite_12_2|>": "ss-1553131", "<|cite_13|>": "ss-1553133", "<|cite_14|>": "ss-1553134", "<|cite_15|>": "ss-1553135"} |
2011.05710 | <|paper_start|> Title: Nondeterministic functional transducer inference algorithm
Abstract: Nondeterministic functional transducer inference algorithm: The purpose of this paper is to present an algorithm for inferring nondeterministic functional transducers. It has a lot in common with other well-known algorithms such as RPNI and OSTIA. Indeed we will argue that this algorithm is a generalisation of both of them. Functional transducers are all those nondeterministic transducers whose regular relation is a function. Epsilon transitions as well as subsequential output can be erased for such machines, with the exception of output for the empty string being lost. Learning partial functional transducers from negative examples is equivalent to learning total ones from positive-only data.
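To make the notion of a functional transducer concrete, the following small sketch (illustrative only, not taken from the paper) represents a nondeterministic transducer as a transition table and checks, on a set of sample inputs, that every accepting run of an input emits the same output string; deciding functionality in general of course requires more than sampling:
\begin{verbatim}
# Illustrative sketch of a nondeterministic string-to-string transducer and a
# sample-based check that its relation behaves like a function.
from typing import Dict, Set, Tuple

# transitions[(state, input_symbol)] -> set of (next_state, output_string)
transitions: Dict[Tuple[int, str], Set[Tuple[int, str]]] = {
    (0, "a"): {(1, "x"), (2, "x")},   # nondeterministic branch, same output
    (1, "b"): {(3, "y")},
    (2, "b"): {(3, "y")},
}
accepting = {3}

def outputs(word: str, state: int = 0, emitted: str = "") -> Set[str]:
    """Outputs produced over all accepting runs of `word` from `state`."""
    if not word:
        return {emitted} if state in accepting else set()
    result: Set[str] = set()
    for nxt, out in transitions.get((state, word[0]), set()):
        result |= outputs(word[1:], nxt, emitted + out)
    return result

def is_functional_on(samples) -> bool:
    """Functional on these samples: at most one output per accepted input."""
    return all(len(outputs(w)) <= 1 for w in samples)

print(outputs("ab"))                       # {'xy'}: two runs, one output
print(is_functional_on(["ab", "a", "b"]))  # True
\end{verbatim}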
Introduction
\IAENGPARstart{L}{earning}
of nondeterministic automata has always been a topic of great interest, although not many positive results were achieved. Most of the research has focused on weighted automata <|cite_start|> (Reference: Handbook of Weighted Automata: ) <|cite_end|> and probabilistic machines <|cite_start|> (Reference: {Weighted Finite-State Transducers in Speech
Recognition: We survey the use of weighted finite-state transducers (WFSTs) in speech recognition. We show that WFSTs provide a common and natural representation for hidden Markov models (HMMs), context-dependency, pronunciation dictionaries, grammars, and alternative recognition outputs. Furthermore, general transducer operations combine these representations flexibly and efficiently. Weighted determinization and minimization algorithms optimize their time and space requirements, and a weight pushing algorithm distributes the weights along the paths of a weighted transducer optimally for speech recognition. As an example, we describe a North American Business News (NAB) recognition system built using these techniques that combines the HMMs, full cross-word triphones, a lexicon of 40 000 words, and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real-time on a very simple decoder. In another example, we show that the same techniques can be used to optimize lattices for second-pass recognition. In a third example, we show how general automata operations can be used to assemble lattices from different recognizers to improve recognition performance.) <|cite_end|> <|cite_start|> (Reference: Weighted Finite-State Transducer Algorithms. An Overview: ) <|cite_end|>. Algorithms like APTI <|cite_start|> (Reference: Actively Learning Probabilistic Subsequential Transducers: In this paper we investigate learning of probabilistic subsequential transducers in an active learning environment. In our learning algorithm the learner interacts with an oracle by asking probabilistic queries on the observed data. We prove our algorithm in an identification in the limit model. We also provide experimental evidence to show the correctness and to analyze the learnability of the proposed algorithm.) <|cite_end|> allowed for learning transducers from distribution. Some attempts at generalising non-probabilistic machines were also made, such as the semi-deterministic transducers <|cite_start|> (Reference: A Canonical Semi-Deterministic Transducer: We prove the existence of a canonical form for semi-deterministic transducers with incomparable sets of output strings. Based on this, we develop an algorithm which learns semi-deterministic transducers given access to translation queries. We also prove that there is no learning algorithm for semi-deterministic transducers that uses only domain knowledge.) <|cite_end|>. More results <|cite_start|> (Reference: Active learning of nondeterministic finite state machines: We consider the problem of learning nondeterministic finite state machines (NFSMs) from systems where their internal structures are implicit and nondeterministic. Recently, an algorithm for inferring observable NFSMs (ONFSMs), which are the potentially learnable subclass of NFSMs, has been proposed based on the hypothesis that the complete testing assumption is satisfied. According to this assumption, with an input sequence (query), the complete set of all possible output sequences is given by the so-called Teacher, so the number of times for asking the same query is not taken into account in the algorithm. In this paper, we propose , a refined ONFSM learning algorithm that considers the amount for repeating the same query as one parameter. Unlike the previous work, our approach does not require all possible output sequences in one answer. 
Instead, it tries to observe the possible output sequences by asking the same query many times to the Teacher. We have proved that can infer the corresponding ONFSMs of the unknown systems when the number of tries for the same query is adequate to guarantee the complete testing assumption. Moreover, the proof shows that our algorithm will eventually terminate no matter whether the assumption is fulfilled or not. We also present the theoretical time complexity analysis of . In addition, experimental results demonstrate the practical efficiency of our approach.) <|cite_end|> were obtained by using active learning and queries. Relatively little research has been done that attempts to learn nondeterministic automata from text only. In the general case it can be proven that such a task is impossible. The only positive results known so far were for algorithms like OSTIA <|cite_start|> (Reference: Learning subsequential transducers for pattern recognition interpretation tasks: A formalization of the transducer learning problem and an effective and efficient method for the inductive learning of an important class of transducers, the class of subsequential transducers, are presented. The capabilities of subsequential transductions are illustrated through a series of experiments that also show the high effectiveness of the proposed learning method in obtaining very accurate and compact transducers for the corresponding tasks. >) <|cite_end|>, RPNI <|cite_start|> (Reference: INFERRING REGULAR LANGUAGES IN POLYNOMIAL UPDATED TIME: ) <|cite_end|> and its derivatives, but they assumed determinism. The algorithm in this paper presents a generalisation of the two previous algorithms that relaxes the assumption of determinism. Here we only assume the transducer to be functional <|cite_start|> (Reference: Multitape automata and finite state transducers with lexicographic weights: Finite state transducers, multitape automata and weighted automata have a lot in common. By studying their universal foundations, one can discover some new insights into all of them. The main result presented here is the introduction of lexicographic finite state transducers, that could be seen as intermediate model between multitape automata and weighted transducers. Their most significant advantage is being equivalent, but often exponentially smaller than even smallest nondeterministic automata without weights. Lexicographic transducers were discovered by taking inspiration from Eilenberg's algebraic approach to automata and Solomonoff's treatment of a priori probability. Therefore, a quick and concise survey of those topics is presented, prior to introducing lexicographic transducers.) <|cite_end|> and locally prefix-preserving. <|paper_end|> | [
"<|reference_start|> Handbook of Weighted Automata: <|reference_end|>",
"<|reference_start|> Weighted Finite-State Transducer Algorithms. An Overview: <|reference_end|>",
"<|reference_start|> Active learning of nondeterministic finite state machines: We consider the problem of learning nondeterministic finite state machines (NFSMs) from systems where their internal structures are implicit and nondeterministic. Recently, an algorithm for inferring observable NFSMs (ONFSMs), which are the potentially learnable subclass of NFSMs, has been proposed based on the hypothesis that the complete testing assumption is satisfied. According to this assumption, with an input sequence (query), the complete set of all possible output sequences is given by the so-called Teacher, so the number of times for asking the same query is not taken into account in the algorithm. In this paper, we propose , a refined ONFSM learning algorithm that considers the amount for repeating the same query as one parameter. Unlike the previous work, our approach does not require all possible output sequences in one answer. Instead, it tries to observe the possible output sequences by asking the same query many times to the Teacher. We have proved that can infer the corresponding ONFSMs of the unknown systems when the number of tries for the same query is adequate to guarantee the complete testing assumption. Moreover, the proof shows that our algorithm will eventually terminate no matter whether the assumption is fulfilled or not. We also present the theoretical time complexity analysis of . In addition, experimental results demonstrate the practical efficiency of our approach. <|reference_end|>",
"<|reference_start|> INFERRING REGULAR LANGUAGES IN POLYNOMIAL UPDATED TIME: <|reference_end|>"
] | [
0,
2,
5,
7
] | {"<|cite_1|>": "ss-1541355", "<|cite_2|>": "ss-885614", "<|cite_3|>": "ss-1131302", "<|cite_4|>": "ss-1422698", "<|cite_5|>": "arxiv-60677", "<|cite_6|>": "ss-1309559", "<|cite_7|>": "ss-1012621", "<|cite_8|>": "ss-1309560", "<|cite_10|>": "arxiv-280769"} |
1909.01440 | <|paper_start|> Title: LCA: Loss Change Allocation for Neural Network Training
Abstract: LCA: Loss Change Allocation for Neural Network Training: Neural networks enjoy widespread use, but many aspects of their training, representation, and operation are poorly understood. In particular, our view into the training process is limited, with a single scalar loss being the most common viewport into this high-dimensional, dynamic process. We propose a new window into training called Loss Change Allocation (LCA), in which credit for changes to the network loss is conservatively partitioned to the parameters. This measurement is accomplished by decomposing the components of an approximate path integral along the training trajectory using a Runge-Kutta integrator. This rich view shows which parameters are responsible for decreasing or increasing the loss during training, or which parameters "help" or "hurt" the network's learning, respectively. LCA may be summed over training iterations and/or over neurons, channels, or layers for increasingly coarse views. This new measurement device produces several insights into training. (1) We find that barely over 50% of parameters help during any given iteration. (2) Some entire layers hurt overall, moving on average against the training gradient, a phenomenon we hypothesize may be due to phase lag in an oscillatory training process. (3) Finally, increments in learning proceed in a synchronized manner across layers, often peaking on identical iterations.
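As a minimal sketch of the idea (not the authors' implementation): between two consecutive iterates the loss change can be split across parameters via a discretized path integral; the first-order, left-endpoint rule below stands in for the Runge-Kutta integrator used in the paper, and the model, data, and hyperparameters are toy placeholders:
\begin{verbatim}
# First-order Loss Change Allocation on a toy model:
#   LCA_i ~= dL/dtheta_i(theta_t) * (theta_{t+1,i} - theta_{t,i})
import torch

torch.manual_seed(0)
X, y = torch.randn(64, 10), torch.randn(64, 1)
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

theta_before = [p.detach().clone() for p in model.parameters()]
loss_before = loss_fn(model(X), y)
opt.zero_grad()
loss_before.backward()
grads = [p.grad.detach().clone() for p in model.parameters()]
opt.step()
theta_after = [p.detach().clone() for p in model.parameters()]

# Per-parameter allocation; negative entries "helped" (pushed the loss down).
lca = [g * (after - before)
       for g, after, before in zip(grads, theta_after, theta_before)]
allocated = sum(t.sum() for t in lca)
with torch.no_grad():
    actual = loss_fn(model(X), y) - loss_before
print(float(allocated), float(actual))     # should roughly agree for small steps

helping = torch.cat([t.flatten() for t in lca]) < 0
print(float(helping.float().mean()))       # fraction of parameters that "helped"
\end{verbatim}
Summing these per-parameter allocations over iterations, or over neurons, channels, or layers, gives the coarser views described below.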
Introduction
\seclabel{introduction}
\vspace*{-0.5em}
In the common stochastic gradient descent (SGD) training setup, a parameterized model is iteratively updated using gradients computed from mini-batches of data chosen from some training set.
Unfortunately, our view into the high-dimensional, dynamic training process is often limited to watching a scalar loss quantity decrease over time.
There has been much research attempting to understand neural network training, with some work studying
geometric properties of the objective function <|cite_start|> (Reference: Qualitatively characterizing neural network optimization problems: Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.) <|cite_end|> <|cite_start|> (Reference: Visualizing the Loss Landscape of Neural Nets: Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.) <|cite_end|> <|cite_start|> (Reference: No bad local minima: Data independent training error guarantees for multilayer neural networks: We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. We prove that for a MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization. We then extend these results to the case of more than one hidden layer. Our theoretical guarantees assume essentially nothing on the training data, and are verified numerically. These results suggest why the highly non-convex loss of such MNNs can be easily optimized using local updates (e.g., stochastic gradient descent), as observed empirically.) <|cite_end|> <|cite_start|> (Reference: On the Quality of the Initial Basin in Overspecified Neural Networks: Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. 
In this work, we study the \emph{geometric} structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger ("overspecified") networks, which accords with some recent empirical and theoretical observations.) <|cite_end|> <|cite_start|> (Reference: The loss surface of deep and wide neural networks: While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true, in fact almost all local minima are globally optimal, for a fully connected network with squared loss and analytic activation function given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal.) <|cite_end|>,
properties of whole networks and individual layers at convergence <|cite_start|> (Reference: The Loss Surfaces of Multilayer Networks: We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior as the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks where for the latter poor quality local minima have non-zero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting.) <|cite_end|> <|cite_start|> (Reference: Qualitatively characterizing neural network optimization problems: Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.) <|cite_end|> <|cite_start|> (Reference: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima: The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say $32$-$512$ data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. 
We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.) <|cite_end|> <|cite_start|> (Reference: Are All Layers Created Equal?: Understanding deep neural networks is a major research objective with notable experimental and theoretical attention in recent years. The practical success of excessively large networks underscores the need for better theoretical analyses and justifications. In this paper we focus on layer-wise functional structure and behavior in overparameterized deep models. To do so, we study empirically the layers' robustness to post-training re-initialization and re-randomization of the parameters. We provide experimental results which give evidence for the heterogeneity of layers. Morally, layers of large deep neural networks can be categorized as either "robust" or "critical". Resetting the robust layers to their initial values does not result in adverse decline in performance. In many cases, robust layers hardly change throughout training. In contrast, re-initializing critical layers vastly degrades the performance of the network with test error essentially dropping to random guesses. Our study provides further evidence that mere parameter counting or norm calculations are too coarse in studying generalization of deep models, and "flatness" and robustness analysis of trained models need to be examined while taking into account the respective network architectures.) <|cite_end|>,
and
neural network training from an optimization perspective <|cite_start|> (Reference: On the Importance of Initialization and Momentum in Deep Learning: Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned.
Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods.) <|cite_end|> <|cite_start|> (Reference: The Loss Surfaces of Multilayer Networks: We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior as the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks where for the latter poor quality local minima have non-zero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting.) <|cite_end|> <|cite_start|> (Reference: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization: A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance.) 
<|cite_end|> <|cite_start|> (Reference: Optimization Methods for Large-Scale Machine Learning: This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.) <|cite_end|> <|cite_start|> (Reference: Measuring the Intrinsic Dimension of Objective Landscapes: Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.) <|cite_end|>.
This body of work in aggregate provides rich insight into the loss landscape arising from typical combinations of neural network architectures and datasets.
Literature on the dynamics of the training process itself is sparser, but a few salient works examine
the learning phase through the diagonal of the Hessian, mutual information between input and output, and other measures <|cite_start|> (Reference: Critical Learning Periods in Deep Neural Networks: Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of "Information Plasticity". Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constrains arising from learning dynamics and information processing.) <|cite_end|> <|cite_start|> (Reference: Opening the Black Box of Deep Neural Networks via Information: Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the \textit{Information Plane}; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on {\emph compression} of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. 
(iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.) <|cite_end|> <|cite_start|> (Reference: On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length: Stochastic Gradient Descent (SGD) based training of neural networks with a large learning rate or a small batch-size typically ends in well-generalizing, flat regions of the weight space, as indicated by small eigenvalues of the Hessian of the training loss. However, the curvature along the SGD trajectory is poorly understood. An empirical investigation shows that initially SGD visits increasingly sharp regions, reaching a maximum sharpness determined by both the learning rate and the batch-size of SGD. When studying the SGD dynamics in relation to the sharpest directions in this initial phase, we find that the SGD step is large compared to the curvature and commonly fails to minimize the loss along the sharpest directions. Furthermore, using a reduced learning rate along these directions can improve training speed while leading to both sharper and better generalizing solutions compared to vanilla SGD. In summary, our analysis of the dynamics of SGD in the subspace of the sharpest directions shows that they influence the regions that SGD steers to (where larger learning rate or smaller batch size result in wider regions visited), the overall training speed, and the generalization ability of the final model.) <|cite_end|>.
In this paper we propose a simple approach to inspecting training in progress by decomposing changes in the overall network loss into a per-parameter \emph{Loss Change Allocation} or \emph{LCA}.
The procedure for computing LCA is straightforward, but to our knowledge it has not previously been employed for investigating network training.
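As a preview of the formal definition in \secref{approach}, the following minimal sketch (in Python, with illustrative names; the first-order form shown here is only one natural instantiation and not necessarily the exact estimator used in our experiments) credits each parameter with the product of its gradient and its movement over one optimization step, so that the per-parameter allocations approximately sum to the overall change in loss.
\begin{verbatim}
import numpy as np

def loss_change_allocation(theta_before, theta_after, grad_at_before):
    # First-order sketch: L(t+1) - L(t) ~ sum_i g_i * (theta'_i - theta_i),
    # so parameter i is allocated the term g_i * (theta'_i - theta_i).
    return grad_at_before * (theta_after - theta_before)

# Toy usage on the quadratic loss L(theta) = 0.5 * ||theta||^2, one SGD step.
theta = np.array([1.0, -2.0, 0.5])
grad = theta.copy()                 # gradient of the toy loss at theta
theta_next = theta - 0.1 * grad     # one gradient-descent step
lca = loss_change_allocation(theta, theta_next, grad)
print(lca)        # negative entries correspond to parameters that helped
print(lca.sum())  # approximates the total loss change on this step
\end{verbatim}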
We begin by defining this measure in more detail, and then apply it to reveal several interesting properties of neural network training. Our contributions are as follows:
\begin{enumerate}
\item We define the Loss Change Allocation as a per-parameter, per-iteration decomposition of changes to the overall network loss (\secref{approach}). Exploring network training with this measurement tool uncovers the following insights.
\item Learning is very noisy, with only slightly over half of the parameters helping to reduce the loss on any given iteration (\secref{noise}); a brief measurement sketch follows this list.
\item Some \emph{entire layers} consistently drift in the wrong direction during training, on average moving \emph{against} the gradient.
We propose and test an explanation that these layers are slightly out of phase, lagging behind other layers during training (\secref{hurtinglayers}).
\item We contribute new evidence to
suggest that the learning progress is, on a microscopic level, \emph{synchronized} across layers, with small peaks of learning often occurring at the same iteration for all layers (\secref{synchronized}).
\end{enumerate}
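As referenced in item 2, the following snippet (Python, with illustrative names; it assumes per-iteration LCA vectors such as those produced by the sketch above) shows the kind of simple aggregations behind items 2--4: the fraction of parameters with negative LCA on each iteration, and per-layer LCA sums whose sign identifies layers that, on average, move against the gradient.
\begin{verbatim}
import numpy as np

def helping_fraction(lca):
    # Fraction of parameters with negative LCA (i.e., that reduced the loss)
    # on each iteration; lca has shape (iterations, parameters).
    return (lca < 0).mean(axis=1)

def per_layer_lca(lca, layer_slices):
    # Sum LCA within each layer per iteration; a positive average for a layer
    # indicates that, on average, it moved against the gradient.
    return np.stack([lca[:, s].sum(axis=1) for s in layer_slices], axis=1)

# Toy usage: 100 iterations, 10 parameters split into two "layers".
rng = np.random.default_rng(0)
lca = rng.normal(loc=-0.01, scale=1.0, size=(100, 10))
print(helping_fraction(lca).mean())             # typically just above 0.5
print(per_layer_lca(lca, [slice(0, 5), slice(5, 10)]).mean(axis=0))
\end{verbatim}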
\vspace*{-1em} <|paper_end|> | [
"<|reference_start|> The loss surface of deep and wide neural networks: While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true, in fact almost all local minima are globally optimal, for a fully connected network with squared loss and analytic activation function given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal. <|reference_end|>",
"<|reference_start|> On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima: The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say $32$-$512$ data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap. <|reference_end|>",
"<|reference_start|> Opening the Black Box of Deep Neural Networks via Information: Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the \\textit{Information Plane}; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on {\\emph compression} of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer. <|reference_end|>",
"<|reference_start|> On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length: Stochastic Gradient Descent (SGD) based training of neural networks with a large learning rate or a small batch-size typically ends in well-generalizing, flat regions of the weight space, as indicated by small eigenvalues of the Hessian of the training loss. However, the curvature along the SGD trajectory is poorly understood. An empirical investigation shows that initially SGD visits increasingly sharp regions, reaching a maximum sharpness determined by both the learning rate and the batch-size of SGD. When studying the SGD dynamics in relation to the sharpest directions in this initial phase, we find that the SGD step is large compared to the curvature and commonly fails to minimize the loss along the sharpest directions. Furthermore, using a reduced learning rate along these directions can improve training speed while leading to both sharper and better generalizing solutions compared to vanilla SGD. In summary, our analysis of the dynamics of SGD in the subspace of the sharpest directions shows that they influence the regions that SGD steers to (where larger learning rate or smaller batch size result in wider regions visited), the overall training speed, and the generalization ability of the final model. <|reference_end|>"
] | [
4,
7,
15,
16
] | {"<|multi_cite_1_1|>": "arxiv-70541", "<|multi_cite_1_2|>": "arxiv-144175", "<|multi_cite_1_3|>": "arxiv-98756", "<|multi_cite_1_4|>": "arxiv-87126", "<|multi_cite_1_5|>": "arxiv-122625", "<|multi_cite_2_1|>": "arxiv-69484", "<|multi_cite_2_2|>": "arxiv-70541", "<|multi_cite_2_3|>": "arxiv-105889", "<|multi_cite_2_4|>": "arxiv-190454", "<|multi_cite_3_1|>": "ss-1369073", "<|multi_cite_3_2|>": "arxiv-69484", "<|multi_cite_3_3|>": "arxiv-62045", "<|multi_cite_3_4|>": "arxiv-100186", "<|multi_cite_3_5|>": "arxiv-156107", "<|multi_cite_4_1|>": "arxiv-141093", "<|multi_cite_4_2|>": "arxiv-118029", "<|multi_cite_4_3|>": "arxiv-165799"} |
1506.06715-0 | <|paper_start|> Title: Randomized Composable Core-sets for Distributed Submodular Maximization
Abstract: Randomized Composable Core-sets for Distributed Submodular Maximization: An effective technique for solving optimization problems over massive data sets is to partition the data into smaller pieces, solve the problem on each piece and compute a representative solution from it, and finally obtain a solution inside the union of the representative solutions for all pieces. This technique can be captured via the concept of {\em composable core-sets}, and has been recently applied to solve diversity maximization problems as well as several clustering problems. However, for coverage and submodular maximization problems, impossibility bounds are known for this technique \cite{IMMM14}. In this paper, we focus on efficient construction of a randomized variant of composable core-sets where the above idea is applied on a {\em random clustering} of the data. We employ this technique for the coverage, monotone and non-monotone submodular maximization problems. Our results significantly improve upon the hardness results for non-randomized core-sets, and imply improved results for submodular maximization in a distributed and streaming settings. In summary, we show that a simple greedy algorithm results in a $1/3$-approximate randomized composable core-set for submodular maximization under a cardinality constraint. This is in contrast to a known $O({\log k\over \sqrt{k}})$ impossibility result for (non-randomized) composable core-set. Our result also extends to non-monotone submodular functions, and leads to the first 2-round MapReduce-based constant-factor approximation algorithm with $O(n)$ total communication complexity for either monotone or non-monotone functions. Finally, using an improved analysis technique and a new algorithm $\mathsf{PseudoGreedy}$, we present an improved $0.545$-approximation algorithm for monotone submodular maximization, which is in turn the first MapReduce-based algorithm beating factor $1/2$ in a constant number of rounds.
Introduction
An effective way of processing massive data is to first extract a compact representation of the data
and then perform further processing only on the representation itself.
This approach significantly reduces the cost of processing, communicating and storing the data, as the representation size can be much smaller than the size of the original data set. Typically, the representation provides a smooth tradeoff between its size and the representation accuracy.
Examples of this approach include techniques such as sampling, sketching, (composable) core-sets and mergeable summaries.
Among these techniques, the concept of composable core-sets has been employed in several distributed optimization models such as nearest neighbor search <|cite_start|> (Reference: Diverse Near Neighbor Problem: Motivated by the recent research on diversity-aware search, we investigate the k-diverse near neighbor reporting problem. The problem is defined as follows: given a query point q, report the maximum diversity set S of k points in the ball of radius r around q. The diversity of a set S is measured by the minimum distance between any pair of points in $S$ (the higher, the better). We present two approximation algorithms for the case where the points live in a d-dimensional Hamming space. Our algorithms guarantee query times that are sub-linear in n and only polynomial in the diversity parameter k, as well as the dimension d. For low values of k, our algorithms achieve sub-linear query times even if the number of points within distance r from a query $q$ is linear in $n$. To the best of our knowledge, these are the first known algorithms of this type that offer provable guarantees.) <|cite_end|>, and the streaming and MapReduce models <|cite_start|> (Reference: Distributed clustering on graphs: This paper provides new algorithms for distributed clustering for two popular center-based objectives, k-median and k-means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by [13], we reduce the problem of finding a clustering with low cost to the problem of finding a ‘coreset’ of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. Experiment results on large scale data sets show that this approach outperforms other coreset-based distributed clustering algorithms.) <|cite_end|> <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) 
<|cite_end|> <|cite_start|> (Reference: Streaming submodular maximization: Massive data summarization on the fly: How can one summarize a massive data set "on the fly", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of "representativeness" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor 1/2-ε approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.) <|cite_end|>.
Roughly speaking, the main idea behind this technique is as follows: First partition the data into smaller parts. Then compute a representative solution, referred to as a {\em core-set}, from each part.
Finally, obtain a solution by solving the optimization problem over the union of core-sets for all parts. While this technique has been successfully applied to diversity maximization and clustering problems <|cite_start|> (Reference: Distributed clustering on graphs: This paper provides new algorithms for distributed clustering for two popular center-based objectives, k-median and k-means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by [13], we reduce the problem of finding a clustering with low cost to the problem of finding a ‘coreset’ of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. Experiment results on large scale data sets show that this approach outperforms other coreset-based distributed clustering algorithms.) <|cite_end|> <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>, for coverage and submodular maximization problems, impossibility bounds are known for this technique <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. 
Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>.
In this paper, we focus on the efficient construction of a randomized variant of composable core-sets where the above idea is applied to a {\em random clustering} of the data. We employ this technique for the coverage, monotone, and non-monotone submodular maximization problems. Our results significantly improve upon the hardness results for non-randomized core-sets, and imply improved results for submodular maximization in distributed and streaming settings. The effectiveness of this technique has been confirmed empirically for several machine learning applications, and our proof provides a theoretical foundation for this idea. Let us first define this concept, and then discuss its applications and our results.
\subsection{Preliminaries} \label{sec:prelim}
Here, we discuss the formal problem definition, and the distributed model motivating it.
{\bf \noindent Submodular Functions.}
We start by defining submodular functions~\footnote{ While the concepts in this paper can be applied to other set functions, we focus on maximizing submodular set functions.}.
Let $\NN$ be a ground set of items with cardinality $n=\vert \NN\vert$.
Consider a set function $f: 2^{\NN} \rightarrow \RR^+\cup\{0\}$.
We say function $f$ is monotone if for any two subsets $X \subseteq Y \subseteq \NN$, $f(X) \le f(Y)$.
We say function $f$ is submodular if and only if for any two subsets $X \subseteq Y \subseteq \NN$, and an item $x \in \NN \setminus Y$, we have the property of diminishing returns, i.e.,
$$ f(X \cup \{x\}) - f(X) \geq f(Y \cup \{x\}) - f(Y).$$
Given an integer size constraint $k$, we let $f_k$ be $$f_k(S)\defeq \max_{S'\subseteq S, \vert S'\vert \le k} f(S').$$
The submodular maximization problem with a cardinality constraint is as follows:
given a parameter $k$ and value oracle access to a non-negative submodular function $f: 2^\NN\rightarrow \RR^+\cup\{0\}$, find a subset $S$ of cardinality at most $k$ with maximum value $f(S)$.
The most common algorithm for solving the above problem is algorithm $\greedy$, which proceeds as follows: start from an empty set $S=\emptyset$, and in $k$ iterations, find an item $x$ with maximum marginal $f$ value for $S$ (i.e., $x=\argmax_{y\in \NN} f(S\cup\{y\}) - f(S)$) and add this item $x$ to $S$. We refer to this algorithm as algorithm $\greedy$ and note that it is a $(1-{1\over e})$-approximation for the monotone submodular maximization problem with a cardinality constraint.
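For concreteness, the sketch below (in Python, with illustrative names; a toy coverage objective stands in for general value-oracle access) implements algorithm $\greedy$ exactly as just described.
\begin{verbatim}
def greedy(items, f, k):
    # Greedy for maximizing a submodular f under a cardinality constraint k:
    # repeatedly add the item with the largest marginal value f(S+x) - f(S).
    S = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for x in items:
            if x in S:
                continue
            gain = f(S | {x}) - f(S)
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:
            break
        S.add(best)
    return S

# Toy coverage instance: f(S) is the size of the union of the chosen subsets.
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy(sets.keys(), coverage, k=2))  # picks {'A', 'C'}, covering 6 elements
\end{verbatim}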
{\bf \noindent Randomized Composable Core-sets.} In this paper, we assume that all $n$ items of $\NN$ do not fit on one machine, and
we need to apply a distributed algorithm to solve the submodular maximization problem.
To deal with this issue,
we consider distributing items of $\NN$ into $m$ machines with indices $\{1, \ldots, m\}$,
where each item goes to $C$ randomly chosen machines.
Let $\{T_1, T_2, \ldots, T_m\}$ be subsets of items going to machines $\{1, 2, \ldots, m\}$ respectively. In this case, we say that $\{T_1, T_2, \ldots, T_m\}$ is a {\em random clustering of $\NN$ with multiplicity $C$}, i.e., $\{T_1, T_2, \ldots, T_m\}$ is a family of subsets $T_i\subseteq \NN$, where each item of $\NN$ is assigned to $C$ randomly chosen subsets in this family.
Note that $T_i$'s are not necessarily disjoint subsets of items.
Only the case of $C=1$ corresponds to a random partitioning of items into $m$ disjoint parts. This case
is the most natural way of applying this idea, and is studied in Section~\ref{sec:rand-core-set}. As we see later, higher values of $C$ can help us achieve better approximation factors (See Section~\ref{sec:linear}). We are now ready to formally define randomized composable core-sets.
\begin{definition}
Consider an algorithm $\alg$ that given any subset $T\subseteq \NN$ returns a subset ${\alg}(T) \subseteq T$ with size at most $k'$. Let $\{T_1, T_2, \ldots, T_m\}$ be a
random clustering of $\NN$ to $m$ subsets with multiplicity $C$.
We say that algorithm $\alg$ implements an
{\em $\alpha$-approximate randomized composable core-set of size $k'$ with multiplicity $C$ for $f$ and cardinality constraint parameter $k$} if
$$ \E \left [ f_k(\alg(T_1) \cup \ldots \cup \alg(T_m)) \right ] \ge {\alpha} \cdot \E\left [ f_k(T_1 \cup \ldots \cup T_m)\right ],$$
where the expectation is taken over the random choice of $\{T_1, T_2, \ldots, T_m\}$. For brevity, instead of saying that $\alg$ implements a composable core-set, we say that $\alg$ is an $\alpha$-approximate randomized composable core-set.
\end{definition}
For ease of notation, when it is clear from the context, we may drop the term composable, and refer to composable core-sets as core-sets. Throughout this paper, we discuss randomized composable core-sets for the submodular maximization problem with a cardinality constraint $k$.
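To make the random clustering underlying this definition concrete, the following sketch (Python, with illustrative names) assigns every item to $C$ machines chosen uniformly at random, so that $C=1$ recovers a random partitioning into disjoint parts.
\begin{verbatim}
import random

def random_clustering(items, m, C, seed=None):
    # Assign each item to C distinct machines chosen uniformly at random and
    # return the subsets T_1, ..., T_m (not necessarily disjoint when C > 1).
    rng = random.Random(seed)
    T = [set() for _ in range(m)]
    for x in items:
        for machine in rng.sample(range(m), C):
            T[machine].add(x)
    return T

# Example: 20 items spread over 4 machines with multiplicity C = 2.
parts = random_clustering(range(20), m=4, C=2, seed=1)
print([sorted(p) for p in parts])
\end{verbatim}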
{\bf \noindent Distributed Approximation Algorithm.}
Note that we can use a randomized $\alpha$-approximate composable core-set algorithm $\alg$ to design the following simple distributed $(1-{1\over e})\alpha$-approximation algorithm for monotone submodular maximization:
\begin{enumerate}
\item In the first phase, following the random clustering $\{T_1, \ldots, T_m\}$ defined above, allocate items in $\NN$ to $m$ machines, i.e., machine $i$ gets the subset $T_i$ of items.
\item Each machine $i$ computes a randomized composable core-set $S_i\subseteq T_i$ of size $k'$, i.e., $S_i=\alg(T_i)$ for each $1\le i\le m$.
\item In the second phase, first collect the union of all core-sets, $U=\cup_{1\le i\le m} S_i$, on one machine. Then apply a {\em post-processing} $(1-{1\over e})$-approximation algorithm (e.g., algorithm $\greedy$) to compute a solution $S$ to the submodular maximization problem over the set $U$. Output $S$.
\end{enumerate}
It follows from the definition of the $\alpha$-approximate randomized composable core-set that the above algorithm is a distributed $(1-{1\over e})\alpha$-approximation algorithm for submodular maximization problem. We refer to this two-phase algorithmic approach as {\em the distributed algorithm}, and the overall approximation factor of the distributed algorithm as the {\em distributed approximation factor}. For all our algorithms in this paper, in addition to presenting an algorithm that achieves an approximation factor $\alpha$ as a randomized composable core-set, we propose a post-processing algorithm for the second phase, and present an improved analysis that achieves much better than $(1-{1\over e})\alpha$-approximation as the distributed approximation factor.
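The two-phase procedure can be simulated on a single machine as in the following sketch (Python, with illustrative names; it reuses the greedy and random_clustering helpers sketched earlier and uses core-sets of size $k'=k$). It is meant only as an illustration of the scheme, not a production implementation.
\begin{verbatim}
def distributed_submodular_max(items, f, k, m, C=1, seed=None):
    # Phase 1: random clustering, then a greedy core-set of size k per machine.
    parts = random_clustering(items, m, C, seed)
    core_sets = [greedy(T, f, k) for T in parts]
    # Phase 2: collect the union of the core-sets and post-process with greedy.
    union = set().union(*core_sets)
    return greedy(union, f, k)

# Toy coverage instance reusing the helpers from the earlier sketches.
universe_sets = {i: {i % 7, (2 * i) % 7, (3 * i) % 7} for i in range(30)}
f = lambda S: len(set().union(*(universe_sets[i] for i in S))) if S else 0
print(distributed_submodular_max(list(universe_sets), f, k=3, m=3, seed=0))
\end{verbatim}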
Note that the above algorithm can be implemented in a distributed manner only if $k'$ is small enough such that $mk'$ items can be processed on one machine. In all our results the size of the composable core-set, $k'$, is a function of the cardinality constraint, $k$: In particular, in Section \ref{sec:rand-core-set}, we apply a composable core-set of size $k'=k$. In Section \ref{sec:linear}, we apply a composable core-set of size $k'<4k$, and as a result, achieve a better approximation factor.
We call a core-set {\em a small-size core-set} if its size $k'$ is less than $k$ (see Section~\ref{sec:small}). As we will see, the hardness results for small-size core-sets are much stronger than those for core-sets of size $k$ or larger.
{\bf \noindent Non-randomized Composable Core-sets.} The above definition for randomized composable core-sets is introduced in this paper. Prior work <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>define a non-randomized variant of composable core-sets where the above property holds for any (arbitrary) partitioning $\{T_1, T_2, \ldots, T_m\}$ of data into $m$ parts~\footnote{It is not hard to see that for non-randomized composable core-set, the multiplicity parameter $C$ is not relevant.}, i.e., an algorithm $\alg$ as described above is a {\em $\alpha$-approximate (non-randomized) composable core-set of size $k'$ for $f$}, if for any cardinality constraint $k$, and any arbitrary partitioning $\{T_1, T_2, \ldots, T_m\}$ of the items into $m$ sets, we have
$ f_k(\alg(T_1) \cup \ldots \cup \alg(T_m)) \ge {\alpha} \cdot f_k(T_1 \cup \ldots \cup T_m)$.
\subsection{ Applications and Motivations}
An $\alpha$-approximate randomized composable core-set of size $k'=O(k)$ for a problem can be applied in three types of applications <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>\footnote{These results assume $k\le n^{1-\epsilon}$ for a constant $\epsilon$.}:
(i) in distributed computation <|cite_start|> (Reference: MapReduce: Simplified data processing on large clusters: MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.) <|cite_end|>, where it implies an $\alpha$-approximation in one or two rounds of MapReduces using the total communication complexity of $O(n)$,
(ii) in the random-order streaming model, where it implies an $\alpha$-approximation algorithm in one pass using sublinear memory,
(iii) in a class of approximate nearest neighbor search problems, where it implies an $\alpha$-approximation algorithm based on locality-sensitive hashing (under an assumption).
Here, we discuss the application for the MapReduce and Streaming framework, and for details of the approximate nearest neighbor application, we refer to <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>.
We first show how to use a randomized composable core-set of size $O(k)$ to design a distributed algorithm in one or two rounds of MapReduces~\footnote{The straightforward way of applying the ideas will result in two rounds
of MapReduce. However, if we assume that the data is originally sharded randomly and each part is in a single shard, and the memory for each machine is more than the size
of each shard, then it can be implemented via one round of MapReduce computation.} using linear total communication complexity:
Let $m=\sqrt{n/k}$, and let $(T_1, \ldots, T_m)$ be a random partitioning where $T_i$ has $\sqrt{kn}$ items. In the distributed algorithm, we assume that the random partitioning is produced in one round of MapReduce where each of the $m$ reducers receives $T_i$ as input, and produces a core-set $S_i$ for the next round. Alternatively, we may assume that the data (or the items) are distributed uniformly at random among machines, or, similarly, that each of the $m$ mappers receives $T_i$ as input, and produces a core-set $S_i$ for the reducer.
reducer in the first or the second round. The total input to the reducer, i.e., the union of the core-sets,
is of size at most $mk'=O(k)\sqrt{n/k}=O(\sqrt{kn})$.
The solution computed by the reducer for the union of the core-sets is, by definition,
a good approximation to the original problem. It is easy to see that the total communication complexity of this algorithm is $O(n)$, and this computation can be performed in one or two rounds as formally defined in the MapReduce computation model <|cite_start|> (Reference: A model of computation for mapreduce: In recent years the MapReduce framework has emerged as one of the most widely used parallel computing platforms for processing data on terabyte and petabyte scales. Used daily at companies such as Yahoo!, Google, Amazon, and Facebook, and adopted more recently by several universities, it allows for easy parallelization of data intensive computations over many machines. One key feature of MapReduce that differentiates it from previous models of parallel computation is that it interleaves sequential and parallel computation. We propose a model of efficient computation using the MapReduce paradigm. Since MapReduce is designed for computations over massive data sets, our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, we place very loose restrictions on the computational power of of any individual machine---our model allows each machine to perform sequential computations in time polynomial in the size of the original input.
We compare MapReduce to the PRAM model of computation. We prove a simulation lemma showing that a large class of PRAM algorithms can be efficiently simulated via MapReduce. The strength of MapReduce, however, lies in the fact that it uses both sequential and parallel computation. We demonstrate how algorithms can take advantage of this fact to compute an MST of a dense graph in only two rounds, as opposed to Ω(log(n)) rounds needed in the standard PRAM model. We show how to evaluate a wide class of functions using the MapReduce framework. We conclude by applying this result to show how to compute some basic algorithmic problems such as undirected s-t connectivity in the MapReduce framework.) <|cite_end|>.
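As an illustrative numerical example of these sizes (the numbers are hypothetical, chosen only to make the asymptotics concrete): with $n=10^9$ items and $k=10^4$, the algorithm uses $m=\sqrt{n/k}\approx 316$ machines, each part $T_i$ holds $\sqrt{kn}\approx 3.2\times 10^6$ items, and the union of the core-sets sent to the final reducer contains at most $mk'=O(\sqrt{kn})\approx 3.2\times 10^6$ items, which is far smaller than the input while keeping the total communication at $O(n)$.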
Next, we elaborate on the application for a streaming computation model: In the random-order data stream model, a random sequence of $n$ data points needs to be processed ``on-the-fly'' while using only limited storage. An algorithm for a randomized composable core-set can be easily used to obtain an algorithm for this setting <|cite_start|> (Reference: Clustering Data Streams: We study clustering under the data stream model of computation where: given a sequence of points, the objective is to maintain a consistently good clustering of the sequence observed so far, using a small amount of memory and time. The data stream model is relevant to new classes of applications involving massive data sets, such as Web click stream analysis and multimedia data analysis. We give constant-factor approximation algorithms for the k-median problem in the data stream model of computation in a single pass. We also show negative results implying that our algorithms cannot be improved in a certain sense.) <|cite_end|> <|cite_start|> (Reference: Approximating extent measures of points: We present a general technique for approximating various descriptors of the extent of a set <i>P</i> of <i>n</i> points in R<sup><i>d</i></sup> when the dimension <i>d</i> is an arbitrary fixed constant. For a given extent measure μ and a parameter ϵ > 0, it computes in time <i>O</i>(<i>n</i> + 1/ϵ<sup><i>O</i>(1)</sup>) a subset <i>Q</i> ⊆ <i>P</i> of size 1/ϵ<sup><i>O</i>(1)</sup>, with the property that (1 − ϵ)μ(<i>P</i>) ≤ μ(<i>Q</i>) ≤ μ(<i>P</i>). The specific applications of our technique include ϵ-approximation algorithms for (i) computing diameter, width, and smallest bounding box, ball, and cylinder of <i>P</i>, (ii) maintaining all the previous measures for a set of moving points, and (iii) fitting spheres and cylinders through a point set <i>P</i>. Our algorithms are considerably simpler, and faster in many cases, than previously known algorithms.) <|cite_end|>\footnote{The paper <|cite_start|> (Reference: Clustering Data Streams: We study clustering under the data stream model of computation where: given a sequence of points, the objective is to maintain a consistently good clustering of the sequence observed so far, using a small amount of memory and time. The data stream model is relevant to new classes of applications involving massive data sets, such as Web click stream analysis and multimedia data analysis. We give constant-factor approximation algorithms for the k-median problem in the data stream model of computation in a single pass. We also show negative results implying that our algorithms cannot be improved in a certain sense.) <|cite_end|>introduced this approach for the special case of $k$-median clustering. More general formulation of this method with other applications appeared in <|cite_start|> (Reference: Approximating extent measures of points: We present a general technique for approximating various descriptors of the extent of a set <i>P</i> of <i>n</i> points in R<sup><i>d</i></sup> when the dimension <i>d</i> is an arbitrary fixed constant. For a given extent measure μ and a parameter ϵ > 0, it computes in time <i>O</i>(<i>n</i> + 1/ϵ<sup><i>O</i>(1)</sup>) a subset <i>Q</i> ⊆ <i>P</i> of size 1/ϵ<sup><i>O</i>(1)</sup>, with the property that (1 − ϵ)μ(<i>P</i>) ≤ μ(<i>Q</i>) ≤ μ(<i>P</i>). 
The specific applications of our technique include ϵ-approximation algorithms for (i) computing diameter, width, and smallest bounding box, ball, and cylinder of <i>P</i>, (ii) maintaining all the previous measures for a set of moving points, and (iii) fitting spheres and cylinders through a point set <i>P</i>. Our algorithms are considerably simpler, and faster in many cases, than previously known algorithms.) <|cite_end|>.}.
In particular, if a randomized composable core-set for a given problem has size $k$, we start by dividing the random stream of data into $\sqrt{n/k}$ blocks of size $s=\sqrt{nk}$. This way, each block will be a random subset of items.
The algorithm then proceeds block by block.
Each block is read and stored in the main memory, its core-set is computed and stored, and the block is deleted.
At the end, the algorithm solves the problem for the union of the core-sets. The whole algorithm takes only $O(\sqrt{kn})$ space.
The storage can be reduced further by utilizing more than one level of compression, at the cost of increasing the approximation factor.
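The block-by-block scheme can be sketched as follows (Python, with illustrative names; a single level of compression, reusing the greedy core-set routine from the earlier sketch).
\begin{verbatim}
import math

def random_order_streaming(stream, f, k, n):
    # One pass over a random-order stream: buffer blocks of size ~sqrt(n*k),
    # compress each block to a greedy core-set of size k, discard the block,
    # and finally solve the problem over the union of the stored core-sets.
    block_size = max(1, math.isqrt(n * k))
    kept, block = set(), []
    for x in stream:
        block.append(x)
        if len(block) == block_size:
            kept |= greedy(block, f, k)
            block = []
    if block:
        kept |= greedy(block, f, k)
    return greedy(kept, f, k)
\end{verbatim}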
Variants of the composable core-set technique have been applied for optimization under MapReduce framework <|cite_start|> (Reference: A model of computation for mapreduce: In recent years the MapReduce framework has emerged as one of the most widely used parallel computing platforms for processing data on terabyte and petabyte scales. Used daily at companies such as Yahoo!, Google, Amazon, and Facebook, and adopted more recently by several universities, it allows for easy parallelization of data intensive computations over many machines. One key feature of MapReduce that differentiates it from previous models of parallel computation is that it interleaves sequential and parallel computation. We propose a model of efficient computation using the MapReduce paradigm. Since MapReduce is designed for computations over massive data sets, our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, we place very loose restrictions on the computational power of of any individual machine---our model allows each machine to perform sequential computations in time polynomial in the size of the original input.
We compare MapReduce to the PRAM model of computation. We prove a simulation lemma showing that a large class of PRAM algorithms can be efficiently simulated via MapReduce. The strength of MapReduce, however, lies in the fact that it uses both sequential and parallel computation. We demonstrate how algorithms can take advantage of this fact to compute an MST of a dense graph in only two rounds, as opposed to Ω(log(n)) rounds needed in the standard PRAM model. We show how to evaluate a wide class of functions using the MapReduce framework. We conclude by applying this result to show how to compute some basic algorithmic problems such as undirected s-t connectivity in the MapReduce framework.) <|cite_end|> <|cite_start|> (Reference: Filtering: a method for solving graph problems in
{MapReduce: The MapReduce framework is currently the de facto standard used throughout both industry and academia for petabyte scale data analysis. As the input to a typical MapReduce computation is large, one of the key requirements of the framework is that the input cannot be stored on a single machine and must be processed in parallel. In this paper we describe a general algorithmic design technique in the MapReduce framework called filtering. The main idea behind filtering is to reduce the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. Using this approach we give new algorithms in the MapReduce framework for a variety of fundamental graph problems for sufficiently dense graphs. Specifically, we present algorithms for minimum spanning trees, maximal matchings, approximate weighted matchings, approximate vertex and edge covers and minimum cuts. In all of these cases, we parameterize our algorithms by the amount of memory available on the machines allowing us to show tradeoffs between the memory available and the number of MapReduce rounds. For each setting we will show that even if the machines are only given substantially sublinear memory, our algorithms run in a constant number of MapReduce rounds. To demonstrate the practical viability of our algorithms we implement the maximal matching algorithm that lies at the core of our analysis and show that it achieves a significant speedup over the sequential version.) <|cite_end|> <|cite_start|> (Reference: Distributed clustering on graphs: This paper provides new algorithms for distributed clustering for two popular center-based objectives, k-median and k-means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by [13], we reduce the problem of finding a clustering with low cost to the problem of finding a ‘coreset’ of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. Experiment results on large scale data sets show that this approach outperforms other coreset-based distributed clustering algorithms.) <|cite_end|> <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. 
The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|> <|cite_start|> (Reference: Parallel Algorithms for Geometric Graph Problems: We give algorithms for geometric graph problems in the modern parallel models inspired by MapReduce. For example, for the Minimum Spanning Tree (MST) problem over a set of points in the two-dimensional space, our algorithm computes a $(1+\epsilon)$-approximate MST. Our algorithms work in a constant number of rounds of communication, while using total space and communication proportional to the size of the data (linear space and near linear time algorithms). In contrast, for general graphs, achieving the same result for MST (or even connectivity) remains a challenging open problem, despite drawing significant attention in recent years. We develop a general algorithmic framework that, besides MST, also applies to Earth-Mover Distance (EMD) and the transportation cost problem. Our algorithmic framework has implications beyond the MapReduce model. For example it yields a new algorithm for computing EMD cost in the plane in near-linear time, $n^{1+o_\epsilon(1)}$. We note that while recently Sharathkumar and Agarwal developed a near-linear time algorithm for $(1+\epsilon)$-approximating EMD, our algorithm is fundamentally different, and, for example, also solves the transportation (cost) problem, raised as an open question in their work. Furthermore, our algorithm immediately gives a $(1+\epsilon)$-approximation algorithm with $n^{\delta}$ space in the streaming-with-sorting model with $1/\delta^{O(1)}$ passes. As such, it is tempting to conjecture that the parallel models may also constitute a concrete playground in the quest for efficient algorithms for EMD (and other similar problems) in the vanilla streaming model, a well-known open problem.) <|cite_end|>. However, none of these previous results formally study the difference between randomized and non-randomized variants and in most cases, they employ non-randomized composable core-sets. Indyk et al. <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. 
The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>observed that the idea of non-randomized composable core-sets cannot be applied to the coverage maximization (or more generally submodular maximization) problems.
In fact, all our hardness results also apply to a class of submodular maximization problems known as the maximum $k$-coverage problems, i.e.,
given a number $k$ and a family of subsets ${\cal A}\subset 2^X$, find a subfamily of $k$ subsets $A_1, \ldots, A_k$ maximizing the size of the union $\vert \cup_{j=1}^k A_j \vert$.
Solving max $k$-coverage and submodular maximization in a distributed manner has attracted a significant amount of research over the last few years <|cite_start|> (Reference: Max-cover in Map-Reduce: The NP-hard Max-k-cover problem requires selecting k sets from a collection so as to maximize the size of the union. This classic problem occurs commonly in many settings in web search and advertising. For moderately-sized instances, a greedy algorithm gives an approximation of (1-1/e). However, the greedy algorithm requires updating scores of arbitrary elements after each step, and hence becomes intractable for large datasets.
We give the first max cover algorithm designed for today's large-scale commodity clusters. Our algorithm has provably almost the same approximation as greedy, but runs much faster. Furthermore, it can be easily expressed in the MapReduce programming paradigm, and requires only polylogarithmically many passes over the data. Our experiments on five large problem instances show that our algorithm is practical and can achieve good speedups compared to the sequential greedy algorithm.) <|cite_end|> <|cite_start|> (Reference: Set cover algorithms for very large datasets: The problem of Set Cover - to find the smallest subcollection of sets that covers some universe - is at the heart of many data and analysis tasks. It arises in a wide range of settings, including operations research, machine learning, planning, data quality and data mining. Although finding an optimal solution is NP-hard, the greedy algorithm is widely used, and typically finds solutions that are close to optimal. However, a direct implementation of the greedy approach, which picks the set with the largest number of uncovered items at each step, does not behave well when the input is very large and disk resident. The greedy algorithm must make many random accesses to disk, which are unpredictable and costly in comparison to linear scans. In order to scale Set Cover to large datasets, we provide a new algorithm which finds a solution that is provably close to that of greedy, but which is much more efficient to implement using modern disk technology. Our experiments show a ten-fold improvement in speed on moderately-sized datasets, and an even greater improvement on larger datasets.) <|cite_end|> <|cite_start|> (Reference: Filtering: a method for solving graph problems in
{MapReduce: The MapReduce framework is currently the de facto standard used throughout both industry and academia for petabyte scale data analysis. As the input to a typical MapReduce computation is large, one of the key requirements of the framework is that the input cannot be stored on a single machine and must be processed in parallel. In this paper we describe a general algorithmic design technique in the MapReduce framework called filtering. The main idea behind filtering is to reduce the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. Using this approach we give new algorithms in the MapReduce framework for a variety of fundamental graph problems for sufficiently dense graphs. Specifically, we present algorithms for minimum spanning trees, maximal matchings, approximate weighted matchings, approximate vertex and edge covers and minimum cuts. In all of these cases, we parameterize our algorithms by the amount of memory available on the machines allowing us to show tradeoffs between the memory available and the number of MapReduce rounds. For each setting we will show that even if the machines are only given substantially sublinear memory, our algorithms run in a constant number of MapReduce rounds. To demonstrate the practical viability of our algorithms we implement the maximal matching algorithm that lies at the core of our analysis and show that it achieves a significant speedup over the sequential version.) <|cite_end|> <|cite_start|> (Reference: Parallel and I/O efficient set covering algorithms: This paper presents the design, analysis, and implementation of parallel and sequential I/O-efficient algorithms for set cover, tying together the line of work on parallel set cover and the line of work on efficient set cover algorithms for large, disk-resident instances.
Our contributions are twofold: First, we design and analyze a parallel cache-oblivious set-cover algorithm that offers essentially the same approximation guarantees as the standard greedy algorithm, which has the optimal approximation. Our algorithm is the first efficient external-memory or cache-oblivious algorithm for when neither the sets nor the elements fit in memory, leading to I/O cost (cache complexity) equivalent to sorting in the Cache Oblivious or Parallel Cache Oblivious models. The algorithm also implies elow cache misses on parallel hierarchical memories (again, equivalent to sorting). Second, building on this theory, we engineer variants of the theoretical algorithm optimized for different hardware setups. We provide experimental evaluation showing substantial speedups over existing algorithms without compromising the solution's quality.) <|cite_end|> <|cite_start|> (Reference: Fast greedy algorithms in mapreduce and streaming: Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially fewer number of rounds.) <|cite_end|>. Other than the importance of these problems, one reason for the popularity of this problem in this context is the fact that its approximation algorithm is algorithm $\greedy$ which is naturally sequential and it is hard to parallelize or implement in a distributed manner.
\subsection{Our Contributions}
{
\begin{table*}
\begin{center}
\begin{tabular}[h]{|c|c|c|c|c|c|}
\hline
Problem & Core-set Size & R/N & U/L & Core-set Approx. Factor & Distributed Approx. \\ \hline \hline
Mon. Submodular Max. <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>& $\mbox{poly}(k)$ & N & U& $O(\frac{\log k}{\sqrt k})$& - \\ \hline
Mon. Submodular Max.* & $k$ & R & L& $1/3$ & 0.27 \\
Non-Mon. Submodular Max.* & $k$ & R & L& $\max({m-1\over 3m},{1\over em})\ge 0.18$ & $\max({1- {1 \over m}\over 2+e},{1\over em}) \ge 0.14$ \\
\hline \hline
Mon. Submodular Max.* & $O(k)$ & R & L& \small{$ 0.585 - O({1\over k})$} & $0.545 - O({1\over k})$ \\
Mon. Submodular Max. & $\mbox{poly}(k)$ & R & U& $1-{1\over e}$ & - \\
\hline
\hline
Mon. Submodular Max. & $k'<k$ & N & UL&$\Theta(\frac{k'}{k})$ & $\Theta(\frac{k'}{k})$ \\
\hline
Mon. Submodular Max. & $k'<k$ & R & UL& $\Theta(\sqrt{\frac{k'}{k}})$&$\Theta(\sqrt{\frac{k'}{k}})$ \\
\hline
\end{tabular}
\end{center}
\caption{This table summarizes our results.
In the column titled ``R/N'', ``R'' corresponds to the randomized core-set notion, and ``N'' corresponds to the non-randomized core-set notion.
In the column titled ``U/L'', ``U'' corresponds to an upper bound result, and ``L'' corresponds to a lower bound result. The last column gives the distributed approximation factor. All results except the first row are new results of this paper. Previously, no constant-factor approximation had been proved for a randomized composable core-set for this problem. See Section~\ref{sec:relwork} for a comparison to previous approximation algorithms. The rows marked with a star (*) are our most important results.
}
\label{tab:summary-results1}
\end{table*}
}
Our results are summarized in Table 1. As our first result, we prove that a family of efficient algorithms including a variant of algorithm $\greedy$ with a consistent tie-breaking rule leads to an almost $1/3$-approximate randomized composable core-set of size $k$ for any monotone submodular function and cardinality constraint $k$ with multiplicity of $1$ (see Section~\ref{sec:rand-core-set}). This is in contrast to a known $O({\log k\over \sqrt k})$ hardness result for any (non-randomized) composable core-set <|cite_start|> (Reference: Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of "composable core-sets" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of "diversity objective functions", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best "off-line" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist.) <|cite_end|>, and shows the advantage of using the randomization here.
Furthermore, by constructing this randomized core-set and applying algorithm $\greedy$ afterwards, we obtain a distributed approximation factor of $0.27$ for the monotone submodular maximization problem in {\em one or two rounds} of MapReduce with {\em linear communication complexity}.
Previous results lead to algorithms with either a much larger number of rounds of MapReduce <|cite_start|> (Reference: Max-cover in Map-Reduce: The NP-hard Max-k-cover problem requires selecting k sets from a collection so as to maximize the size of the union. This classic problem occurs commonly in many settings in web search and advertising. For moderately-sized instances, a greedy algorithm gives an approximation of (1-1/e). However, the greedy algorithm requires updating scores of arbitrary elements after each step, and hence becomes intractable for large datasets.
We give the first max cover algorithm designed for today's large-scale commodity clusters. Our algorithm has provably almost the same approximation as greedy, but runs much faster. Furthermore, it can be easily expressed in the MapReduce programming paradigm, and requires only polylogarithmically many passes over the data. Our experiments on five large problem instances show that our algorithm is practical and can achieve good speedups compared to the sequential greedy algorithm.) <|cite_end|> <|cite_start|> (Reference: Parallel and I/O efficient set covering algorithms: This paper presents the design, analysis, and implementation of parallel and sequential I/O-efficient algorithms for set cover, tying together the line of work on parallel set cover and the line of work on efficient set cover algorithms for large, disk-resident instances.
Our contributions are twofold: First, we design and analyze a parallel cache-oblivious set-cover algorithm that offers essentially the same approximation guarantees as the standard greedy algorithm, which has the optimal approximation. Our algorithm is the first efficient external-memory or cache-oblivious algorithm for when neither the sets nor the elements fit in memory, leading to I/O cost (cache complexity) equivalent to sorting in the Cache Oblivious or Parallel Cache Oblivious models. The algorithm also implies elow cache misses on parallel hierarchical memories (again, equivalent to sorting). Second, building on this theory, we engineer variants of the theoretical algorithm optimized for different hardware setups. We provide experimental evaluation showing substantial speedups over existing algorithms without compromising the solution's quality.) <|cite_end|>, and/or larger communication complexity <|cite_start|> (Reference: Fast greedy algorithms in mapreduce and streaming: Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially fewer number of rounds.) <|cite_end|>. This improvement is important, since the number of rounds of MapReduce computation and communication complexity are the most important factors in determining the performance of a MapReduced-based algorithm <|cite_start|> (Reference: Connected components in mapreduce and beyond: Computing connected components of a graph lies at the core of many data mining algorithms, and is a fundamental subroutine in graph clustering. This problem is well studied, yet many of the algorithms with good theoretical guarantees perform poorly in practice, especially when faced with graphs with hundreds of billions of edges. In this paper, we design improved algorithms based on traditional MapReduce architecture for large scale data analysis. We also explore the effect of augmenting MapReduce with a distributed hash table (DHT) service. We show that these algorithms have provable theoretical guarantees, and easily outperform previously studied algorithms, sometimes by more than an order of magnitude. In particular, our iterative MapReduce algorithms run 3 to 15 times faster than the best previously studied algorithms, and the MapReduce implementation using a DHT is 10 to 30 times faster than the best previously studied algorithms. 
These are the fastest algorithms that easily scale to graphs with hundreds of billions of edges.) <|cite_end|>. The effectiveness of using this technique has been confirmed empirically by Mirzasoleiman et al., who studied a similar algorithm on a subclass of submodular maximization problems. However, they only provide provable guarantees for a subclass of submodular functions satisfying a certain Lipschitz condition. Our result not only works for monotone submodular functions, but also {\em extends to non-monotone (non-negative) submodular} functions, and leads to the first constant-round MapReduce-based constant-factor approximation algorithm for non-monotone submodular maximization (with $O(n)$ total communication complexity and approximation factor of $0.18$). It also leads to the first constant-factor approximation algorithm for non-monotone submodular maximization in a random-order streaming model in one pass with sublinear memory.
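As a rough illustration of this distributed approach (the exact tie-breaking rule, the post-processing step, and the analysis in the paper differ, so this is only a sketch), the two-round pattern is: randomly partition the ground set across machines, run $\greedy$ with a fixed tie-breaking rule on each part to obtain a core-set of size $k$, and then run $\greedy$ again on the union of the core-sets. The coverage objective below is only a stand-in for a general monotone submodular value oracle.
\begin{verbatim}
import random

def greedy(items, f, k):
    """Greedy for a monotone submodular f, with a consistent tie-breaking
    rule (here: the smallest item id wins ties)."""
    S = []
    for _ in range(min(k, len(items))):
        base = f(S)
        best = max(sorted(items), key=lambda x: f(S + [x]) - base)
        if f(S + [best]) - base <= 0:
            break
        S.append(best)
        items = [x for x in items if x != best]
    return S

def distributed_submodular_max(ground_set, f, k, num_machines, seed=0):
    rng = random.Random(seed)
    # Round 1: each element goes to a uniformly random machine,
    # and each machine outputs a size-k core-set via greedy.
    parts = [[] for _ in range(num_machines)]
    for x in ground_set:
        parts[rng.randrange(num_machines)].append(x)
    core_sets = [greedy(part, f, k) for part in parts]
    # Round 2: run greedy on the union of the core-sets and return the
    # better of that solution and the best per-machine solution.
    union = [x for cs in core_sets for x in cs]
    return max(core_sets + [greedy(union, f, k)], key=f)

# toy instance: a coverage objective standing in for a submodular oracle
universe_sets = {i: set(random.Random(i).sample(range(50), 6)) for i in range(40)}
cover = lambda S: len(set().union(*(universe_sets[i] for i in S))) if S else 0
print(distributed_submodular_max(list(universe_sets), cover, k=5, num_machines=4))
\end{verbatim}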
Our next goal is to improve the approximation factor of the above algorithm for monotone submodular functions. To this end, we first observe that one cannot achieve a factor better than $1/2$ via core-sets of size $k$ using algorithm $\greedy$ or any algorithm in a family of local search algorithms. In Section~\ref{sec:linear}, we show how to go beyond the $1/2$-approximation by applying core-sets of size larger than $k$ but still $O(k)$, and prove that algorithm $\greedy$ with a consistent tie-breaking rule provides a $0.585$-approximate randomized composable core-set of size $k'<4k$ for our problem. We then present algorithm $\pseudogreedy$, which can be applied as a post-processing step to design a distributed $0.545$-approximation algorithm in one or two rounds of MapReduce with linear total communication complexity. For monotone submodular maximization, this result implies the first distributed approximation algorithm with an approximation factor better than $1/2$ that runs in a constant number of rounds. We achieve this approximation factor using one or two rounds of MapReduce and a total communication complexity of $O(n)$. In addition, this result implies the first approximation algorithm beating the $1/2$ factor in the random-order streaming model with a constant number of passes over the data and sublinear memory. To complement this result, we first show that our analysis of algorithm $\greedy$ is tight. Moreover, we show that it is information-theoretically impossible to achieve an approximation factor better than $1-{1/e}$ using a core-set of size polynomial in $k$.
Finally, we consider the construction of {\em small-size} core-sets, i.e., a core-set of size $k'<k$. Studying such core-sets is important particularly for cases with large parameter $k$, e.g., $k= \Omega(n)$ or $k={n\over \log n}$~\footnote{For such large $k$, a core-set of size $k$ may not be as useful since outputting the whole core-set may be impossible. For example, in the formal MapReduce model <|cite_start|> (Reference: A model of computation for mapreduce: In recent years the MapReduce framework has emerged as one of the most widely used parallel computing platforms for processing data on terabyte and petabyte scales. Used daily at companies such as Yahoo!, Google, Amazon, and Facebook, and adopted more recently by several universities, it allows for easy parallelization of data intensive computations over many machines. One key feature of MapReduce that differentiates it from previous models of parallel computation is that it interleaves sequential and parallel computation. We propose a model of efficient computation using the MapReduce paradigm. Since MapReduce is designed for computations over massive data sets, our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, we place very loose restrictions on the computational power of of any individual machine---our model allows each machine to perform sequential computations in time polynomial in the size of the original input.
We compare MapReduce to the PRAM model of computation. We prove a simulation lemma showing that a large class of PRAM algorithms can be efficiently simulated via MapReduce. The strength of MapReduce, however, lies in the fact that it uses both sequential and parallel computation. We demonstrate how algorithms can take advantage of this fact to compute an MST of a dense graph in only two rounds, as opposed to Ω(log(n)) rounds needed in the standard PRAM model. We show how to evaluate a wide class of functions using the MapReduce framework. We conclude by applying this result to show how to compute some basic algorithmic problems such as undirected s-t connectivity in the MapReduce framework.) <|cite_end|>, outputting a core-set of size $k$ for $k=\Omega(n)$ is not feasible.}.
For our problem, we first observe a hardness bound of $O({k'\over k})$ for non-randomized core-sets. On the other hand, in Subsection~\ref{subset:small_size_submod}, we present an $\Omega(\sqrt{k'\over k})$-approximate randomized composable core-set for this problem, and accompany this result with a matching hardness bound of $O(\sqrt{k'\over k})$ for randomized composable core-sets. The hardness result is presented in Subsection~\ref{subsec:hardness}.
\subsection{Other Related Work.} \label{sec:relwork}
{\bf Submodular Maximization in Streaming and MapReduce:}
Solving max $k$-coverage and submodular maximization in a distributed manner has attracted a significant amount of research over the last few years <|cite_start|> (Reference: Max-cover in Map-Reduce: The NP-hard Max-k-cover problem requires selecting k sets from a collection so as to maximize the size of the union. This classic problem occurs commonly in many settings in web search and advertising. For moderately-sized instances, a greedy algorithm gives an approximation of (1-1/e). However, the greedy algorithm requires updating scores of arbitrary elements after each step, and hence becomes intractable for large datasets.
We give the first max cover algorithm designed for today's large-scale commodity clusters. Our algorithm has provably almost the same approximation as greedy, but runs much faster. Furthermore, it can be easily expressed in the MapReduce programming paradigm, and requires only polylogarithmically many passes over the data. Our experiments on five large problem instances show that our algorithm is practical and can achieve good speedups compared to the sequential greedy algorithm.) <|cite_end|> <|cite_start|> (Reference: Set cover algorithms for very large datasets: The problem of Set Cover - to find the smallest subcollection of sets that covers some universe - is at the heart of many data and analysis tasks. It arises in a wide range of settings, including operations research, machine learning, planning, data quality and data mining. Although finding an optimal solution is NP-hard, the greedy algorithm is widely used, and typically finds solutions that are close to optimal. However, a direct implementation of the greedy approach, which picks the set with the largest number of uncovered items at each step, does not behave well when the input is very large and disk resident. The greedy algorithm must make many random accesses to disk, which are unpredictable and costly in comparison to linear scans. In order to scale Set Cover to large datasets, we provide a new algorithm which finds a solution that is provably close to that of greedy, but which is much more efficient to implement using modern disk technology. Our experiments show a ten-fold improvement in speed on moderately-sized datasets, and an even greater improvement on larger datasets.) <|cite_end|> <|cite_start|> (Reference: Filtering: a method for solving graph problems in
{MapReduce: The MapReduce framework is currently the de facto standard used throughout both industry and academia for petabyte scale data analysis. As the input to a typical MapReduce computation is large, one of the key requirements of the framework is that the input cannot be stored on a single machine and must be processed in parallel. In this paper we describe a general algorithmic design technique in the MapReduce framework called filtering. The main idea behind filtering is to reduce the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. Using this approach we give new algorithms in the MapReduce framework for a variety of fundamental graph problems for sufficiently dense graphs. Specifically, we present algorithms for minimum spanning trees, maximal matchings, approximate weighted matchings, approximate vertex and edge covers and minimum cuts. In all of these cases, we parameterize our algorithms by the amount of memory available on the machines allowing us to show tradeoffs between the memory available and the number of MapReduce rounds. For each setting we will show that even if the machines are only given substantially sublinear memory, our algorithms run in a constant number of MapReduce rounds. To demonstrate the practical viability of our algorithms we implement the maximal matching algorithm that lies at the core of our analysis and show that it achieves a significant speedup over the sequential version.) <|cite_end|> <|cite_start|> (Reference: Parallel and I/O efficient set covering algorithms: This paper presents the design, analysis, and implementation of parallel and sequential I/O-efficient algorithms for set cover, tying together the line of work on parallel set cover and the line of work on efficient set cover algorithms for large, disk-resident instances.
Our contributions are twofold: First, we design and analyze a parallel cache-oblivious set-cover algorithm that offers essentially the same approximation guarantees as the standard greedy algorithm, which has the optimal approximation. Our algorithm is the first efficient external-memory or cache-oblivious algorithm for when neither the sets nor the elements fit in memory, leading to I/O cost (cache complexity) equivalent to sorting in the Cache Oblivious or Parallel Cache Oblivious models. The algorithm also implies elow cache misses on parallel hierarchical memories (again, equivalent to sorting). Second, building on this theory, we engineer variants of the theoretical algorithm optimized for different hardware setups. We provide experimental evaluation showing substantial speedups over existing algorithms without compromising the solution's quality.) <|cite_end|> <|cite_start|> (Reference: Fast greedy algorithms in mapreduce and streaming: Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially fewer number of rounds.) <|cite_end|> <|cite_start|> (Reference: Streaming submodular maximization: Massive data summarization on the fly: How can one summarize a massive data set "on the fly", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of "representativeness" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor 1/2-ε approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.) 
<|cite_end|>. From a theoretical point of view, for the coverage maximization problem, Chierchetti et al. <|cite_start|> (Reference: Max-cover in Map-Reduce: The NP-hard Max-k-cover problem requires selecting k sets from a collection so as to maximize the size of the union. This classic problem occurs commonly in many settings in web search and advertising. For moderately-sized instances, a greedy algorithm gives an approximation of (1-1/e). However, the greedy algorithm requires updating scores of arbitrary elements after each step, and hence becomes intractable for large datasets.
We give the first max cover algorithm designed for today's large-scale commodity clusters. Our algorithm has provably almost the same approximation as greedy, but runs much faster. Furthermore, it can be easily expressed in the MapReduce programming paradigm, and requires only polylogarithmically many passes over the data. Our experiments on five large problem instances show that our algorithm is practical and can achieve good speedups compared to the sequential greedy algorithm.) <|cite_end|> present a $(1-1/e)$-approximation algorithm in a polylogarithmic number of MapReduce rounds, and Blelloch et al. <|cite_start|> (Reference: Parallel and I/O efficient set covering algorithms: This paper presents the design, analysis, and implementation of parallel and sequential I/O-efficient algorithms for set cover, tying together the line of work on parallel set cover and the line of work on efficient set cover algorithms for large, disk-resident instances.
Our contributions are twofold: First, we design and analyze a parallel cache-oblivious set-cover algorithm that offers essentially the same approximation guarantees as the standard greedy algorithm, which has the optimal approximation. Our algorithm is the first efficient external-memory or cache-oblivious algorithm for when neither the sets nor the elements fit in memory, leading to I/O cost (cache complexity) equivalent to sorting in the Cache Oblivious or Parallel Cache Oblivious models. The algorithm also implies elow cache misses on parallel hierarchical memories (again, equivalent to sorting). Second, building on this theory, we engineer variants of the theoretical algorithm optimized for different hardware setups. We provide experimental evaluation showing substantial speedups over existing algorithms without compromising the solution's quality.) <|cite_end|> improved this result and reduced the number of rounds to $O(\log^2 n)$. Recently, Kumar et al.
"<|reference_start|> A model of computation for mapreduce: In recent years the MapReduce framework has emerged as one of the most widely used parallel computing platforms for processing data on terabyte and petabyte scales. Used daily at companies such as Yahoo!, Google, Amazon, and Facebook, and adopted more recently by several universities, it allows for easy parallelization of data intensive computations over many machines. One key feature of MapReduce that differentiates it from previous models of parallel computation is that it interleaves sequential and parallel computation. We propose a model of efficient computation using the MapReduce paradigm. Since MapReduce is designed for computations over massive data sets, our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, we place very loose restrictions on the computational power of of any individual machine---our model allows each machine to perform sequential computations in time polynomial in the size of the original input.\n We compare MapReduce to the PRAM model of computation. We prove a simulation lemma showing that a large class of PRAM algorithms can be efficiently simulated via MapReduce. The strength of MapReduce, however, lies in the fact that it uses both sequential and parallel computation. We demonstrate how algorithms can take advantage of this fact to compute an MST of a dense graph in only two rounds, as opposed to Ω(log(n)) rounds needed in the standard PRAM model. We show how to evaluate a wide class of functions using the MapReduce framework. We conclude by applying this result to show how to compute some basic algorithmic problems such as undirected s-t connectivity in the MapReduce framework. <|reference_end|>",
"<|reference_start|> Set cover algorithms for very large datasets: The problem of Set Cover - to find the smallest subcollection of sets that covers some universe - is at the heart of many data and analysis tasks. It arises in a wide range of settings, including operations research, machine learning, planning, data quality and data mining. Although finding an optimal solution is NP-hard, the greedy algorithm is widely used, and typically finds solutions that are close to optimal. However, a direct implementation of the greedy approach, which picks the set with the largest number of uncovered items at each step, does not behave well when the input is very large and disk resident. The greedy algorithm must make many random accesses to disk, which are unpredictable and costly in comparison to linear scans. In order to scale Set Cover to large datasets, we provide a new algorithm which finds a solution that is provably close to that of greedy, but which is much more efficient to implement using modern disk technology. Our experiments show a ten-fold improvement in speed on moderately-sized datasets, and an even greater improvement on larger datasets. <|reference_end|>",
"<|reference_start|> Parallel and I/O efficient set covering algorithms: This paper presents the design, analysis, and implementation of parallel and sequential I/O-efficient algorithms for set cover, tying together the line of work on parallel set cover and the line of work on efficient set cover algorithms for large, disk-resident instances.\n Our contributions are twofold: First, we design and analyze a parallel cache-oblivious set-cover algorithm that offers essentially the same approximation guarantees as the standard greedy algorithm, which has the optimal approximation. Our algorithm is the first efficient external-memory or cache-oblivious algorithm for when neither the sets nor the elements fit in memory, leading to I/O cost (cache complexity) equivalent to sorting in the Cache Oblivious or Parallel Cache Oblivious models. The algorithm also implies elow cache misses on parallel hierarchical memories (again, equivalent to sorting). Second, building on this theory, we engineer variants of the theoretical algorithm optimized for different hardware setups. We provide experimental evaluation showing substantial speedups over existing algorithms without compromising the solution's quality. <|reference_end|>",
"<|reference_start|> Composable core-sets for Diversity and Coverage Maximization: In this paper we consider efficient construction of \"composable core-sets\" for basic diversity and coverage maximization problems. A core-set for a point-set in a metric space is a subset of the point-set with the property that an approximate solution to the whole point-set can be obtained given the core-set alone. A composable core-set has the property that for a collection of sets, the approximate solution to the union of the sets in the collection can be obtained given the union of the composable core-sets for the point sets in the collection. Using composable core-sets one can obtain efficient solutions to a wide variety of massive data processing applications, including nearest neighbor search, streaming algorithms and map-reduce computation. Our main results are algorithms for constructing composable core-sets for several notions of \"diversity objective functions\", a topic that attracted a significant amount of research over the last few years. The composable core-sets we construct are small and accurate: their approximation factor almost matches that of the best \"off-line\" algorithms for the relevant optimization problems (up to a constant factor). Moreover, we also show applications of our results to diverse nearest neighbor search, streaming algorithms and map-reduce computation. Finally, we show that for an alternative notion of diversity maximization based on the maximum coverage problem small composable core-sets do not exist. <|reference_end|>"
] | [
11,
23,
25,
27
] | {"<|cite_1|>": "ss-833166", "<|multi_cite_2_2|>": "ss-2537227", "<|multi_cite_2_3|>": "ss-833161", "<|multi_cite_2_4|>": "ss-1293835", "<|multi_cite_3_1|>": "ss-2537227", "<|multi_cite_3_2|>": "ss-833161", "<|cite_4|>": "ss-833161", "<|multi_cite_6_1|>": "ss-833161", "<|cite_7|>": "ss-833161", "<|cite_8|>": "ss-1107884", "<|cite_9|>": "ss-833161", "<|cite_10|>": "ss-1287026", "<|multi_cite_11_1|>": "ss-804244", "<|multi_cite_11_2|>": "ss-2275601", "<|cite_12|>": "ss-804244", "<|cite_13|>": "ss-2275601", "<|multi_cite_14_1|>": "ss-1287026", "<|multi_cite_14_2|>": "ss-1098170", "<|multi_cite_14_3|>": "ss-2537227", "<|multi_cite_14_4|>": "ss-833161", "<|multi_cite_14_5|>": "arxiv-54674", "<|cite_15|>": "ss-833161", "<|multi_cite_16_1|>": "ss-994189", "<|multi_cite_16_2|>": "ss-2537228", "<|multi_cite_16_3|>": "ss-1098170", "<|multi_cite_16_4|>": "ss-1214102", "<|multi_cite_16_5|>": "ss-1372929", "<|cite_17|>": "ss-833161", "<|cite_18|>": "ss-833161", "<|multi_cite_19_1|>": "ss-994189", "<|multi_cite_19_2|>": "ss-1214102", "<|cite_20|>": "ss-1372929", "<|cite_21|>": "ss-1376810", "<|cite_24|>": "ss-1287026", "<|multi_cite_25_1|>": "ss-994189", "<|multi_cite_25_2|>": "ss-2537228", "<|multi_cite_25_3|>": "ss-1098170", "<|multi_cite_25_4|>": "ss-1214102", "<|multi_cite_25_5|>": "ss-1372929", "<|multi_cite_25_7|>": "ss-1293835", "<|cite_26|>": "ss-994189", "<|cite_27|>": "ss-1214102", "<|cite_28|>": "ss-1372929", "<|cite_29|>": "ss-1376810", "<|cite_30|>": "ss-833161", "<|cite_34|>": "ss-1293835", "<|cite_35|>": "ss-2275601", "<|cite_36|>": "ss-833161", "<|cite_37|>": "ss-2275601", "<|cite_38|>": "ss-1102591", "<|multi_cite_39_1|>": "ss-804244", "<|multi_cite_39_2|>": "ss-2275601", "<|multi_cite_40_1|>": "ss-1287026", "<|multi_cite_40_2|>": "ss-1098170", "<|multi_cite_40_3|>": "ss-2537227", "<|multi_cite_40_4|>": "ss-833161", "<|cite_41|>": "ss-1516653", "<|multi_cite_42_1|>": "ss-901413", "<|multi_cite_42_2|>": "arxiv-34619", "<|multi_cite_42_3|>": "ss-804246"} |
1907.06837 | <|paper_start|> Title: A Self-Attentive model for Knowledge Tracing
Abstract: A Self-Attentive model for Knowledge Tracing: Knowledge tracing is the task of modeling each student's mastery of knowledge concepts (KCs) as (s)he engages with a sequence of learning activities. Each student's knowledge is modeled by estimating the performance of the student on the learning activities. It is an important research area for providing a personalized learning platform to students. In recent years, methods based on Recurrent Neural Networks (RNN) such as Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Network (DKVMN) outperformed all the traditional methods because of their ability to capture complex representation of human learning. However, these methods face the issue of not generalizing well while dealing with sparse data which is the case with real-world data as students interact with few KCs. In order to address this issue, we develop an approach that identifies the KCs from the student's past activities that are \textit{relevant} to the given KC and predicts his/her mastery based on the relatively few KCs that it picked. Since predictions are made based on relatively few past activities, it handles the data sparsity problem better than the methods based on RNN. For identifying the relevance between the KCs, we propose a self-attention based approach, Self Attentive Knowledge Tracing (SAKT). Extensive experimentation on a variety of real-world dataset shows that our model outperforms the state-of-the-art models for knowledge tracing, improving AUC by 4.43% on average.
Introduction
The availability of massive dataset of students' learning trajectories about their \textit{knowledge concepts} (KCs), where a KC can be an exercise, a skill or a concept, has attracted data miners to develop tools for predicting students' performance and giving proper feedback <|cite_start|> (Reference: Theoretical foundations for intelligent tutoring systems: This paper considers the case for formalising aspects of intelligent tutoring systems in order to derive more reliable implementations, as opposed to the present use of informal theories to build experimental systems which are then studied empirically. Some recent work in theoretical AI is suggested as a possible source for the elements of a 'theory of ITS'. I n t r o d u c t i o n The engineering of any complex device (such as an ITS) gradually relies less on empirical experimentation and more on mathematical or scientific theory. As yet, there is no significant 'theory of ITS': all of the recent ITS texts (e.g. Wenger, 1987; Mandl and Lesgold, 1988; Polson and Richardson, 1988) are entirely discursive and attempt no kind of formalisation of their content. The aim of this paper is to suggest that it is not premature for ITS research to begin an attempt to complement a short-term emphasis on pragmatic aspects (Kearsley, 1989) by seeking theoretical foundations for its implementations. Most AI researchers regard ITSs as peripheral applications of AI, an understandable opinion in view of the virtual absence of ITS papers from the major AI journals and conferences. But Clancey (1986) has argued that work on ITSs is not a "mere matter of putting well-known AI methods into practice" but is (or should be) "broadening the meaning of AI research". Historically, ITS research began within AI, but AI researchers have retreated from the ITS arena as they have come to appreciate the need for more fundamental work on mental models, language understanding, knowledge representation, etc., leaving others to move into an intrinsically multi-disciplinary field. However, if there is ever to be a formal theory of (aspects of) ITS then it will be derived from elements of AI. Moreover, recent AI research begins to indicate what those elements might be.) <|cite_end|>. For developing such personalized learning platforms, knowledge tracing (KT) is considered to be an important task and is defined as the task of tracing a student's \textit{knowledge state}, which represents his/her mastery level of KCs, based on his/her past learning activities. The KT task can be formalized as a supervised sequence learning task - given student's past exercise interactions \( \mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_t) \), predict some aspect of his/her next interaction $\mathbf{x}_{t+1}$. On the question-answering platform, the interactions are represented as
$\mathbf{x}_t = (e_t, r_t)$, where \( e_t \) is the exercise that the student attempts at timestamp $t$ and $r_t$ is the correctness of the student's answer. KT aims to predict whether the student will be able to answer the next exercise correctly, i.e., predict \( p(r_{t+1}=1| e_{t+1}, \mathbf{X}) \).\par
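Concretely, the input to such a model is just the list of (exercise, correctness) pairs, and each prefix of the sequence yields one prediction task. One common encoding, used here purely for illustration, folds each pair into a single index $e_t + r_t \cdot E$ so that a correct and an incorrect attempt at the same exercise receive distinct representations.
\begin{verbatim}
# Illustrative preparation of knowledge-tracing training examples.
E = 100  # total number of exercises (placeholder value)

# (exercise id, correctness) pairs from a student's log
interactions = [(12, 1), (7, 0), (12, 0), (55, 1)]

# fold each pair (e_t, r_t) into a single interaction index in {0, ..., 2E-1}
interaction_ids = [e + r * E for (e, r) in interactions]   # [112, 7, 12, 155]

# each prefix of the sequence gives one prediction task:
# given x_1..x_t and the next exercise e_{t+1}, predict whether r_{t+1} = 1
for t in range(1, len(interactions)):
    history = interaction_ids[:t]
    e_next, r_next = interactions[t]
    print(history, "->", "exercise", e_next, "label:", r_next)
\end{verbatim}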
Recently deep learning models such as Deep Knowledge Tracing (DKT) <|cite_start|> (Reference: Deep Knowledge Tracing: Knowledge tracing---where a machine models the knowledge of a student as they interact with coursework---is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.) <|cite_end|> and its variant <|cite_start|> (Reference: Addressing Two Problems in Deep Knowledge Tracing via Prediction-Consistent Regularization: Knowledge tracing is one of the key research areas for empowering personalized education. It is a task to model students' mastery level of a knowledge component (KC) based on their historical learning trajectories. In recent years, a recurrent neural network model called deep knowledge tracing (DKT) has been proposed to handle the knowledge tracing task and literature has shown that DKT generally outperforms traditional methods. However, through our extensive experimentation, we have noticed two major problems in the DKT model. The first problem is that the model fails to reconstruct the observed input. As a result, even when a student performs well on a KC, the prediction of that KC's mastery level decreases instead, and vice versa. Second, the predicted performance for KCs across time-steps is not consistent. This is undesirable and unreasonable because student's performance is expected to transit gradually over time. To address these problems, we introduce regularization terms that correspond to reconstruction and waviness to the loss function of the original DKT model to enhance the consistency in prediction. Experiments show that the regularized loss function effectively alleviates the two problems without degrading the original task of DKT.) <|cite_end|> used Recurrent Neural Network (RNN) to model a student's knowledge state in one summarized hidden vector.
Dynamic Key-value memory network (DKVMN) <|cite_start|> (Reference: Dynamic Key-Value Memory Networks for Knowledge Tracing: Knowledge Tracing (KT) is a task of tracing evolving knowledge state of students with respect to one or more concepts as they engage in a sequence of learning activities. One important purpose of KT is to personalize the practice sequence to help students learn knowledge concepts efficiently. However, existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing either model knowledge state for each predefined concept separately or fail to pinpoint exactly which concepts a student is good at or unfamiliar with. To solve these problems, this work introduces a new model called Dynamic Key-Value Memory Networks (DKVMN) that can exploit the relationships between underlying concepts and directly output a student's mastery level of each concept. Unlike standard memory-augmented neural networks that facilitate a single memory matrix or two static memory matrices, our model has one static matrix called key, which stores the knowledge concepts and the other dynamic matrix called value, which stores and updates the mastery levels of corresponding concepts. Experiments show that our model consistently outperforms the state-of-the-art model in a range of KT datasets. Moreover, the DKVMN model can automatically discover underlying concepts of exercises typically performed by human annotations and depict the changing knowledge state of a student.) <|cite_end|> exploited Memory Augmented Neural Network <|cite_start|> (Reference: One-shot Learning with Memory-Augmented Neural Networks: Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning." Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.) <|cite_end|> for KT. Using two matrices, \textit{key} and \textit{value}, it learns the correlation between the exercises and the underlying KC and student's knowledge state, respectively.
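For intuition, a rough sketch of the key--value read step that such a memory network performs is given below; the dimensions are illustrative, and the write/update step of DKVMN that modifies the value matrix after each answer is omitted.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

d, num_concepts = 16, 8
M_key = np.random.randn(num_concepts, d)    # static: latent knowledge concepts
M_value = np.random.randn(num_concepts, d)  # dynamic: per-concept mastery state
q_t = np.random.randn(d)                    # embedding of the attempted exercise e_t

w = softmax(M_key @ q_t)   # correlation of e_t with each latent concept
read_t = w @ M_value       # read vector summarizing the relevant mastery levels
\end{verbatim}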
The DKT model faces the issue of its parameters being non-interpretable <|cite_start|> (Reference: How deep is knowledge tracing?: In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to predicting students' performance as they work through a series of exercises---termed deep knowledge tracing or DKT---has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT. In this article, we attempt to understand the basis for DKT's advantage by considering the sources of statistical regularity in the data that DKT can leverage but which BKT cannot. We hypothesize four forms of regularity that BKT fails to exploit: recency effects, the contextualized trial sequence, inter-skill similarity, and individual variation in ability. We demonstrate that when BKT is extended to allow it more flexibility in modeling statistical regularities---using extensions previously proposed in the literature---BKT achieves a level of performance indistinguishable from that of DKT. We argue that while DKT is a powerful, useful, general-purpose framework for modeling student learning, its gains do not come from the discovery of novel representations---the fundamental advantage of deep learning. To answer the question posed in our title, knowledge tracing may be a domain that does not require `depth'; shallow models like BKT can perform just as well and offer us greater interpretability and explanatory power.) <|cite_end|>.
DKVMN is more interpretable than DKT as it explicitly maintains a KC representation matrix (\textit{key}) and a knowledge state representation matrix (\textit{value}). However, since all these deep learning models are based on RNNs, they face the issue of not generalizing while dealing with sparse data <|cite_start|> (Reference: Self-Attentive Sequential Recommendation: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.) <|cite_end|>. \par
\begin{figure}
\includegraphics[keepaspectratio,width=0.5\textwidth]{probem1.png}
\caption{The left subfigure shows the sequence of exercises that the student attempts, and the right subfigure shows the knowledge concepts to which each of the exercises belongs.}
\label{first}
\end{figure}
In this paper, we propose to use a purely attention mechanism based method, \textit{transformer} <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>.
In the KT task, the skills that a student builds while going through the sequence of learning activities are related to each other, and the performance on a particular exercise depends on his/her performance on the past exercises related to that exercise. For example, in Figure~\ref{first}, for a student to solve an exercise on \enquote{Quadratic equation} (exercise 5), which belongs to the knowledge concept \enquote{Equations}, he/she needs to know how to find \enquote{square roots} (exercise 3) and solve \enquote{linear equations} (exercise 4). SAKT, proposed in this paper, first identifies the \textit{relevant} KCs from the past interactions and then predicts the student's performance based on his/her performance on those KCs. For predicting a student's performance on an exercise, we use exercises as KCs. As we show later, SAKT assigns weights to the previously answered exercises while predicting the performance of the student on a particular exercise.
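A minimal numpy sketch of this relevance weighting is given below (the dimensions, projections and names are illustrative, and the feed-forward and prediction layers of the full model are omitted): the embedding of the exercise being attempted serves as the query, the embeddings of the past interactions serve as keys and values, and the softmax scores act as the weights assigned to the previously answered exercises.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

d = 16
past = np.random.randn(5, d)      # embeddings of past interactions x_1..x_5
query = np.random.randn(1, d)     # embedding of the exercise being attempted

# learned projections into query, key and value spaces (random here)
W_Q, W_K, W_V = (np.random.randn(d, d) for _ in range(3))
Q, K, V = query @ W_Q, past @ W_K, past @ W_V

# scaled dot-product attention: past interactions most relevant to the
# queried exercise receive the largest weights (causality is implicit here,
# since only strictly earlier interactions appear among the keys)
scores = Q @ K.T / np.sqrt(d)     # shape (1, 5)
weights = softmax(scores)         # attention weights over the past interactions
context = weights @ V             # summary fed to the layers predicting r_{t+1}
\end{verbatim}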
The proposed SAKT method significantly outperforms the state-of-the-art KT methods, gaining an average performance improvement of $4.43\%$ in AUC across all datasets. Furthermore, the main component of SAKT (self-attention) is amenable to parallelization, making our model an order of magnitude faster than RNN-based models. \par
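As a rough illustration of this weighting step, the snippet below sketches a single scaled dot-product attention query over the embeddings of the past interactions; it is a simplified sketch under assumed names and shapes, not the full SAKT architecture. Because only past elements enter the key and value sets, the causal constraint is respected by construction.
\begin{verbatim}
import torch
import torch.nn.functional as F

def attend_to_past(query, keys, values):
    """Score the past interactions for the current exercise.

    query:  (d,)    embedding of the exercise attempted now
    keys:   (t, d)  key embeddings of the t past interactions
    values: (t, d)  value embeddings of the same interactions
    Returns the attended context vector and the attention weights.
    """
    d = query.shape[-1]
    scores = keys @ query / d ** 0.5      # relevance of each past interaction
    weights = F.softmax(scores, dim=-1)   # the a_{i,j} over past elements only
    context = weights @ values            # (d,)
    return context, weights

# Illustrative usage with random embeddings (d = 8, t = 5 past interactions).
ctx, attn = attend_to_past(torch.randn(8), torch.randn(5, 8), torch.randn(5, 8))
# A correctness prediction could then be read out from ctx, e.g. with a
# small feed-forward layer followed by a sigmoid.
\end{verbatim}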
\begin{figure}[!ht]
\subfloat[Network of SAKT. At each timestamp, the attention weights are estimated over the previous elements only. Keys, Values and Queries are extracted from the embedding layer shown below. When the $j$th element is the query and the $i$th element is the key, the attention weight is $a_{i,j}$. \label{subfig-1}]{
\includegraphics[width=0.45\textwidth]{architecture.png}
}
\subfloat[The embedding layer embeds the current exercise that the student is attempting and the past interactions. At every timestamp $t+1$, the current question $e_{t+1}$ is embedded in the query space using the Exercise embedding, and the elements of the past interactions $\textbf{x}_t$ are embedded in the key and value space using the Interaction embedding.
\label{subfig-2}]{
\includegraphics[width=0.45\textwidth]{architecture2.png}
}
\caption{Diagram showing the architecture of SAKT.}
\label{fig:dummy}
\end{figure}
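A minimal sketch of the embedding layer described in the caption of Figure~\ref{fig:dummy} is given below; the index folding of an interaction into a single row of a $2E$-row table, the module names, and the use of a learned positional table are assumptions for illustration and do not necessarily match the authors' implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class InteractionEmbedding(nn.Module):
    """Illustrative embedding layer; names and sizes are assumptions."""

    def __init__(self, num_exercises, d, max_len):
        super().__init__()
        # 2*E rows: one per (exercise, correct/incorrect) combination.
        self.interaction = nn.Embedding(2 * num_exercises, d)
        self.exercise = nn.Embedding(num_exercises, d)
        self.position = nn.Embedding(max_len, d)

    def forward(self, past_exercises, past_responses, current_exercise):
        # past_exercises, past_responses: LongTensors of shape (t,)
        # current_exercise: scalar LongTensor for the exercise at t+1
        t = past_exercises.shape[0]
        folded = past_exercises + past_responses * self.exercise.num_embeddings
        keys_values = self.interaction(folded) + self.position(torch.arange(t))
        query = self.exercise(current_exercise)
        return query, keys_values   # fed to the attention layer as Q and K/V
\end{verbatim}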
\def\hat{\mathaccent "705E\relax}
\begin{table}[]
\caption{Notations}
\label{notations}
\begin{tabular}{ll}
\toprule
Notations & Description\\
\midrule
$N$ & Total number of students \\
$E$ & Total number of exercises \\
$\textbf{X}$ & Interaction sequence of a student: $(x_1, x_2, \ldots, x_t)$ \\
$x_i$ & $i$th exercise-answer pair of a student \\
$n$ & Maximum length of sequence \\
$d$ & Latent vector dimensionality \\
$\textbf{e}$ & Sequence of exercises solved by the student \\
$\textbf{M}$ & Interaction embedding matrix \\
$\textbf{P}$ & Positional embedding matrix \\
$\textbf{E}$ & Exercise lookup matrix \\
$\hat{\textbf{M}}$ & Past interactions embedding \\
$\hat{\textbf{E}}$ & Exercise embedding \\
\bottomrule
\end{tabular}
\end{table} <|paper_end|> | [
"<|reference_start|> Dynamic Key-Value Memory Networks for Knowledge Tracing: Knowledge Tracing (KT) is a task of tracing evolving knowledge state of students with respect to one or more concepts as they engage in a sequence of learning activities. One important purpose of KT is to personalize the practice sequence to help students learn knowledge concepts efficiently. However, existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing either model knowledge state for each predefined concept separately or fail to pinpoint exactly which concepts a student is good at or unfamiliar with. To solve these problems, this work introduces a new model called Dynamic Key-Value Memory Networks (DKVMN) that can exploit the relationships between underlying concepts and directly output a student's mastery level of each concept. Unlike standard memory-augmented neural networks that facilitate a single memory matrix or two static memory matrices, our model has one static matrix called key, which stores the knowledge concepts and the other dynamic matrix called value, which stores and updates the mastery levels of corresponding concepts. Experiments show that our model consistently outperforms the state-of-the-art model in a range of KT datasets. Moreover, the DKVMN model can automatically discover underlying concepts of exercises typically performed by human annotations and depict the changing knowledge state of a student. <|reference_end|>",
"<|reference_start|> One-shot Learning with Memory-Augmented Neural Networks: Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms. <|reference_end|>",
"<|reference_start|> Self-Attentive Sequential Recommendation: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences. <|reference_end|>",
"<|reference_start|> Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data. <|reference_end|>"
] | [
3,
4,
6,
7
] | {"<|cite_1|>": "ss-2302531", "<|cite_2|>": "arxiv-79690", "<|cite_3|>": "arxiv-161477", "<|cite_4|>": "arxiv-110935", "<|cite_5|>": "arxiv-98263", "<|cite_6|>": "arxiv-95548", "<|cite_7|>": "arxiv-170660", "<|cite_8|>": "arxiv-126595"} |
2303.00609-0 | <|paper_start|> Title: Unsupervised Pathology Detection: A Deep Dive Into the State of the Art
Abstract: Unsupervised Pathology Detection: A Deep Dive Into the State of the Art: Deep unsupervised approaches are gathering increased attention for applications such as pathology detection and segmentation in medical images since they promise to alleviate the need for large labeled datasets and are more generalizable than their supervised counterparts in detecting any kind of rare pathology. As the Unsupervised Anomaly Detection (UAD) literature continuously grows and new paradigms emerge, it is vital to continuously evaluate and benchmark new methods in a common framework, in order to reassess the state-of-the-art (SOTA) and identify promising research directions. To this end, we evaluate a diverse selection of cutting-edge UAD methods on multiple medical datasets, comparing them against the established SOTA in UAD for brain MRI. Our experiments demonstrate that newly developed feature-modeling methods from the industrial and medical literature achieve increased performance compared to previous work and set the new SOTA in a variety of modalities and datasets. Additionally, we show that such methods are capable of benefiting from recently developed self-supervised pre-training algorithms, further increasing their performance. Finally, we perform a series of experiments in order to gain further insights into some unique characteristics of selected models and datasets. Our code can be found under https://github.com/iolag/UPD_study/.
Introduction
\label{sec:introduction}
\IEEEPARstart{F}{rom} routine check-ups to the detection and treatment of brain tumors, pathology detection from medical images is an indispensable part of the clinical diagnosis and treatment workflow. While it is generally performed manually by clinical experts (e.g., radiologists), an ever-increasing demand for radiological assessments in modern healthcare systems has prompted researchers to focus on developing algorithmic solutions that can assist in clinical diagnosis.
Anomaly Detection (AD) can be described as an outlier detection problem, where the aim is to discriminate between in- and out-of-distribution samples of a normative distribution.
In the context of Pathology Detection (PD) for medical diagnosis, the \say{normal} distribution is made up of healthy samples, and cases containing pathologies can be detected as outliers.
While PD can be tackled with supervised learning strategies, the main concerns here are twofold.
Firstly, such strategies require vast amounts of image-level labels or pixel-level segmentations from medical experts, which are scarce and costly to obtain, posing practical limitations for training such models.
Secondly, the morphological variability of pathologies is large, and rare anomalies are likely underrepresented (or not included at all) in the datasets, posing problems for supervised methods.
In contrast, Unsupervised Pathology Detection (UPD) methods leverage exclusively healthy samples during training in order to model the distribution of normal anatomy. Given that a significant part of acquired medical images, e.g., from routine or preventive check-ups, is clinically unremarkable, the unsupervised setting theoretically makes a large amount of data available for training UPD models.
Therefore, there has been a recent surge in the development of UPD methods <|cite_start|> (Reference: Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images: ) <|cite_end|> <|cite_start|> (Reference: Autoencoders for Unsupervised Anomaly Segmentation in Brain MR Images: A Comparative Study: Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This allows to spot abnormal structures from erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) relieves from the need of vast amounts of manually segmented training data---a necessity for and pitfall of current supervised Deep Learning---and ii) theoretically allows to detect arbitrary, even rare pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because i) they are evaluated against different datasets and different pathologies, ii) use different image resolutions and iii) different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions like i) how many healthy training subjects are needed to model normality and ii) if the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.) <|cite_end|> <|cite_start|> (Reference: f‐AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Anomaly Localization using Variational Auto-Encoders: An assumption-free automatic check of medical images for potentially overseen anomalies would be a valuable assistance for a radiologist. Deep learning and especially Variational Auto-Encoders (VAEs) have shown great potential in the unsupervised learning of data distributions. In principle, this allows for such a check and even the localization of parts in the image that are most suspicious. Currently, however, the reconstruction-based localization by design requires adjusting the model architecture to the specific problem looked at during evaluation. This contradicts the principle of building assumption-free models. We propose complementing the localization part with a term derived from the Kullback-Leibler (KL)-divergence. For validation, we perform a series of experiments on FashionMNIST as well as on a medical task including >1000 healthy and >250 brain tumor patients. Results show that the proposed formalism outperforms the state of the art VAE-based localization of anomalies across many hyperparameter settings and also shows a competitive max performance.) 
<|cite_end|> <|cite_start|> (Reference: Unsupervised Lesion Detection via Image Restoration with a Normative Prior: Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following the advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods.The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While being promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detect lesions pixel-wise using MAP estimation. The probabilistic model punishes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration.) <|cite_end|> <|cite_start|> (Reference: Unsupervised brain lesion segmentation from MRI using a convolutional autoencoder: Lesions that appear hyperintense in both Fluid Attenuated Inversion Recovery (FLAIR) and T2-weighted magnetic resonance images (MRIs) of the human brain are common in the brains of the elderly population and may be caused by ischemia or demyelination. Lesions are biomarkers for various neurodegenerative diseases, making accurate quantification of them important for both disease diagnosis and progression. Automatic lesion detection using supervised learning requires manually annotated images, which can often be impractical to acquire. Unsupervised lesion detection, on the other hand, does not require any manual delineation; however, these methods can be challenging to construct due to the variability in lesion load, placement of lesions, and voxel intensities. Here we present a novel approach to address this problem using a convolutional autoencoder, which learns to segment brain lesions as well as the white matter, gray matter, and cerebrospinal fluid by reconstructing FLAIR images as conical combinations of softmax layer outputs generated from the corresponding T1, T2, and FLAIR images. Some of the advantages of this model are that it accurately learns to segment lesions regardless of lesion load, and it can be used to quickly and robustly segment new images that were not in the training set. Comparisons with state-of-the-art segmentation methods evaluated on ground truth manual labels indicate that the proposed method works well for generating accurate lesion segmentations without the need for manual annotations.) <|cite_end|>.
Unfortunately, the lack of publicly available benchmark datasets for medical UPD forces practitioners to develop and evaluate their models on various private and public datasets, hindering comparability and the development of best practices in this field. The problem is further exacerbated by the concurrent development of UAD methods in other areas such as industrial inspection <|cite_start|> (Reference: MVTec AD — a comprehensive real-world dataset for unsupervised anomaly detection: The detection of anomalous structures in natural image data is of utmost importance for numerous tasks in the field of computer vision. The development of methods for unsupervised anomaly detection requires data on which to train and evaluate new approaches and ideas. We introduce the MVTec Anomaly Detection (MVTec AD) dataset containing 5354 high-resolution color images of different object and texture categories. It contains normal, i.e., defect-free, images intended for training and images with anomalies intended for testing. The anomalies manifest themselves in the form of over 70 different types of defects such as scratches, dents, contaminations, and various structural changes. In addition, we provide pixel-precise ground truth regions for all anomalies. We also conduct a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods based on deep architectures such as convolutional autoencoders, generative adversarial networks, and feature descriptors using pre-trained convolutional neural networks, as well as classical computer vision methods. This initial benchmark indicates that there is considerable room for improvement. To the best of our knowledge, this is the first comprehensive, multi-object, multi-defect dataset for anomaly detection that provides pixel-accurate ground truth regions and focuses on real-world applications.) <|cite_end|>.
To this end, the following work presents a thorough comparison of the most common current UAD paradigms, applied to detecting pathologies in medical images.
Specifically, we evaluate the anomaly detection and localization performance of 13 UPD methods on four medical datasets with differing characteristics.
To the best of our knowledge, this is the most comprehensive study so far, covering all important paradigms on a representative set of modalities.
Related Work
In this section, we briefly describe recent advances in Deep Unsupervised Anomaly Detection and Localization. We build our categorization upon the work of Jie \textit{et al.} <|cite_start|> (Reference: Visual Anomaly Detection for Images: A Survey: Visual anomaly detection is an important and challenging problem in the field of machine learning and computer vision. This problem has attracted a considerable amount of attention in relevant research communities. Especially in recent years, the development of deep learning has sparked an increasing interest in the visual anomaly detection problem and brought a great variety of novel methods. In this paper, we provide a comprehensive survey of the classical and deep learning-based approaches for visual anomaly detection in the literature. We group the relevant approaches in view of their underlying principles and discuss their assumptions, advantages, and disadvantages carefully. We aim to help the researchers to understand the common principles of visual anomaly detection approaches and identify promising research directions in this field.) <|cite_end|>and identify four groups: \textit{image-reconstruction} <|cite_start|> (Reference: Autoencoders for Unsupervised Anomaly Segmentation in Brain MR Images: A Comparative Study: Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This allows to spot abnormal structures from erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) relieves from the need of vast amounts of manually segmented training data---a necessity for and pitfall of current supervised Deep Learning---and ii) theoretically allows to detect arbitrary, even rare pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because i) they are evaluated against different datasets and different pathologies, ii) use different image resolutions and iii) different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions like i) how many healthy training subjects are needed to model normality and ii) if the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.) <|cite_end|> <|cite_start|> (Reference: Unsupervised brain lesion segmentation from MRI using a convolutional autoencoder: Lesions that appear hyperintense in both Fluid Attenuated Inversion Recovery (FLAIR) and T2-weighted magnetic resonance images (MRIs) of the human brain are common in the brains of the elderly population and may be caused by ischemia or demyelination. Lesions are biomarkers for various neurodegenerative diseases, making accurate quantification of them important for both disease diagnosis and progression. Automatic lesion detection using supervised learning requires manually annotated images, which can often be impractical to acquire. 
Unsupervised lesion detection, on the other hand, does not require any manual delineation; however, these methods can be challenging to construct due to the variability in lesion load, placement of lesions, and voxel intensities. Here we present a novel approach to address this problem using a convolutional autoencoder, which learns to segment brain lesions as well as the white matter, gray matter, and cerebrospinal fluid by reconstructing FLAIR images as conical combinations of softmax layer outputs generated from the corresponding T1, T2, and FLAIR images. Some of the advantages of this model are that it accurately learns to segment lesions regardless of lesion load, and it can be used to quickly and robustly segment new images that were not in the training set. Comparisons with state-of-the-art segmentation methods evaluated on ground truth manual labels indicate that the proposed method works well for generating accurate lesion segmentations without the need for manual annotations.) <|cite_end|> <|cite_start|> (Reference: f‐AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Anomaly Localization using Variational Auto-Encoders: An assumption-free automatic check of medical images for potentially overseen anomalies would be a valuable assistance for a radiologist. Deep learning and especially Variational Auto-Encoders (VAEs) have shown great potential in the unsupervised learning of data distributions. In principle, this allows for such a check and even the localization of parts in the image that are most suspicious. Currently, however, the reconstruction-based localization by design requires adjusting the model architecture to the specific problem looked at during evaluation. This contradicts the principle of building assumption-free models. We propose complementing the localization part with a term derived from the Kullback-Leibler (KL)-divergence. For validation, we perform a series of experiments on FashionMNIST as well as on a medical task including >1000 healthy and >250 brain tumor patients. Results show that the proposed formalism outperforms the state of the art VAE-based localization of anomalies across many hyperparameter settings and also shows a competitive max performance.) <|cite_end|> <|cite_start|> (Reference: Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Lesion Detection via Image Restoration with a Normative Prior: Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following the advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods.The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While being promising, the prior-projection step often leads to a large number of false positives. 
In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detect lesions pixel-wise using MAP estimation. The probabilistic model punishes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Detection of Lesions in Brain MRI using constrained adversarial auto-encoders: Lesion detection in brain Magnetic Resonance Images (MRI) remains a challenging task. State-of-the-art approaches are mostly based on supervised learning making use of large annotated datasets. Human beings, on the other hand, even non-experts, can detect most abnormal lesions after seeing a handful of healthy brain images. Replicating this capability of using prior information on the appearance of healthy brain structure to detect lesions can help computers achieve human level abnormality detection, specifically reducing the need for numerous labeled examples and bettering generalization of previously unseen lesions. To this end, we study detection of lesion regions in an unsupervised manner by learning data distribution of brain MRI of healthy subjects using auto-encoder based methods. We hypothesize that one of the main limitations of the current models is the lack of consistency in latent representation. We propose a simple yet effective constraint that helps mapping of an image bearing lesion close to its corresponding healthy image in the latent space. We use the Human Connectome Project dataset to learn distribution of healthy-appearing brain MRI and report improved detection, in terms of AUC, of the lesions in the BRATS challenge dataset.) <|cite_end|>, \textit{feature-modeling} <|cite_start|> (Reference: DFR: Deep Feature Reconstruction for Unsupervised Anomaly Segmentation: Automatic detecting anomalous regions in images of objects or textures without priors of the anomalies is challenging, especially when the anomalies appear in very small areas of the images, making difficult-to-detect visual variations, such as defects on manufacturing products. This paper proposes an effective unsupervised anomaly segmentation approach that can detect and segment out the anomalies in small and confined regions of images. Concretely, we develop a multi-scale regional feature generator that can generate multiple spatial context-aware representations from pre-trained deep convolutional networks for every subregion of an image. The regional representations not only describe the local characteristics of corresponding regions but also encode their multiple spatial context information, making them discriminative and very beneficial for anomaly detection. Leveraging these descriptive regional features, we then design a deep yet efficient convolutional autoencoder and detect anomalous regions within images via fast feature reconstruction. Our method is simple yet effective and efficient. It advances the state-of-the-art performances on several benchmark datasets and shows great potential for real applications.) 
<|cite_end|> <|cite_start|> (Reference: Unsupervised Anomaly Localization with Structural Feature-Autoencoders: Unsupervised Anomaly Detection has become a popular method to detect pathologies in medical images as it does not require supervision or labels for training. Most commonly, the anomaly detection model generates a "normal" version of an input image, and the pixel-wise $l^p$-difference of the two is used to localize anomalies. However, large residuals often occur due to imperfect reconstruction of the complex anatomical structures present in most medical images. This method also fails to detect anomalies that are not characterized by large intensity differences to the surrounding tissue. We propose to tackle this problem using a feature-mapping function that transforms the input intensity images into a space with multiple channels where anomalies can be detected along different discriminative feature maps extracted from the original image. We then train an Autoencoder model in this space using structural similarity loss that does not only consider differences in intensity but also in contrast and structure. Our method significantly increases performance on two medical data sets for brain MRI. Code and experiments are available at https://github.com/FeliMe/feature-autoencoder) <|cite_end|> <|cite_start|> (Reference: Student-Teacher Feature Pyramid Matching for Anomaly Detection: Anomaly detection is a challenging task and usually formulated as an one-class learning problem for the unexpectedness of anomalies. This paper proposes a simple yet powerful approach to this issue, which is implemented in the student-teacher framework for its advantages but substantially extends it in terms of both accuracy and efficiency. Given a strong model pre-trained on image classification as the teacher, we distill the knowledge into a single student network with the identical architecture to learn the distribution of anomaly-free images and this one-step transfer preserves the crucial clues as much as possible. Moreover, we integrate the multi-scale feature matching strategy into the framework, and this hierarchical feature matching enables the student network to receive a mixture of multi-level knowledge from the feature pyramid under better supervision, thus allowing to detect anomalies of various sizes. The difference between feature pyramids generated by the two networks serves as a scoring function indicating the probability of anomaly occurring. Due to such operations, our approach achieves accurate and fast pixel-level anomaly detection. Very competitive results are delivered on the MVTec anomaly detection dataset, superior to the state of the art ones.) <|cite_end|> <|cite_start|> (Reference: Anomaly Detection via Reverse Distillation from One-Class Embedding: Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD).The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD. However, using similar or identical architectures to build the teacher and student models in previous studies hinders the diversity of anomalous representations. To tackle this problem, we propose a novel T-S model consisting of a teacher encoder and a student decoder and introduce a simple yet effective "reverse distillation" paradigm accordingly. 
Instead of receiving raw images directly, the student network takes teacher model's one-class embedding as input and targets to restore the teacher's multiscale representations. Inherently, knowledge distillation in this study starts from abstract, high-level presentations to low-level features. In addition, we introduce a trainable one-class bottleneck embedding (OCBE) module in our T-S model. The obtained compact embedding effectively preserves essential information on normal patterns, but abandons anomaly perturbations. Extensive experimentation on AD and one-class novelty detection benchmarks shows that our method surpasses SOTA performance, demonstrating our proposed approach's effectiveness and generalizability.) <|cite_end|> <|cite_start|> (Reference: Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings: We introduce a powerful student-teacher framework for the challenging problem of unsupervised anomaly detection and pixel-precise anomaly segmentation in high-resolution images. Student networks are trained to regress the output of a descriptive teacher network that was pretrained on a large dataset of patches from natural images. This circumvents the need for prior data annotation. Anomalies are detected when the outputs of the student networks differ from that of the teacher network. This happens when they fail to generalize outside the manifold of anomaly-free training data. The intrinsic uncertainty in the student networks is used as an additional scoring function that indicates anomalies. We compare our method to a large number of existing deep learning based methods for unsupervised anomaly detection. Our experiments demonstrate improvements over state-of-the-art methods on a number of real-world datasets, including the recently introduced MVTec Anomaly Detection dataset that was specifically designed to benchmark anomaly segmentation algorithms.) <|cite_end|> <|cite_start|> (Reference: CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows: Unsupervised anomaly detection with localization has many practical applications when labeling is infeasible and, moreover, when anomaly examples are completely missing in the train data. While recently proposed models for such data setup achieve high accuracy metrics, their complexity is a limiting factor for real-time processing. In this paper, we propose a real-time model and analytically derive its relationship to prior methods. Our CFLOW-AD model is based on a conditional normalizing flow frame- work adopted for anomaly detection with localization. In particular, CFLOW-AD consists of a discriminatively pretrained encoder followed by a multi-scale generative de- coders where the latter explicitly estimate likelihood of the encoded features. Our approach results in a computationally and memory-efficient model: CFLOW-AD is faster and smaller by a factor of 10× than prior state-of-the-art with the same input setting. Our experiments on the MVTec dataset show that CFLOW-AD outperforms previous methods by 0.36% AUROC in detection task, by 1.12% AUROC and 2.5% AUPRO in localization task, respectively. We open-source our code with fully reproducible experiments1.) <|cite_end|> <|cite_start|> (Reference: FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows: Unsupervised anomaly detection and localization is crucial to the practical application when collecting and labeling sufficient anomaly data is infeasible. 
Most existing representation-based approaches extract normal image features with a deep convolutional neural network and characterize the corresponding distribution through non-parametric distribution estimation methods. The anomaly score is calculated by measuring the distance between the feature of the test image and the estimated distribution. However, current methods can not effectively map image features to a tractable base distribution and ignore the relationship between local and global features which are important to identify anomalies. To this end, we propose FastFlow implemented with 2D normalizing flows and use it as the probability distribution estimator. Our FastFlow can be used as a plug-in module with arbitrary deep feature extractors such as ResNet and vision transformer for unsupervised anomaly detection and localization. In training phase, FastFlow learns to transform the input visual feature into a tractable distribution and obtains the likelihood to recognize anomalies in inference phase. Extensive experimental results on the MVTec AD dataset show that FastFlow surpasses previous state-of-the-art methods in terms of accuracy and inference efficiency with various backbone networks. Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency.) <|cite_end|> <|cite_start|> (Reference: Modeling the Distribution of Normal Data in Pre-Trained Deep Features for Anomaly Detection: Anomaly Detection (AD) in images is a fundamental computer vision problem and refers to identifying images and image substructures that deviate significantly from the norm. Popular AD algorithms commonly try to learn a model of normality from scratch using task specific datasets, but are limited to semi-supervised approaches employing mostly normal data due to the inaccessibility of anomalies on a large scale combined with the ambiguous nature of anomaly appearance. We follow an alternative approach and demonstrate that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality and detect even subtle anomalies in a transfer learning setting. Our model of normality is established by fitting a multivariate Gaussian (MVG) to deep feature representations of classification networks trained on ImageNet using normal data only. By subsequently applying the Mahalanobis distance as the anomaly score we outperform the current state of the art on the public MVTec AD dataset, achieving an AUROC value of $95.8 \pm 1.2$ (mean $\pm$ SEM) over all 15 classes. We further investigate why the learned representations are discriminative to the AD task using Principal Component Analysis. We find that the principal components containing little variance in normal data are the ones crucial for discriminating between normal and anomalous instances. This gives a possible explanation to the often sub-par performance of AD approaches trained from scratch using normal data only. By selectively fitting a MVG to these most relevant components only, we are able to further reduce model complexity while retaining AD performance. We also investigate setting the working point by selecting acceptable False Positive Rate thresholds based on the MVG assumption. 
Code available at https://github.com/ORippler/gaussian-ad-mvtec) <|cite_end|> <|cite_start|> (Reference: PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization: We present a new framework for Patch Distribution Modeling, PaDiM, to concurrently detect and localize anomalies in images in a one-class learning setting. PaDiM makes use of a pretrained convolutional neural network (CNN) for patch embedding, and of multivariate Gaussian distributions to get a probabilistic representation of the normal class. It also exploits correlations between the different semantic levels of CNN to better localize anomalies. PaDiM outperforms current state-of-the-art approaches for both anomaly detection and localization on the MVTec AD and STC datasets. To match real-world visual industrial inspection, we extend the evaluation protocol to assess performance of anomaly localization algorithms on non-aligned dataset. The state-of-the-art performance and low complexity of PaDiM make it a good candidate for many industrial applications.) <|cite_end|> <|cite_start|> (Reference: PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation: Anomaly detection methods require high-quality features. In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pretrained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected, but unreported result, that combining pretrained features with simple anomaly detection and segmentation methods convincingly outperforms, much more complex, state-of-the-art methods. In order to obtain further performance gains in anomaly detection, we adapt pretrained features to the target distribution. Although transfer learning methods are well established in multi-class classification problems, the one-class classification (OCC) setting is not as well explored. It turns out that naive adaptation methods, which typically work well in supervised learning, often result in catastrophic collapse (feature deterioration) and reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates using specialized architectures, but this limits the adaptation performance gain. We propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration ii) elastic regularization inspired by continual learning. Our method, PANDA, outperforms the state-of-the-art in the OCC, outlier exposure and anomaly segmentation settings by large margins.) <|cite_end|> <|cite_start|> (Reference: Sub-Image Anomaly Detection with Deep Pyramid Correspondences: Nearest neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images. A limitation of kNN methods is the lack of segmentation map describing where the anomaly lies inside the image. In this work we present a novel anomaly segmentation approach based on alignment between an anomalous image and a constant number of the similar normal images. Our method, Semantic Pyramid Anomaly Detection (SPADE) uses correspondences based on a multi-resolution feature pyramid. SPADE is shown to achieve state-of-the-art performance on unsupervised anomaly detection and localization while requiring virtually no training time.) 
<|cite_end|>, \textit{attention-based} <|cite_start|> (Reference: Attention Guided Anomaly Localization in Images: Anomaly localization is an important problem in computer vision which involves localizing anomalous regions within images with applications in industrial inspection, surveillance, and medical imaging. This task is challenging due to the small sample size and pixel coverage of the anomaly in real-world scenarios. Most prior works need to use anomalous training images to compute a class-specific threshold to localize anomalies. Without the need of anomalous training images, we propose Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes the anomaly with a convolutional latent variable to preserve the spatial information. In the unsupervised setting, we propose an attention expansion loss where we encourage CAVGA to focus on all normal regions in the image. Furthermore, in the weakly-supervised setting we propose a complementary guided attention loss, where we encourage the attention map to focus on all normal regions while minimizing the attention map corresponding to anomalous regions in the image. CAVGA outperforms the state-of-the-art (SOTA) anomaly localization methods on MVTec Anomaly Detection (MVTAD), modified ShanghaiTech Campus (mSTC) and Large-scale Attention based Glaucoma (LAG) datasets in the unsupervised setting and when using only 2% anomalous images in the weakly-supervised setting. CAVGA also outperforms SOTA anomaly detection methods on the MNIST, CIFAR-10, Fashion-MNIST, MVTAD, mSTC and LAG datasets.) <|cite_end|> <|cite_start|> (Reference: Towards Visually Explaining Variational Autoencoders: Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g. variational autoencoders (VAE) is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset.) <|cite_end|> <|cite_start|> (Reference: Constrained unsupervised anomaly segmentation: Current unsupervised anomaly localization approaches rely on generative models to learn the distribution of normal images, which is later used to identify potential anomalous regions derived from errors on the reconstructed images. However, a main limitation of nearly all prior literature is the need of employing anomalous images to set a class-specific threshold to locate the anomalies. This limits their usability in realistic scenarios, where only normal data is typically accessible. 
Despite this major drawback, only a handful of works have addressed this limitation, by integrating supervision on attention maps during training. In this work, we propose a novel formulation that does not require accessing images with abnormalities to define the threshold. Furthermore, and in contrast to very recent work, the proposed constraint is formulated in a more principled manner, leveraging well-known knowledge in constrained optimization. In particular, the equality constraint on the attention maps in prior work is replaced by an inequality constraint, which allows more flexibility. In addition, to address the limitations of penalty-based functions we employ an extension of the popular log-barrier methods to handle the constraint. Last, we propose an alternative regularization term that maximizes the Shannon entropy of the attention maps, reducing the amount of hyperparameters of the proposed model. Comprehensive experiments on two publicly available datasets on brain lesion segmentation demonstrate that the proposed approach substantially outperforms relevant literature, establishing new state-of-the-art results for unsupervised lesion segmentation, and without the need to access anomalous images.) <|cite_end|>, and \textit{self-supervised anomaly detection} <|cite_start|> (Reference: Detecting Outliers with Poisson Image Interpolation: Supervised learning of every possible pathology is unrealistic for many primary care applications like health screening. Image anomaly detection methods that learn normal appearance from only healthy data have shown promising results recently. We propose an alternative to image reconstruction-based and image embedding-based methods and propose a new self-supervised method to tackle pathological anomaly detection. Our approach originates in the foreign patch interpolation (FPI) strategy that has shown superior performance on brain MRI and abdominal CT data. We propose to use a better patch interpolation strategy, Poisson image interpolation (PII), which makes our method suitable for applications in challenging data regimes. PII outperforms state-of-the-art methods by a good margin when tested on surrogate tasks like identifying common lung anomalies in chest X-rays or hypo-plastic left heart syndrome in prenatal, fetal cardiac ultrasound images. Code available at https://github.com/jemtan/PII.) <|cite_end|> <|cite_start|> (Reference: CutPaste: Self-Supervised Learning for Anomaly Detection and Localization: We aim at constructing a high performance model for defect detection that detects unknown anomalous patterns of an image without anomalous data. To this end, we propose a two-stage framework for building anomaly detectors using normal training data only. We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations. We learn representations by classifying normal data from the CutPaste, a simple data augmentation strategy that cuts an image patch and pastes at a random location of a large image. Our empirical study on MVTec anomaly detection dataset demonstrates the proposed algorithm is general to be able to detect various types of real-world defects. We bring the improvement upon previous arts by 3.1 AUCs when learning representations from scratch. By transfer learning on pretrained representations on ImageNet, we achieve a new state-of-theart 96.6 AUC. 
Lastly, we extend the framework to learn and extract representations from patches to allow localizing defective areas without annotations during training.) <|cite_end|> <|cite_start|> (Reference: {Denoising autoencoders for unsupervised anomaly detection in brain MRI: Pathological brain lesions exhibit diverse appearance in brain images, making it difficult to train supervised detection solutions due to the lack of comprehensive data and annotations. Thus, in this work we tackle unsupervised anomaly detection, using only healthy data for training with the aim of detecting unseen anomalies at test time. Many current approaches employ autoencoders with restrictive architectures (i.e. containing information bottlenecks) that tend to give poor reconstructions of not only the anomalous but also the normal parts of the brain. Instead, we investigate classical denoising autoencoder models that do not require bottlenecks and can employ skip connections to give high resolution fidelity. We design a simple noise generation method of upscaling low-resolution noise that enables high-quality reconstructions. We find that with appropriate noise generation, denoising autoencoder reconstruction errors generalize to hyperintense lesion segmentation and reach state of the art performance for unsupervised tumor detection in brain MRI data, beating more complex methods such as variational autoencoders. We believe this provides a strong and easy-to-implement baseline for further research into unsupervised anomaly detection.) <|cite_end|> <|cite_start|> (Reference: Patch SVDD: Patch-level SVDD for Anomaly Detection and Segmentation: In this paper, we address the problem of image anomaly detection and segmentation. Anomaly detection involves making a binary decision as to whether an input image contains an anomaly, and anomaly segmentation aims to locate the anomaly on the pixel level. Support vector data description (SVDD) is a long-standing algorithm used for an anomaly detection, and we extend its deep learning variant to the patch-based method using self-supervised learning. This extension enables anomaly segmentation and improves detection performance. As a result, anomaly detection and segmentation performances measured in AUROC on MVTec AD dataset increased by 9.8% and 7.0%, respectively, compared to the previous state-of-the-art methods. Our results indicate the efficacy of the proposed method and its potential for industrial application. Detailed analysis of the proposed method offers insights regarding its behavior, and the code is available online.) <|cite_end|> <|cite_start|> (Reference: AutoSeg - Steering the Inductive Biases for Automatic Pathology Segmentation: ) <|cite_end|>methods.
\subsection{Image-reconstruction Methods}
The prevailing \textit{modus operandi} in UPD uses image-reconstruction models to model normality and detect deviations from it.
Vanilla <|cite_start|> (Reference: Unsupervised brain lesion segmentation from MRI using a convolutional autoencoder: Lesions that appear hyperintense in both Fluid Attenuated Inversion Recovery (FLAIR) and T2-weighted magnetic resonance images (MRIs) of the human brain are common in the brains of the elderly population and may be caused by ischemia or demyelination. Lesions are biomarkers for various neurodegenerative diseases, making accurate quantification of them important for both disease diagnosis and progression. Automatic lesion detection using supervised learning requires manually annotated images, which can often be impractical to acquire. Unsupervised lesion detection, on the other hand, does not require any manual delineation; however, these methods can be challenging to construct due to the variability in lesion load, placement of lesions, and voxel intensities. Here we present a novel approach to address this problem using a convolutional autoencoder, which learns to segment brain lesions as well as the white matter, gray matter, and cerebrospinal fluid by reconstructing FLAIR images as conical combinations of softmax layer outputs generated from the corresponding T1, T2, and FLAIR images. Some of the advantages of this model are that it accurately learns to segment lesions regardless of lesion load, and it can be used to quickly and robustly segment new images that were not in the training set. Comparisons with state-of-the-art segmentation methods evaluated on ground truth manual labels indicate that the proposed method works well for generating accurate lesion segmentations without the need for manual annotations.) <|cite_end|>or Variational Autoencoders <|cite_start|> (Reference: Unsupervised Anomaly Localization using Variational Auto-Encoders: An assumption-free automatic check of medical images for potentially overseen anomalies would be a valuable assistance for a radiologist. Deep learning and especially Variational Auto-Encoders (VAEs) have shown great potential in the unsupervised learning of data distributions. In principle, this allows for such a check and even the localization of parts in the image that are most suspicious. Currently, however, the reconstruction-based localization by design requires adjusting the model architecture to the specific problem looked at during evaluation. This contradicts the principle of building assumption-free models. We propose complementing the localization part with a term derived from the Kullback-Leibler (KL)-divergence. For validation, we perform a series of experiments on FashionMNIST as well as on a medical task including >1000 healthy and >250 brain tumor patients. Results show that the proposed formalism outperforms the state of the art VAE-based localization of anomalies across many hyperparameter settings and also shows a competitive max performance.) 
<|cite_end|>, Generative Adversarial Networks <|cite_start|> (Reference: f‐AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery: ) <|cite_end|>, or combinations and variations of these frameworks <|cite_start|> (Reference: Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Lesion Detection via Image Restoration with a Normative Prior: Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following the advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods.The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While being promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detect lesions pixel-wise using MAP estimation. The probabilistic model punishes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Detection of Lesions in Brain MRI using constrained adversarial auto-encoders: Lesion detection in brain Magnetic Resonance Images (MRI) remains a challenging task. State-of-the-art approaches are mostly based on supervised learning making use of large annotated datasets. Human beings, on the other hand, even non-experts, can detect most abnormal lesions after seeing a handful of healthy brain images. Replicating this capability of using prior information on the appearance of healthy brain structure to detect lesions can help computers achieve human level abnormality detection, specifically reducing the need for numerous labeled examples and bettering generalization of previously unseen lesions. To this end, we study detection of lesion regions in an unsupervised manner by learning data distribution of brain MRI of healthy subjects using auto-encoder based methods. We hypothesize that one of the main limitations of the current models is the lack of consistency in latent representation. We propose a simple yet effective constraint that helps mapping of an image bearing lesion close to its corresponding healthy image in the latent space. 
We use the Human Connectome Project dataset to learn distribution of healthy-appearing brain MRI and report improved detection, in terms of AUC, of the lesions in the BRATS challenge dataset.) <|cite_end|>have been explored.
During inference, such models use the residual $\mathbf{r} = \left|\mathbf{x} - \hat{\mathbf{x}}\right|$ between the input image $\mathbf{x}$ and its reconstruction $\hat{\mathbf{x}}$ to generate a saliency map.
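To make this residual-based scoring concrete, the following minimal sketch (Python with NumPy; the reconstruction itself is assumed to come from some already-trained model and is indicated only as a comment) computes the pixel-wise residual map and a simple image-level anomaly score:
\begin{verbatim}
import numpy as np

def residual_anomaly_map(x, x_hat):
    """Pixel-wise residual r = |x - x_hat|, used as a saliency map.

    x, x_hat: arrays of identical shape, e.g. (H, W) or (C, H, W).
    Returns the residual map and a scalar image-level score.
    """
    r = np.abs(x - x_hat)  # deviation of the input from its reconstruction
    return r, float(r.mean())

# Hypothetical usage with some trained reconstruction model `model`:
# x_hat = model(x)
# saliency, score = residual_anomaly_map(x, x_hat)
\end{verbatim}
Thresholding the residual map then yields a pixel-level anomaly segmentation.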
As a novel extension of the reconstruction paradigm, in the restoration approach <|cite_start|> (Reference: Unsupervised Lesion Detection via Image Restoration with a Normative Prior: Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following the advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods. The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While being promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detect lesions pixel-wise using MAP estimation. The probabilistic model punishes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration.) <|cite_end|>, an input image is iteratively updated until its anomalous regions are replaced with quasi-healthy ones.
Baur \textit{et al.} <|cite_start|> (Reference: Autoencoders for Unsupervised Anomaly Segmentation in Brain MR Images: A Comparative Study: Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This allows to spot abnormal structures from erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) relieves from the need of vast amounts of manually segmented training data---a necessity for and pitfall of current supervised Deep Learning---and ii) theoretically allows to detect arbitrary, even rare pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because i) they are evaluated against different datasets and different pathologies, ii) use different image resolutions and iii) different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions like i) how many healthy training subjects are needed to model normality and ii) if the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.) <|cite_end|>performed a systematic evaluation of a collection of image-reconstruction UPD algorithms in the context of brain MR imaging. We refer the interested reader to their work, for a thorough explanation of the above concepts.
\subsection{Feature-modeling Methods}
Newly developed algorithms, mainly from UAD for industrial inspection, have started to deviate from the image-reconstruction norm. Instead of working on the image directly, \textit{feature-modeling} methods leverage frozen, pre-trained encoders to first transform each input sample into an alternative, semantically rich representation, which they then model with a variety of techniques to perform anomaly detection.
A popular research direction adopts the student-teacher learning paradigm <|cite_start|> (Reference: Student-Teacher Feature Pyramid Matching for Anomaly Detection: Anomaly detection is a challenging task and usually formulated as an one-class learning problem for the unexpectedness of anomalies. This paper proposes a simple yet powerful approach to this issue, which is implemented in the student-teacher framework for its advantages but substantially extends it in terms of both accuracy and efficiency. Given a strong model pre-trained on image classification as the teacher, we distill the knowledge into a single student network with the identical architecture to learn the distribution of anomaly-free images and this one-step transfer preserves the crucial clues as much as possible. Moreover, we integrate the multi-scale feature matching strategy into the framework, and this hierarchical feature matching enables the student network to receive a mixture of multi-level knowledge from the feature pyramid under better supervision, thus allowing to detect anomalies of various sizes. The difference between feature pyramids generated by the two networks serves as a scoring function indicating the probability of anomaly occurring. Due to such operations, our approach achieves accurate and fast pixel-level anomaly detection. Very competitive results are delivered on the MVTec anomaly detection dataset, superior to the state of the art ones.) <|cite_end|> <|cite_start|> (Reference: Anomaly Detection via Reverse Distillation from One-Class Embedding: Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD).The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD. However, using similar or identical architectures to build the teacher and student models in previous studies hinders the diversity of anomalous representations. To tackle this problem, we propose a novel T-S model consisting of a teacher encoder and a student decoder and introduce a simple yet effective "reverse distillation" paradigm accordingly. Instead of receiving raw images directly, the student network takes teacher model's one-class embedding as input and targets to restore the teacher's multiscale representations. Inherently, knowledge distillation in this study starts from abstract, high-level presentations to low-level features. In addition, we introduce a trainable one-class bottleneck embedding (OCBE) module in our T-S model. The obtained compact embedding effectively preserves essential information on normal patterns, but abandons anomaly perturbations. Extensive experimentation on AD and one-class novelty detection benchmarks shows that our method surpasses SOTA performance, demonstrating our proposed approach's effectiveness and generalizability.) <|cite_end|> <|cite_start|> (Reference: Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings: We introduce a powerful student-teacher framework for the challenging problem of unsupervised anomaly detection and pixel-precise anomaly segmentation in high-resolution images. Student networks are trained to regress the output of a descriptive teacher network that was pretrained on a large dataset of patches from natural images. This circumvents the need for prior data annotation. Anomalies are detected when the outputs of the student networks differ from that of the teacher network. 
This happens when they fail to generalize outside the manifold of anomaly-free training data. The intrinsic uncertainty in the student networks is used as an additional scoring function that indicates anomalies. We compare our method to a large number of existing deep learning based methods for unsupervised anomaly detection. Our experiments demonstrate improvements over state-of-the-art methods on a number of real-world datasets, including the recently introduced MVTec Anomaly Detection dataset that was specifically designed to benchmark anomaly segmentation algorithms.) <|cite_end|>, distilling knowledge from the pre-trained encoder to a student network. By enforcing similarity in student and teacher activations during training, representation discrepancies can reveal anomalies during inference.
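Schematically, the inference-time scoring in these student-teacher approaches reduces to a per-location discrepancy between the two networks' activations. A minimal sketch (Python/NumPy; \texttt{teacher\_feats} and \texttt{student\_feats} are placeholder feature maps of shape (C, H, W) from a pre-trained teacher and a trained student):
\begin{verbatim}
import numpy as np

def st_discrepancy_map(teacher_feats, student_feats):
    """Per-location squared discrepancy between teacher and student features.

    teacher_feats, student_feats: arrays of shape (C, H, W).
    Large values indicate regions where the student fails to regress the
    teacher, i.e. regions that are likely anomalous.
    """
    diff = teacher_feats - student_feats
    return (diff ** 2).mean(axis=0)  # (H, W) anomaly map
\end{verbatim}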
Feature-modeling generative approaches are also gaining traction, with researchers using normalizing flow (NFLOW) networks to model normal embeddings in an attempt to estimate exact likelihoods <|cite_start|> (Reference: CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows: Unsupervised anomaly detection with localization has many practical applications when labeling is infeasible and, moreover, when anomaly examples are completely missing in the train data. While recently proposed models for such data setup achieve high accuracy metrics, their complexity is a limiting factor for real-time processing. In this paper, we propose a real-time model and analytically derive its relationship to prior methods. Our CFLOW-AD model is based on a conditional normalizing flow frame- work adopted for anomaly detection with localization. In particular, CFLOW-AD consists of a discriminatively pretrained encoder followed by a multi-scale generative de- coders where the latter explicitly estimate likelihood of the encoded features. Our approach results in a computationally and memory-efficient model: CFLOW-AD is faster and smaller by a factor of 10× than prior state-of-the-art with the same input setting. Our experiments on the MVTec dataset show that CFLOW-AD outperforms previous methods by 0.36% AUROC in detection task, by 1.12% AUROC and 2.5% AUPRO in localization task, respectively. We open-source our code with fully reproducible experiments1.) <|cite_end|> <|cite_start|> (Reference: FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows: Unsupervised anomaly detection and localization is crucial to the practical application when collecting and labeling sufficient anomaly data is infeasible. Most existing representation-based approaches extract normal image features with a deep convolutional neural network and characterize the corresponding distribution through non-parametric distribution estimation methods. The anomaly score is calculated by measuring the distance between the feature of the test image and the estimated distribution. However, current methods can not effectively map image features to a tractable base distribution and ignore the relationship between local and global features which are important to identify anomalies. To this end, we propose FastFlow implemented with 2D normalizing flows and use it as the probability distribution estimator. Our FastFlow can be used as a plug-in module with arbitrary deep feature extractors such as ResNet and vision transformer for unsupervised anomaly detection and localization. In training phase, FastFlow learns to transform the input visual feature into a tractable distribution and obtains the likelihood to recognize anomalies in inference phase. Extensive experimental results on the MVTec AD dataset show that FastFlow surpasses previous state-of-the-art methods in terms of accuracy and inference efficiency with various backbone networks. Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency.) <|cite_end|>.
Even simpler statistical baselines that do not require gradient optimization have achieved SOTA detection and segmentation performance. Such methods act directly on pooled features and often employ Multivariate Gaussian <|cite_start|> (Reference: Modeling the Distribution of Normal Data in Pre-Trained Deep Features for Anomaly Detection: Anomaly Detection (AD) in images is a fundamental computer vision problem and refers to identifying images and image substructures that deviate significantly from the norm. Popular AD algorithms commonly try to learn a model of normality from scratch using task specific datasets, but are limited to semi-supervised approaches employing mostly normal data due to the inaccessibility of anomalies on a large scale combined with the ambiguous nature of anomaly appearance. We follow an alternative approach and demonstrate that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality and detect even subtle anomalies in a transfer learning setting. Our model of normality is established by fitting a multivariate Gaussian (MVG) to deep feature representations of classification networks trained on ImageNet using normal data only. By subsequently applying the Mahalanobis distance as the anomaly score we outperform the current state of the art on the public MVTec AD dataset, achieving an AUROC value of $95.8 \pm 1.2$ (mean $\pm$ SEM) over all 15 classes. We further investigate why the learned representations are discriminative to the AD task using Principal Component Analysis. We find that the principal components containing little variance in normal data are the ones crucial for discriminating between normal and anomalous instances. This gives a possible explanation to the often sub-par performance of AD approaches trained from scratch using normal data only. By selectively fitting a MVG to these most relevant components only, we are able to further reduce model complexity while retaining AD performance. We also investigate setting the working point by selecting acceptable False Positive Rate thresholds based on the MVG assumption. Code available at https://github.com/ORippler/gaussian-ad-mvtec) <|cite_end|> <|cite_start|> (Reference: PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization: We present a new framework for Patch Distribution Modeling, PaDiM, to concurrently detect and localize anomalies in images in a one-class learning setting. PaDiM makes use of a pretrained convolutional neural network (CNN) for patch embedding, and of multivariate Gaussian distributions to get a probabilistic representation of the normal class. It also exploits correlations between the different semantic levels of CNN to better localize anomalies. PaDiM outperforms current state-of-the-art approaches for both anomaly detection and localization on the MVTec AD and STC datasets. To match real-world visual industrial inspection, we extend the evaluation protocol to assess performance of anomaly localization algorithms on non-aligned dataset. The state-of-the-art performance and low complexity of PaDiM make it a good candidate for many industrial applications.) <|cite_end|>or K-nearest neighbor <|cite_start|> (Reference: PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation: Anomaly detection methods require high-quality features. 
In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pretrained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected, but unreported result, that combining pretrained features with simple anomaly detection and segmentation methods convincingly outperforms, much more complex, state-of-the-art methods. In order to obtain further performance gains in anomaly detection, we adapt pretrained features to the target distribution. Although transfer learning methods are well established in multi-class classification problems, the one-class classification (OCC) setting is not as well explored. It turns out that naive adaptation methods, which typically work well in supervised learning, often result in catastrophic collapse (feature deterioration) and reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates using specialized architectures, but this limits the adaptation performance gain. We propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration ii) elastic regularization inspired by continual learning. Our method, PANDA, outperforms the state-of-the-art in the OCC, outlier exposure and anomaly segmentation settings by large margins.) <|cite_end|> <|cite_start|> (Reference: Sub-Image Anomaly Detection with Deep Pyramid Correspondences: Nearest neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images. A limitation of kNN methods is the lack of segmentation map describing where the anomaly lies inside the image. In this work we present a novel anomaly segmentation approach based on alignment between an anomalous image and a constant number of the similar normal images. Our method, Semantic Pyramid Anomaly Detection (SPADE) uses correspondences based on a multi-resolution feature pyramid. SPADE is shown to achieve state-of-the-art performance on unsupervised anomaly detection and localization while requiring virtually no training time.) <|cite_end|>modeling to capture normality. Impressive in their simplicity and detection capabilities, these approaches often exhibit long inference times, limiting their practical applications.
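As an illustration of these statistical feature-modeling baselines, a minimal Mahalanobis-distance sketch (Python/NumPy; \texttt{train\_feats} and \texttt{feat} are placeholders for pooled features produced by a frozen, pre-trained encoder) could look as follows:
\begin{verbatim}
import numpy as np

def fit_gaussian(train_feats, eps=1e-6):
    """Fit a multivariate Gaussian to pooled features of normal images.

    train_feats: array of shape (N, D), one D-dimensional feature per image.
    Returns the mean and the (regularized) inverse covariance.
    """
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + eps * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(feat, mu, cov_inv):
    """Anomaly score of one test feature: its Mahalanobis distance."""
    d = feat - mu
    return float(np.sqrt(d @ cov_inv @ d))
\end{verbatim}
Fitting one such Gaussian per spatial location of a feature map, rather than a single image-level one, recovers a PaDiM-style localization variant.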
Lastly, the image-reconstruction approach has been successfully applied to feature-embeddings <|cite_start|> (Reference: Unsupervised Anomaly Localization with Structural Feature-Autoencoders: Unsupervised Anomaly Detection has become a popular method to detect pathologies in medical images as it does not require supervision or labels for training. Most commonly, the anomaly detection model generates a "normal" version of an input image, and the pixel-wise $l^p$-difference of the two is used to localize anomalies. However, large residuals often occur due to imperfect reconstruction of the complex anatomical structures present in most medical images. This method also fails to detect anomalies that are not characterized by large intensity differences to the surrounding tissue. We propose to tackle this problem using a feature-mapping function that transforms the input intensity images into a space with multiple channels where anomalies can be detected along different discriminative feature maps extracted from the original image. We then train an Autoencoder model in this space using structural similarity loss that does not only consider differences in intensity but also in contrast and structure. Our method significantly increases performance on two medical data sets for brain MRI. Code and experiments are available at https://github.com/FeliMe/feature-autoencoder) <|cite_end|>.
\subsection{Attention-based Methods}
When learning normality with a machine learning model, using attention maps from layer-activations or the gradient of the normality formulation is a natural fit to extract localization information from the trained model. This principle was applied by Zimmerer \textit{et al.} in one of their model variants in <|cite_start|> (Reference: Unsupervised Anomaly Localization using Variational Auto-Encoders: An assumption-free automatic check of medical images for potentially overseen anomalies would be a valuable assistance for a radiologist. Deep learning and especially Variational Auto-Encoders (VAEs) have shown great potential in the unsupervised learning of data distributions. In principle, this allows for such a check and even the localization of parts in the image that are most suspicious. Currently, however, the reconstruction-based localization by design requires adjusting the model architecture to the specific problem looked at during evaluation. This contradicts the principle of building assumption-free models. We propose complementing the localization part with a term derived from the Kullback-Leibler (KL)-divergence. For validation, we perform a series of experiments on FashionMNIST as well as on a medical task including >1000 healthy and >250 brain tumor patients. Results show that the proposed formalism outperforms the state of the art VAE-based localization of anomalies across many hyperparameter settings and also shows a competitive max performance.) <|cite_end|>, and by Liu \textit{et al.}, Venkataramanan \textit{et al.}, and Silva-Rodriguez \textit{et al.} <|cite_start|> (Reference: Towards Visually Explaining Variational Autoencoders: Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g. variational autoencoders (VAE) is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset.) <|cite_end|> <|cite_start|> (Reference: Attention Guided Anomaly Localization in Images: Anomaly localization is an important problem in computer vision which involves localizing anomalous regions within images with applications in industrial inspection, surveillance, and medical imaging. This task is challenging due to the small sample size and pixel coverage of the anomaly in real-world scenarios. Most prior works need to use anomalous training images to compute a class-specific threshold to localize anomalies. 
Without the need of anomalous training images, we propose Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes the anomaly with a convolutional latent variable to preserve the spatial information. In the unsupervised setting, we propose an attention expansion loss where we encourage CAVGA to focus on all normal regions in the image. Furthermore, in the weakly-supervised setting we propose a complementary guided attention loss, where we encourage the attention map to focus on all normal regions while minimizing the attention map corresponding to anomalous regions in the image. CAVGA outperforms the state-of-the-art (SOTA) anomaly localization methods on MVTec Anomaly Detection (MVTAD), modified ShanghaiTech Campus (mSTC) and Large-scale Attention based Glaucoma (LAG) datasets in the unsupervised setting and when using only 2% anomalous images in the weakly-supervised setting. CAVGA also outperforms SOTA anomaly detection methods on the MNIST, CIFAR-10, Fashion-MNIST, MVTAD, mSTC and LAG datasets.) <|cite_end|> <|cite_start|> (Reference: Constrained unsupervised anomaly segmentation: Current unsupervised anomaly localization approaches rely on generative models to learn the distribution of normal images, which is later used to identify potential anomalous regions derived from errors on the reconstructed images. However, a main limitation of nearly all prior literature is the need of employing anomalous images to set a class-specific threshold to locate the anomalies. This limits their usability in realistic scenarios, where only normal data is typically accessible. Despite this major drawback, only a handful of works have addressed this limitation, by integrating supervision on attention maps during training. In this work, we propose a novel formulation that does not require accessing images with abnormalities to define the threshold. Furthermore, and in contrast to very recent work, the proposed constraint is formulated in a more principled manner, leveraging well-known knowledge in constrained optimization. In particular, the equality constraint on the attention maps in prior work is replaced by an inequality constraint, which allows more flexibility. In addition, to address the limitations of penalty-based functions we employ an extension of the popular log-barrier methods to handle the constraint. Last, we propose an alternative regularization term that maximizes the Shannon entropy of the attention maps, reducing the amount of hyperparameters of the proposed model. Comprehensive experiments on two publicly available datasets on brain lesion segmentation demonstrate that the proposed approach substantially outperforms relevant literature, establishing new state-of-the-art results for unsupervised lesion segmentation, and without the need to access anomalous images.) <|cite_end|>in the form of GradCAM <|cite_start|> (Reference: Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization: We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting important regions in the image for predicting the concept. 
Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers, (2) CNNs used for structured outputs, (3) CNNs used in tasks with multimodal inputs or reinforcement learning, without any architectural changes or re-training. We combine Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes, (b) are robust to adversarial images, (c) outperform previous methods on localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, we show that even non-attention based models can localize inputs. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure if Grad-CAM helps users establish appropriate trust in predictions from models and show that Grad-CAM helps untrained users successfully discern a 'stronger' nodel from a 'weaker' one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo at http://gradcam.cloudcv.org, and a video at youtu.be/COjUB9Izk6E.) <|cite_end|>maps.
\subsection{Self-supervised Anomaly Detection Methods}
Self-supervised methods are gaining popularity in both the medical and industrial inspection fields, with researchers designing and solving pretext tasks on normal data to perform UAD.
One popular research direction utilizes pre-text tasks in order to initially perform representation learning, and then proceeds to model the representation distribution of normal instances <|cite_start|> (Reference: Patch SVDD: Patch-level SVDD for Anomaly Detection and Segmentation: In this paper, we address the problem of image anomaly detection and segmentation. Anomaly detection involves making a binary decision as to whether an input image contains an anomaly, and anomaly segmentation aims to locate the anomaly on the pixel level. Support vector data description (SVDD) is a long-standing algorithm used for an anomaly detection, and we extend its deep learning variant to the patch-based method using self-supervised learning. This extension enables anomaly segmentation and improves detection performance. As a result, anomaly detection and segmentation performances measured in AUROC on MVTec AD dataset increased by 9.8% and 7.0%, respectively, compared to the previous state-of-the-art methods. Our results indicate the efficacy of the proposed method and its potential for industrial application. Detailed analysis of the proposed method offers insights regarding its behavior, and the code is available online.) <|cite_end|> <|cite_start|> (Reference: CutPaste: Self-Supervised Learning for Anomaly Detection and Localization: We aim at constructing a high performance model for defect detection that detects unknown anomalous patterns of an image without anomalous data. To this end, we propose a two-stage framework for building anomaly detectors using normal training data only. We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations. We learn representations by classifying normal data from the CutPaste, a simple data augmentation strategy that cuts an image patch and pastes at a random location of a large image. Our empirical study on MVTec anomaly detection dataset demonstrates the proposed algorithm is general to be able to detect various types of real-world defects. We bring the improvement upon previous arts by 3.1 AUCs when learning representations from scratch. By transfer learning on pretrained representations on ImageNet, we achieve a new state-of-theart 96.6 AUC. Lastly, we extend the framework to learn and extract representations from patches to allow localizing defective areas without annotations during training.) <|cite_end|>. Such methods detect anomalies as outliers from the modeled distribution, premised on the notion that the trained model can generalize and effectively map anomalous inputs in out-of-distribution areas of the representation space.
Another approach involves the application of synthetic anomalies on otherwise normal training data and the adoption of common supervised techniques to explicitly <|cite_start|> (Reference: Detecting Outliers with Poisson Image Interpolation: Supervised learning of every possible pathology is unrealistic for many primary care applications like health screening. Image anomaly detection methods that learn normal appearance from only healthy data have shown promising results recently. We propose an alternative to image reconstruction-based and image embedding-based methods and propose a new self-supervised method to tackle pathological anomaly detection. Our approach originates in the foreign patch interpolation (FPI) strategy that has shown superior performance on brain MRI and abdominal CT data. We propose to use a better patch interpolation strategy, Poisson image interpolation (PII), which makes our method suitable for applications in challenging data regimes. PII outperforms state-of-the-art methods by a good margin when tested on surrogate tasks like identifying common lung anomalies in chest X-rays or hypo-plastic left heart syndrome in prenatal, fetal cardiac ultrasound images. Code available at https://github.com/jemtan/PII.) <|cite_end|> <|cite_start|> (Reference: AutoSeg - Steering the Inductive Biases for Automatic Pathology Segmentation: ) <|cite_end|>or implicitly <|cite_start|> (Reference: {Denoising autoencoders for unsupervised anomaly detection in brain MRI: Pathological brain lesions exhibit diverse appearance in brain images, making it difficult to train supervised detection solutions due to the lack of comprehensive data and annotations. Thus, in this work we tackle unsupervised anomaly detection, using only healthy data for training with the aim of detecting unseen anomalies at test time. Many current approaches employ autoencoders with restrictive architectures (i.e. containing information bottlenecks) that tend to give poor reconstructions of not only the anomalous but also the normal parts of the brain. Instead, we investigate classical denoising autoencoder models that do not require bottlenecks and can employ skip connections to give high resolution fidelity. We design a simple noise generation method of upscaling low-resolution noise that enables high-quality reconstructions. We find that with appropriate noise generation, denoising autoencoder reconstruction errors generalize to hyperintense lesion segmentation and reach state of the art performance for unsupervised tumor detection in brain MRI data, beating more complex methods such as variational autoencoders. We believe this provides a strong and easy-to-implement baseline for further research into unsupervised anomaly detection.) <|cite_end|>localize them.
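One common way to realize such synthetic anomalies is CutPaste-style patch pasting. The sketch below (Python/NumPy; \texttt{patch\_size} is an illustrative parameter) corrupts a normal image and returns the ground-truth mask on which a supervised-style localizer could be trained:
\begin{verbatim}
import numpy as np

def synthetic_patch_anomaly(img, patch_size=16, rng=None):
    """Cut a random patch and paste it at another location (CutPaste-style).

    img: array of shape (H, W) or (H, W, C) with H, W > patch_size.
    Returns the corrupted image and a binary mask of the synthetic anomaly.
    """
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    ys, xs = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    yd, xd = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
    out = img.copy()
    out[yd:yd + patch_size, xd:xd + patch_size] = \
        img[ys:ys + patch_size, xs:xs + patch_size]
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[yd:yd + patch_size, xd:xd + patch_size] = 1
    return out, mask
\end{verbatim}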
\subsection{Self-supervised Pre-training Methods}
Apart from anomaly detection, self-supervised methods have predominantly been used to
learn useful representations by performing pretext tasks on unlabeled data | [
"<|reference_start|> Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images: <|reference_end|>",
"<|reference_start|> Attention Guided Anomaly Localization in Images: Anomaly localization is an important problem in computer vision which involves localizing anomalous regions within images with applications in industrial inspection, surveillance, and medical imaging. This task is challenging due to the small sample size and pixel coverage of the anomaly in real-world scenarios. Most prior works need to use anomalous training images to compute a class-specific threshold to localize anomalies. Without the need of anomalous training images, we propose Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes the anomaly with a convolutional latent variable to preserve the spatial information. In the unsupervised setting, we propose an attention expansion loss where we encourage CAVGA to focus on all normal regions in the image. Furthermore, in the weakly-supervised setting we propose a complementary guided attention loss, where we encourage the attention map to focus on all normal regions while minimizing the attention map corresponding to anomalous regions in the image. CAVGA outperforms the state-of-the-art (SOTA) anomaly localization methods on MVTec Anomaly Detection (MVTAD), modified ShanghaiTech Campus (mSTC) and Large-scale Attention based Glaucoma (LAG) datasets in the unsupervised setting and when using only 2% anomalous images in the weakly-supervised setting. CAVGA also outperforms SOTA anomaly detection methods on the MNIST, CIFAR-10, Fashion-MNIST, MVTAD, mSTC and LAG datasets. <|reference_end|>",
"<|reference_start|> FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows: Unsupervised anomaly detection and localization is crucial to the practical application when collecting and labeling sufficient anomaly data is infeasible. Most existing representation-based approaches extract normal image features with a deep convolutional neural network and characterize the corresponding distribution through non-parametric distribution estimation methods. The anomaly score is calculated by measuring the distance between the feature of the test image and the estimated distribution. However, current methods can not effectively map image features to a tractable base distribution and ignore the relationship between local and global features which are important to identify anomalies. To this end, we propose FastFlow implemented with 2D normalizing flows and use it as the probability distribution estimator. Our FastFlow can be used as a plug-in module with arbitrary deep feature extractors such as ResNet and vision transformer for unsupervised anomaly detection and localization. In training phase, FastFlow learns to transform the input visual feature into a tractable distribution and obtains the likelihood to recognize anomalies in inference phase. Extensive experimental results on the MVTec AD dataset show that FastFlow surpasses previous state-of-the-art methods in terms of accuracy and inference efficiency with various backbone networks. Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency. <|reference_end|>",
"<|reference_start|> {Denoising autoencoders for unsupervised anomaly detection in brain MRI: Pathological brain lesions exhibit diverse appearance in brain images, making it difficult to train supervised detection solutions due to the lack of comprehensive data and annotations. Thus, in this work we tackle unsupervised anomaly detection, using only healthy data for training with the aim of detecting unseen anomalies at test time. Many current approaches employ autoencoders with restrictive architectures (i.e. containing information bottlenecks) that tend to give poor reconstructions of not only the anomalous but also the normal parts of the brain. Instead, we investigate classical denoising autoencoder models that do not require bottlenecks and can employ skip connections to give high resolution fidelity. We design a simple noise generation method of upscaling low-resolution noise that enables high-quality reconstructions. We find that with appropriate noise generation, denoising autoencoder reconstruction errors generalize to hyperintense lesion segmentation and reach state of the art performance for unsupervised tumor detection in brain MRI data, beating more complex methods such as variational autoencoders. We believe this provides a strong and easy-to-implement baseline for further research into unsupervised anomaly detection. <|reference_end|>"
] | [
0,
27,
48,
63
] | {"<|multi_cite_1_1|>": "ss-894499", "<|multi_cite_1_2|>": "arxiv-257970", "<|multi_cite_1_3|>": "ss-740623", "<|multi_cite_1_4|>": "ss-682863", "<|multi_cite_1_5|>": "arxiv-213152", "<|multi_cite_1_6|>": "arxiv-262696", "<|multi_cite_1_7|>": "ss-925863", "<|cite_2|>": "ss-682263", "<|cite_3|>": "arxiv-369788", "<|multi_cite_4_1|>": "arxiv-257970", "<|multi_cite_4_2|>": "ss-925863", "<|multi_cite_4_3|>": "ss-740623", "<|multi_cite_4_4|>": "arxiv-213152", "<|multi_cite_4_5|>": "ss-894499", "<|multi_cite_4_6|>": "arxiv-262696", "<|multi_cite_4_7|>": "ss-1528266", "<|multi_cite_5_1|>": "arxiv-309724", "<|multi_cite_5_2|>": "arxiv-441874", "<|multi_cite_5_3|>": "arxiv-325812", "<|multi_cite_5_4|>": "arxiv-394842", "<|multi_cite_5_5|>": "arxiv-232756", "<|multi_cite_5_6|>": "ss-1351643", "<|multi_cite_5_7|>": "arxiv-381038", "<|multi_cite_5_8|>": "arxiv-268224", "<|multi_cite_5_9|>": "arxiv-304212", "<|multi_cite_5_10|>": "arxiv-295703", "<|multi_cite_5_11|>": "arxiv-263667", "<|multi_cite_6_1|>": "arxiv-235197", "<|multi_cite_6_2|>": "arxiv-234658", "<|multi_cite_6_3|>": "arxiv-403069", "<|multi_cite_7_1|>": "arxiv-353281", "<|multi_cite_7_2|>": "arxiv-333160", "<|multi_cite_7_3|>": "ss-735621", "<|multi_cite_7_4|>": "arxiv-275114", "<|multi_cite_7_5|>": "ss-925864", "<|cite_8|>": "ss-925863", "<|cite_9|>": "arxiv-213152", "<|multi_cite_10_1|>": "ss-740623", "<|multi_cite_10_2|>": "ss-682863", "<|multi_cite_11_1|>": "ss-894499", "<|multi_cite_11_2|>": "arxiv-262696", "<|multi_cite_11_3|>": "ss-1528266", "<|cite_12|>": "arxiv-262696", "<|cite_13|>": "arxiv-257970", "<|multi_cite_14_1|>": "arxiv-325812", "<|multi_cite_14_2|>": "arxiv-394842", "<|multi_cite_14_3|>": "arxiv-232756", "<|multi_cite_15_1|>": "ss-1351643", "<|multi_cite_15_2|>": "arxiv-381038", "<|multi_cite_16_1|>": "arxiv-268224", "<|multi_cite_16_2|>": "arxiv-304212", "<|multi_cite_17_1|>": "arxiv-295703", "<|multi_cite_17_2|>": "arxiv-263667", "<|cite_18|>": "arxiv-441874", "<|cite_19|>": "arxiv-213152", "<|multi_cite_20_1|>": "arxiv-234658", "<|multi_cite_20_2|>": "arxiv-235197", "<|multi_cite_20_3|>": "arxiv-403069", "<|cite_21|>": "arxiv-107450", "<|multi_cite_22_1|>": "arxiv-275114", "<|multi_cite_22_2|>": "arxiv-333160", "<|multi_cite_23_1|>": "arxiv-353281", "<|multi_cite_23_2|>": "ss-925864", "<|cite_24|>": "ss-735621", "<|cite_25|>": "arxiv-248169", "<|multi_cite_26_1|>": "arxiv-160285", "<|multi_cite_26_2|>": "arxiv-212069", "<|multi_cite_27_1|>": "arxiv-248169", "<|multi_cite_27_2|>": "arxiv-234041"} |
2207.04507-0 | <|paper_start|> Title: Closing the Gap Between Directed Hopsets and Shortcut Sets
Abstract: Closing the Gap Between Directed Hopsets and Shortcut Sets: For an n-vertex directed graph $G = (V,E)$, a $\beta$-\emph{shortcut set} $H$ is a set of additional edges $H \subseteq V \times V$ such that $G \cup H$ has the same transitive closure as $G$, and for every pair $u,v \in V$, there is a $uv$-path in $G \cup H$ with at most $\beta$ edges. A natural generalization of shortcut sets to distances is a $(\beta,\epsilon)$-\emph{hopset} $H \subseteq V \times V$, where the requirement is that $G$ and $G \cup H$ have the same shortest-path distances, and for every $u,v \in V$, there is a $(1+\epsilon)$-approximate shortest path in $G \cup H$ with at most $\beta$ edges. There is a large literature on the tradeoff between the size of a shortcut set / hopset and the value of $\beta$. We highlight the most natural point on this tradeoff: what is the minimum value of $\beta$, such that for any graph $G$, there exists a $\beta$-shortcut set (or a $(\beta,\epsilon)$-hopset) with $O(n)$ edges? Not only is this a natural structural question in its own right, but shortcut sets / hopsets form the core of many distributed, parallel, and dynamic algorithms for reachability / shortest paths. Until very recently the best known upper bound was a folklore construction showing $\beta = O(n^{1/2})$, but in a breakthrough result Kogan and Parter [SODA 2022] improve this to $\beta = \tilde{O}(n^{1/3})$ for shortcut sets and $\tilde{O}(n^{2/5})$ for hopsets. Our result is to close the gap between shortcut sets and hopsets. That is, we show that for any graph $G$ and any fixed $\epsilon$ there is a $(\tilde{O}(n^{1/3}),\epsilon)$-hopset with $O(n)$ edges. More generally, we achieve a smooth tradeoff between hopset size and $\beta$ which exactly matches the tradeoff of Kogan and Parter for shortcut sets (up to polylog factors). Using a very recent black-box reduction of Kogan and Parter, our new hopset implies improved bounds for approximate distance preservers.
Introduction
Computing reachability and shortest paths in a directed graph is one of the most fundamental problems in graph algorithms. In a wide range of settings, these problems turn out to be easier if the path in question contains few edges (even if the edges have high weight). This dependency motivated the notion of \textit{shortcut sets}, introduced by Thorup <|cite_start|> (Reference: On Shortcutting Digraphs: ) <|cite_end|>. Given a graph $G$, the goal is to add a new set of edges $H$ such that $G \cup H$ has the same transitive closure as $G$, but for any pair of vertices $x,y$ there is an $xy$-path in $G \cup H$ with few edges. A generalization of a shortcut set is the notion of a \textit{hopset} $H$, where the goal is to preserve not just reachability but (weighted) distances. We now formally define both these notions.
\begin{definition}[Shortcut Set]
\label{def:shortcut-set}
Given an unweighted graph $G=(V,E)$, a $\beta$-shortcut set is a set of edges $H \subseteq V \times V$ such that for all $u,v \in V$ the following holds: {\bf (1)} $u$ can reach $v$ in $G \cup H$ if and only if $u$ can reach $v$ in $G$ and {\bf (2)} if $u$ can reach $v$ in $G \cup H$ then there is a $uv$-path in $G \cup H$ with at most $\beta$ edges.
\end{definition}
\begin{definition}[Hopset]
\label{def:hopset}
Given a graph $G=(V,E)$ with non-negative weights, a $(\beta,\eps)$-hopset is a set of edges $H \subseteq V \times V$ with non-negative weights such that for all $u,v \in V$ the following holds: {\bf (1)} $\dist_G(u,v) = \dist_{G \cup H}(u,v)$, where $\dist(u,v)$ refers to the weighted distance between $u$ and $v$; and {\bf (2)} if $u$ can reach $v$, then there is a $uv$-path $P_{uv}$ in $G \cup H$ such that $P_{uv}$ contains at most $\beta$ edges and the weight of $P_{uv}$ is at most $(1+\eps) \dist(u,v)$.
\end{definition}
\paragraph{Motivation}
There are two trivial extremes for hopset construction. Setting $H = \emptyset$ yields an $(n,0)$-hopset. On the other hand, if for every pair of vertices $u,v$ we add an edge to $H$ with $w(u,v) = \dist(u,v)$, then this yields a $(1,0)$-hopset with $n^2$ edges. The interesting question is thus to achieve a tradeoff between the number of edges and the parameter $\beta$. Perhaps the most natural setting to consider is the following: if we restrict $H$ to contain $O(n)$ edges, what is the minimum $\beta$ we can guarantee?
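For concreteness, the dense extreme described above can be written down directly: add one weighted edge per reachable pair, giving a $(1,0)$-hopset with up to $n^2$ edges. A sketch of this trivial construction (Python, assuming the networkx library is available; illustrative only, not part of the construction in this paper):
\begin{verbatim}
import networkx as nx

def trivial_one_hop_hopset(G, weight="weight"):
    """Trivial (1, 0)-hopset: one edge (u, v) of weight dist_G(u, v) for
    every reachable pair u != v.  Exact and one hop per pair, but it may
    contain Theta(n^2) edges."""
    H = []
    for u, dists in nx.all_pairs_dijkstra_path_length(G, weight=weight):
        for v, d in dists.items():
            if u != v:
                H.append((u, v, d))
    return H

# The opposite extreme, H = [], is an (n, 0)-hopset: distances are already
# preserved, but a shortest path may need up to n - 1 hops.
\end{verbatim}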
The above is a natural question in extremal graph theory in its own right. But hopsets also play a crucial role in computing reachability and shortest path in directed graphs in a wide variety of models of computation such as distributed, parallel, and dynamic algorithms <|cite_start|> (Reference: A Randomized Parallel Algorithm for Single-Source Shortest Paths: We give a randomized parallel algorithm for computing single-source shortest paths in weighted digraphs. We show that the exact shortest-path problem can be efficiently reduced to solving a series of approximate shortest-path subproblems. Our algorithm for the approximate shortest-path problem is based on the technique used by Ullman and Yannakakis in a parallel algorithm for breadth-first search.) <|cite_end|> <|cite_start|> (Reference: Sublinear-Time Decremental Algorithms for Single-Source Reachability and Shortest Paths on Directed Graphs: We consider dynamic algorithms for maintaining Single-Source Reachability (SSR) and approximate Single-Source Shortest Paths (SSSP) on $n$-node $m$-edge directed graphs under edge deletions (decremental algorithms). The previous fastest algorithm for SSR and SSSP goes back three decades to Even and Shiloach [JACM 1981]; it has $ O(1) $ query time and $ O (mn) $ total update time (i.e., linear amortized update time if all edges are deleted). This algorithm serves as a building block for several other dynamic algorithms. The question whether its total update time can be improved is a major, long standing, open problem. In this paper, we answer this question affirmatively. We obtain a randomized algorithm with an expected total update time of $ O(\min (m^{7/6} n^{2/3 + o(1)}, m^{3/4} n^{5/4 + o(1)}) ) = O (m n^{9/10 + o(1)}) $ for SSR and $(1+\epsilon)$-approximate SSSP if the edge weights are integers from $ 1 $ to $ W \leq 2^{\log^c{n}} $ and $ \epsilon \geq 1 / \log^c{n} $ for some constant $ c $. We also extend our algorithm to achieve roughly the same running time for Strongly Connected Components (SCC), improving the algorithm of Roditty and Zwick [FOCS 2002]. Our algorithm is most efficient for sparse and dense graphs. When $ m = \Theta(n) $ its running time is $ O (n^{1 + 5/6 + o(1)}) $ and when $ m = \Theta(n^2) $ its running time is $ O (n^{2 + 3/4 + o(1)}) $. For SSR we also obtain an algorithm that is faster for dense graphs and has a total update time of $ O ( m^{2/3} n^{4/3 + o(1)} + m^{3/7} n^{12/7 + o(1)}) $ which is $ O (n^{2 + 2/3}) $ when $ m = \Theta(n^2) $. All our algorithms have constant query time in the worst case and are correct with high probability against an oblivious adversary.) <|cite_end|> <|cite_start|> (Reference: Improved Algorithms for Decremental Single-Source Reachability on Directed Graphs: Recently we presented the first algorithm for maintaining the set of nodes reachable from a source node in a directed graph that is modified by edge deletions with $o(mn)$ total update time, where $m$ is the number of edges and $n$ is the number of nodes in the graph [Henzinger et al. STOC 2014]. The algorithm is a combination of several different algorithms, each for a different $m$ vs. $n$ trade-off. For the case of $m = \Theta(n^{1.5})$ the running time is $O(n^{2.47})$, just barely below $mn = \Theta(n^{2.5})$. In this paper we simplify the previous algorithm using new algorithmic ideas and achieve an improved running time of $\tilde O(\min(m^{7/6} n^{2/3}, m^{3/4} n^{5/4 + o(1)}, m^{2/3} n^{4/3+o(1)} + m^{3/7} n^{12/7+o(1)}))$. 
This gives, e.g., $O(n^{2.36})$ for the notorious case $m = \Theta(n^{1.5})$. We obtain the same upper bounds for the problem of maintaining the strongly connected components of a directed graph undergoing edge deletions. Our algorithms are correct with high probabililty against an oblivious adversary.) <|cite_end|> <|cite_start|> (Reference: A Faster Distributed Single-Source Shortest Paths Algorithm: We devise new algorithms for the single-source shortest paths (SSSP) problem with non-negative edge weights in the CONGEST model of distributed computing. While close-to-optimal solutions, in terms of the number of rounds spent by the algorithm, have recently been developed for computing SSSP approximately, the fastest known exact algorithms are still far away from matching the lower bound of $ \tilde \Omega (\sqrt{n} + D) $ rounds by Peleg and Rubinovich [SIAM Journal on Computing 2000], where $ n $ is the number of nodes in the network and $ D $ is its diameter. The state of the art is Elkin's randomized algorithm [STOC 2017] that performs $ \tilde O(n^{2/3} D^{1/3} + n^{5/6}) $ rounds. We significantly improve upon this upper bound with our two new randomized algorithms for polynomially bounded integer edge weights, the first performing $ \tilde O (\sqrt{n D}) $ rounds and the second performing $ \tilde O (\sqrt{n} D^{1/4} + n^{3/5} + D) $ rounds. Our bounds also compare favorably to the independent result by Ghaffari and Li [STOC 2018]. As side results, we obtain a $ (1 + \epsilon) $-approximation $ \tilde O ((\sqrt{n} D^{1/4} + D) / \epsilon) $-round algorithm for directed SSSP and a new work/depth trade-off for exact SSSP on directed graphs in the PRAM model.) <|cite_end|> <|cite_start|> (Reference: Parallel Reachability in Almost Linear Work and Square Root Depth: In this paper we provide a parallel algorithm that given any $n$-node $m$-edge directed graph and source vertex $s$ computes all vertices reachable from $s$ with $\tilde{O}(m)$ work and $n^{1/2 + o(1)}$ depth with high probability in $n$ . This algorithm also computes a set of $\tilde{O}(n)$ edges which when added to the graph preserves reachability and ensures that the diameter of the resulting graph is at most $n^{1/2 + o(1)}$. Our result improves upon the previous best known almost linear work reachability algorithm due to Fineman which had depth $\tilde{O}(n^{2/3})$. Further, we show how to leverage this algorithm to achieve improved distributed algorithms for single source reachability in the CONGEST model. In particular, we provide a distributed algorithm that given a $n$-node digraph of undirected hop-diameter $D$ solves the single source reachability problem with $\tilde{O}(n^{1/2} + n^{1/3 + o(1)} D^{2/3})$ rounds of the communication in the CONGEST model with high probability in $n$. Our algorithm is nearly optimal whenever $D = O(n^{1/4 - \epsilon})$ for any constant $\epsilon > 0$ and is the first nearly optimal algorithm for general graphs whose diameter is $\Omega(n^\delta)$ for any constant $\delta$.) <|cite_end|> <|cite_start|> (Reference: Nearly Work-Efficient Parallel Algorithm for Digraph Reachability: One of the simplest problems on directed graphs is that of identifying the set of vertices reachable from a designated source vertex. This problem can be solved easily sequentially by performing a graph search, but efficient parallel algorithms have eluded researchers for decades. For sparse high-diameter graphs in particular, there is no known work-efficient parallel algorithm with nontrivial parallelism. 
This amounts to one of the most fundamental open questions in parallel graph algorithms: Is there a parallel algorithm for digraph reachability with nearly linear work? This paper shows that the answer is yes. This paper presents a randomized parallel algorithm for digraph reachability and related problems with expected work $\tilde{O}(m)$ and span $\tilde{O}(n^{2/3})$, and hence parallelism $\tilde{\Omega}(n^{1/3})$, on any graph with $n$ vertices and $m$ arcs. This is the first parallel algorithm having both nearly linear work and strongly sublinear span. The algorithm can be extended to produce a directed spanning tree, determine whether the graph is acyclic, topologically sort the strongly connected components of the graph, or produce a directed ear decomposition of a strongly connected graph, all with similar work and span. The main technical contribution is an \emph{efficient} Monte Carlo algorithm that, through the addition of $\tilde{O}(n)$ shortcuts, reduces the diameter of the graph to $\tilde{O}(n^{2/3})$ with high probability. While both sequential and parallel algorithms are known with those combinatorial properties, even the sequential algorithms are not efficient. This paper presents a surprisingly simple sequential algorithm that achieves the stated diameter reduction and runs in $\tilde{O}(m)$ time. Parallelizing that algorithm yields the main result, but doing so involves overcoming several other challenges.) <|cite_end|> <|cite_start|> (Reference: Deterministic Decremental SSSP and Approximate Min-Cost Flow in Almost-Linear Time: In the decremental single-source shortest paths problem, the goal is to maintain distances from a fixed source $s$ to every vertex $v$ in an m-edge graph undergoing edge deletions. In this paper, we conclude a long line of research on this problem by showing a near-optimal deterministic data structure that maintains (1 + E) -approximate distance estimates and runs in m1+o(1)total update time. Our result, in particular, removes the oblivious adversary assumption required by the previous breakthrough result by Henzinger et al. [FOCS'14], which leads to our second result: the first almost-linear time algorithm for (1 - E) -approximate min-cost flow in undirected graphs where capacities and costs can be taken over edges and vertices. Previously, algorithms for max flow with vertex capacities, or min-cost flow with any capacities required super-linear time. Our result essentially completes the picture for approximate flow in undirected graphs. The key technique of the first result is a novel framework that allows us to treat low-diameter graphs like expanders. This allows us to harness expander properties while bypassing shortcomings of expander decomposition, which almost all previous expander-based algorithms needed to deal with. For the second result, we break the notorious flow-decomposition barrier from the multiplicative-weight-update framework using randomization.) <|cite_end|> <|cite_start|> (Reference: Near-Optimal Decremental SSSP in Dense Weighted Digraphs: In the decremental Single-Source Shortest Path problem (SSSP), we are given a weighted directed graph $G= (V, E, w)$ undergoing edge deletions and a source vertex $r\in V$; let $n=\vert V\vert, m=\vert E\vert$ and $W$ be the aspect ratio of the graph. The goal is to obtain a data structure that maintains shortest paths from $r$ to all vertices in $V$ and can answer distance queries in $O(1)$ time, as well as return the corresponding path $P$ in $O(\vert P\vert)$ time. 
This problem was first considered by Even and Shiloach [JACM'81], who provided an algorithm with total update time $O(mn)$ for unweighted undirected graphs; this was later extended to directed weighted graphs [FOCS'95, STOC'99]. There are conditional lower bounds showing that $O(mn)$ is in fact near-optimal [ESA'04, FOCS'14, STOC'15, STOC'20]. In a breakthrough result, Forster et al. showed that total update time $\min\{m^{7/6}n^{2/3+o(1)}, m^{3/4}n^{5/4+o(1)}\} \text{polylog}(W)= mn^{0.9+o(1)}\text{polylog} (W)$, is possible if the algorithm is allowed to return ($1 +\epsilon$)-approximate paths, instead of exact ones [STOC'14, ICALP'15]. No further progress was made until Probst Gutenberg and Wulff-Nilsen [SODA'20] provided a new approach for the problem, which yields total time $\tilde{O}(\min\{m^{2/3}n^{4/3}\log W, (mn)^{7/8}\log W\})= \tilde{O}(\min\{n^{8/3}\log W,\ mn^{3/4}\log W\})$. Our result builds on this recent approach, but overcomes its limitations by introducing a significantly more powerful abstraction, as well as a different core subroutine. Our new framework yields a decremental ($1+\epsilon$)-approximate SSSP data structure with total update time $\tilde{O}(n^{2} \log^{4}W/\epsilon)$. Our algorithm is thus near-optimal for dense graphs with polynomial edge-weights. Our framework can also be applied to sparse graphs to obtain total update time $\tilde{O}(mn^{2/3} \log^{3}W/\epsilon)$. Combined, these data structures dominate all previous results. Like all previous $o(mn)$ algorithms that can return a path (not just a distance estimate), our result is randomized and assumes an oblivious adversary. Our framework effectively allows us to reduce SSSP in general graphs to the same problem in directed acyclic graphs (DAGs). We believe that our framework has significant potential to influence future work on directed SSSP, both in the dynamic model and in others.) <|cite_end|> <|cite_start|> (Reference: Efficient Construction of Directed Hopsets and Parallel Approximate Shortest Paths: The approximate single-source shortest-path problem is as follows: given a graph with nonnegative edge weights and a designated source vertex $s$, return estimates of the distances from~$s$ to each other vertex such that the estimate falls between the true distance and $(1+\epsilon)$ times the distance. This paper provides the first nearly work-efficient parallel algorithm with sublinear span (also called depth) for the approximate shortest-path problem on \emph{directed} graphs. Specifically, for constant $\epsilon$ and polynomially-bounded edge weights, our algorithm has work $\tilde{O}(m)$ and span $n^{1/2+o(1)}$. Several algorithms were previously known for the case of \emph{undirected} graphs, but none of the techniques seem to translate to the directed setting. The main technical contribution is the first nearly linear-work algorithm for constructing hopsets on directed graphs. A $(\beta,\epsilon)$-hopset is a set of weighted edges (sometimes called shortcuts) which, when added to the graph, admit $\beta$-hop paths with weight no more than $(1+\epsilon)$ times the true shortest-path distances. There is a simple sequential algorithm that takes as input a directed graph and produces a linear-cardinality hopset with $\beta=O(\sqrt{n})$, but its running time is quite high---specifically $\tilde{O}(m\sqrt{n})$. Our algorithm is the first more efficient algorithm that produces a directed hopset with similar characteristics. 
Specifically, our sequential algorithm runs in $\tilde{O}(m)$ time and constructs a hopset with $\tilde{O}(n)$ edges and $\beta = n^{1/2+o(1)}$. A parallel version of the algorithm has work $\tilde{O}(m)$ and span $n^{1/2+o(1)}$.) <|cite_end|> <|cite_start|> (Reference: Brief Announcement: An Improved Distributed Approximate Single Source
Shortest Paths Algorithm: This brief announcement presents an algorithm for (1+ε) approximate single-source shortest paths for directed graphs with non-negative real edge weights in the CONGEST model that runs in Õ ((n^1/2 +D+n^2/5+o(1) D^2/5 )log W / ε^2) rounds, where W is the ratio between the largest and smallest non-zero edge weights.) <|cite_end|> <|cite_start|> (Reference: A Deterministic Parallel APSP Algorithm and its Applications: In this paper we show a deterministic parallel all-pairs shortest paths algorithm for real-weighted directed graphs. The algorithm has $\tilde{O}(nm+(n/d)^3)$ work and $\tilde{O}(d)$ depth for any depth parameter $d\in [1,n]$. To the best of our knowledge, such a trade-off has only been previously described for the real-weighted single-source shortest paths problem using randomization [Bringmann et al., ICALP'17]. Moreover, our result improves upon the parallelism of the state-of-the-art randomized parallel algorithm for computing transitive closure, which has $\tilde{O}(nm+n^3/d^2)$ work and $\tilde{O}(d)$ depth [Ullman and Yannakakis, SIAM J. Comput. '91]. Our APSP algorithm turns out to be a powerful tool for designing efficient planar graph algorithms in both parallel and sequential regimes. One notable ingredient of our parallel APSP algorithm is a simple deterministic $\tilde{O}(nm)$-work $\tilde{O}(d)$-depth procedure for computing $\tilde{O}(n/d)$-size hitting sets of shortest $d$-hop paths between all pairs of vertices of a real-weighted digraph. Such hitting sets have also been called $d$-hub sets. Hub sets have previously proved especially useful in designing parallel or dynamic shortest paths algorithms and are typically obtained via random sampling. Our procedure implies, for example, an $\tilde{O}(nm)$-time deterministic algorithm for finding a shortest negative cycle of a real-weighted digraph. Such a near-optimal bound for this problem has been so far only achieved using a randomized algorithm [Orlin et al., Discret. Appl. Math. '18].) <|cite_end|>. In all of these models, most state-of-the-art algorithms for computing shortest paths start by first computing a hopset $H$ and then computing shortest paths in $G \cup H$, taking advantage of the fact that these shortest paths are guaranteed to contain at most $\beta$ edges. Note that for this second step to be efficient, it is crucial that $\beta$ is small and that $H$ contain relatively few edges. This brings us back to the original question of what kind of trade-offs are possible between these parameters. This question is further subdivided into three subproblems, in increasing order of generalization: shortcut sets capture reachability but not distances, $(\beta,\eps)$ hopsets capture $(1+\eps)$-approximate distances, and $(\beta,0)$-hopsets capture exact distances.
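To make this usage concrete, the following sketch (illustrative Python with hypothetical function and variable names, not an optimized implementation) runs $\beta$ synchronous Bellman-Ford rounds on $G \cup H$. Assuming, as is standard, that every hopset edge $(u,v)$ has weight at least $\dist(u,v)$, and since a $(\beta,\eps)$-hopset guarantees that some path with at most $\beta$ edges has weight at most $(1+\eps)\dist(u,v)$, the returned estimates lie between the true distances in $G$ and $(1+\eps)$ times the true distances.
\begin{verbatim}
import math

def hop_limited_distances(graph_edges, hopset_edges, n, source, beta):
    """Run beta synchronous Bellman-Ford rounds on G union H.

    graph_edges / hopset_edges: iterables of (u, v, w) with w >= 0,
    vertices labeled 0..n-1.  After round k, dist[v] equals the minimum
    weight over paths from source to v that use at most k edges.
    """
    edges = list(graph_edges) + list(hopset_edges)
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(beta):
        new_dist = dist[:]            # synchronous update keeps hop counts exact
        changed = False
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
                changed = True
        dist = new_dist
        if not changed:               # no estimate improved; safe to stop early
            break
    return dist
\end{verbatim}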
\subsection{Previous Work}
A folklore randomized construction for hopsets, attributed to Ullman and Yannakakis <|cite_start|> (Reference: High-Probability Parallel Transitive-Closure Algorithms: There is a straightforward algorithm for computing the transitive-closure of an n-node graph in $O(\log ^2 n)$ time on an EREW-PRAM, using $n^3 / \log n$ processors, or indeed with $M(n) / \log n$ processors if serial matrix multiplication in $M(n)$ time can be done. This algorithm is within a log factor of optimal in work (processor-time product), for solving the all-pairs transitive-closure problem for dense graphs. However, this algorithm is far from optimal when either (a) the graph is sparse, or (b) we want to solve the single-source transitive-closure problem. It would be ideal to have an $\mathcal{NC}$ algorithm for transitive-closure that took about e processors for the single-source problem on a graph with n nodes and $e \geqq n$ arcs, or about $en$ processors for the all-pairs problem on the same graph. While an algorithm that good cannot be offered, algorithms with the following performance can be offered. (1) For single-source, $\tilde{O}(n^\varepsilon )$ time with $\tilde O(en^{1 - 2\varepsil...) <|cite_end|>and refined by Berman et al. <|cite_start|> (Reference: Finding Sparser Directed Spanners: A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, a subgraph $H = (V,E_H)$ is a $k$-spanner of a graph $G=(V,E)$ if for every pair of vertices $u,v \in V$, the shortest path distance $dist_H(u,v)$ from $u$ to $v$ in $H$ is at most $k.dist_G(u,v)$. We focus on spanners of directed graphs and a related notion of transitive-closure spanners. The latter captures the idea that a spanner should have a small diameter but preserve the connectivity of the original graph. We study the computational problem of finding the sparsest $k$-spanner (resp., $k$-TC-spanner) of a given directed graph, which we refer to as DIRECTED $k$-SPANNER (resp., $k$-TC-SPANNER). We improve all known approximation algorithms for these problems for $k\geq 3$. (For $k=2$, the current ratios are tight, assuming P$\neq$NP.) Along the way, we prove several structural results about the size of the sparsest spanners of directed graphs.) <|cite_end|>, is to randomly sample a set of vertices $S \subseteq V$, and then add an edge of weight $\dist(u,v)$ for all pairs $u,v \in S$ such that $u$ reaches $v$. This yields a $(\beta, 0)$ hopset with $\otil(n^2/\beta^2)$ edges. In particular, there is a $(\sqrt{n},0)$-hopset with $\otil(n)$ edges. Surprisingly, this simple construction still achieves the best known trade-off for $(\beta, 0)$ hopsets. Existing work on the problem thus focuses on the simpler problems of shortcut sets and $(\beta, \eps)$-hopsets.
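For concreteness, the sampling construction just described can be sketched as follows (illustrative Python with hypothetical names; the exact distances are computed here by a plain Dijkstra from every sampled vertex purely for exposition). Sampling each vertex independently with probability on the order of $(\log n)/\beta$ gives $|S| = \otil(n/\beta)$ sampled vertices, and hence at most $|S|^2 = \otil(n^2/\beta^2)$ added edges.
\begin{verbatim}
import heapq
import math
import random

def folklore_hopset(adj, n, beta, c=3.0):
    """Sampling-based (beta, 0)-hopset sketch.

    adj: adjacency dict {u: [(v, w), ...]} with w >= 0, vertices 0..n-1.
    Returns a list of (u, v, dist_G(u, v)) edges between sampled vertices.
    """
    rate = min(1.0, c * math.log(max(n, 2)) / beta)
    sampled = [v for v in range(n) if random.random() < rate]

    def dijkstra(src):
        # Exact single-source distances from src in G.
        dist = {src: 0}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, math.inf):
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, math.inf):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    hopset = []
    for u in sampled:
        dist_u = dijkstra(u)
        for v in sampled:
            if v != u and v in dist_u:            # u reaches v in G
                hopset.append((u, v, dist_u[v]))  # weight = dist_G(u, v)
    return hopset
\end{verbatim}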
In \emph{undirected} graphs both of these problems admit sparse hopsets with small $\beta$. Shortcut sets are trivial in undirected graphs, as one can simply add a star to each connected component. For $(\beta,\eps)$ hopsets, a series of different constructions <|cite_start|> (Reference: A Randomized Parallel Algorithm for Single-Source Shortest Paths: We give a randomized parallel algorithm for computing single-source shortest paths in weighted digraphs. We show that the exact shortest-path problem can be efficiently reduced to solving a series of approximate shortest-path subproblems. Our algorithm for the approximate shortest-path problem is based on the technique used by Ullman and Yannakakis in a parallel algorithm for breadth-first search.) <|cite_end|> <|cite_start|> (Reference: Time-Work Tradeoffs of the Single-Source Shortest Paths Problem: We give parallel algorithms that solve the single-source shortest paths problem on a weighted, undirected graph withnvertices andmedges inO(tlgn) time andO((n3/t2)lgnlg(n/t)+mlgn) work, or inO(tlgn) time andO((n3/t3+mn/t)lgn) work for anytin the range lgn?t?n. These algorithms run on the EREW PRAM model. They are the first strongly polynomial exact algorithms that run ino(n) time while doingo(n3) work.) <|cite_end|> <|cite_start|> (Reference: Polylog-time and near-linear work approximation scheme for undirected
shortest paths: Shortest paths computations constitute one of the most fundamental network problems. Nonetheless, known parallel shortest-paths algorithms are generally inefficient: they perform significantly more work (product of time and processors) than their sequential counterparts. This gap, known in the literature as the “transitive closure bottleneck,” poses a long-standing open problem. Our main result is an $O(mn^{\epsilon_0} + s(m + n^{1+\epsilon_0}))$ work polylog-time randomized algorithm that computes paths within $(1 + O(1/\mathrm{polylog}\,n))$ of shortest from $s$ source nodes to all other nodes in weighted undirected networks with $n$ nodes and $m$ edges (for any fixed $\epsilon_0>0$). This work bound nearly matches the $\tilde{O}(sm)$ sequential time. In contrast, previous polylog-time algorithms required nearly $\min\{\tilde{O}(n^3), \tilde{O}(m^2)\}$ work (even when $s=1$), and previous near-linear work algorithms required near-$O(n)$ time. We also present faster sequential algorithms that provide good approximate distances only between “distant” vertices: We obtain an $O((m+sn)n^{\epsilon_0})$ time algorithm that computes paths of weight $(1+O(1/\mathrm{polylog}\,n))\,\mathrm{dist} + O(w_{\max}\,\mathrm{polylog}\,n)$, where dist is the corresponding distance and $w_{\max}$ is the maximum edge weight. Our chief instrument, which is of independent interest, are efficient constructions of sparse \textit{hop sets}. A $(d,\epsilon)$-hop set of a network $G=(V,E)$ is a set $E^*$ of new weighted edges such that minimum-weight $d$-edge paths in $(V,E\cup E^*)$ have weight within $(1+\epsilon)$ of the respective distances in $G$. We construct hop sets of size $O(n^{1+\epsilon_0})$ where $\epsilon=O(1/\mathrm{polylog}\,n)$ and $d=O(\mathrm{polylog}\,n)$.) <|cite_end|>culminated in the papers of Huang and Pettie <|cite_start|> (Reference: Thorup-Zwick Emulators are Universally Optimal Hopsets: A $(\beta,\epsilon)$-$\textit{hopset}$ is, informally, a weighted edge set that, when added to a graph, allows one to get from point $a$ to point $b$ using a path with at most $\beta$ edges ("hops") and length $(1+\epsilon)\mathrm{dist}(a,b)$.
In this paper we observe that Thorup and Zwick's $\textit{sublinear additive}$ emulators are also actually $(O(k/\epsilon)^k,\epsilon)$-hopsets for every $\epsilon>0$, and that with a small change to the Thorup-Zwick construction, the size of the hopset can be made $O(n^{1+\frac{1}{2^{k+1}-1}})$. As corollaries, we also shave "$k$" factors off the size of Thorup and Zwick's sublinear additive emulators and the sparsest known $(1+\epsilon,O(k/\epsilon)^{k-1})$-spanners, due to Abboud, Bodwin, and Pettie.) <|cite_end|> and Elkin and Neiman <|cite_start|> (Reference: Linear-Size Hopsets with Small Hopbound, and Constant-Hopbound Hopsets in RNC: Hopsets are a fundamental graph-theoretic and graph-algorithmic construct, and they are widely used for distance-related problems in a variety of computational settings. Currently existing constructions of hopsets produce hopsets either with $\Omega(n \log n)$ edges, or with a hopbound $n^{\Omega(1)}$. In this paper we devise a construction of linear-size hopsets with hopbound (ignoring the dependence on $\epsilon$) $(\log \log n)^{\log \log n + O(1)}$. This improves the previous hopbound for linear-size hopsets almost exponentially. We also devise efficient implementations of our construction in PRAM and distributed settings. The only existing PRAM algorithm [19] for computing hopsets with a constant (i.e., independent of $n$) hopbound requires $n^{\Omega(1)}$ time. We devise a PRAM algorithm with polylogarithmic running time for computing hopsets with a constant hopbound, i.e., our running time is exponentially better than the previous one. Moreover, these hopsets are also significantly sparser than their counterparts from [19]. We apply these hopsets to achieve the following online variant of shortest paths in the PRAM model: preprocess a given weighted graph within polylogarithmic time, and then given any query vertex $v$, report all approximate shortest paths from $v$ in constant time. All previous constructions of hopsets require either polylogarithmic time per query or polynomial preprocessing time.) <|cite_end|>, which showed an $(n^{o(1)}, \eps)$-hopset with $O(n)$ edges and an $(O(1),\eps)$-hopset with $n^{1+o(1)}$ edges, where $\eps$ is any fixed constant.
Note that these are clearly optimal up to $n^{o(1)}$ factors, and moreover a lower bound of Abboud, Bodwin, and Pettie <|cite_start|> (Reference: A Hierarchy of Lower Bounds for Sublinear Additive Spanners: Spanners, emulators, and approximate distance oracles can be viewed as lossy compression schemes that represent an unweighted graph metric in small space, say $\tilde{O}(n^{1+\delta})$ bits. There is an inherent tradeoff between the sparsity parameter $\delta$ and the stretch function $f$ of the compression scheme, but the qualitative nature of this tradeoff has remained a persistent open problem. In this paper we show that the recent additive spanner lower bound of Abboud and Bodwin is just the first step in a hierarchy of lower bounds that fully characterize the asymptotic behavior of the optimal stretch function $f$ as a function of $\delta \in (0,1/3)$. Specifically, for any integer $k\ge 2$, any compression scheme with size $O(n^{1+\frac{1}{2^k-1} - \epsilon})$ has a sublinear additive stretch function $f$: $$f(d) = d + \Omega(d^{1-\frac{1}{k}}).$$ This lower bound matches Thorup and Zwick's (2006) construction of sublinear additive emulators. It also shows that Elkin and Peleg's $(1+\epsilon,\beta)$-spanners have an essentially optimal tradeoff between $\delta,\epsilon,$ and $\beta$, and that the sublinear additive spanners of Pettie (2009) and Chechik (2013) are not too far from optimal. To complement these lower bounds we present a new construction of $(1+\epsilon, O(k/\epsilon)^{k-1})$-spanners with size $O((k/\epsilon)^{h_k} kn^{1+\frac{1}{2^{k+1}-1}})$, where $h_k < 3/4$. This size bound improves on the spanners of Elkin and Peleg (2004), Thorup and Zwick (2006), and Pettie (2009). According to our lower bounds neither the size nor stretch function can be substantially improved.) <|cite_end|>showed that a $n^{o(1)}$ factor is necessary. See also <|cite_start|> (Reference: A Unified Framework for Hopsets and Spanners: Given an undirected graph $G=(V,E)$, an {\em $(\alpha,\beta)$-spanner} $H=(V,E')$ is a subgraph that approximately preserves distances; for every $u,v\in V$, $d_H(u,v)\le \alpha\cdot d_G(u,v)+\beta$. An $(\alpha,\beta)$-hopset is a graph $H=(V,E")$, so that adding its edges to $G$ guarantees every pair has an $\alpha$-approximate shortest path that has at most $\beta$ edges (hops), that is, $d_G(u,v)\le d_{G\cup H}^{(\beta)}(u,v)\le \alpha\cdot d_G(u,v)$. Given the usefulness of spanners and hopsets for fundamental algorithmic tasks, several different algorithms and techniques were developed for their construction, for various regimes of the stretch parameter $\alpha$. In this work we develop a single algorithm that can attain all state-of-the-art spanners and hopsets for general graphs, by choosing the appropriate input parameters. In fact, in some cases it also improves upon the previous best results. We also show a lower bound on our algorithm. In \cite{BP20}, given a parameter $k$, a $(O(k^{\epsilon}),O(k^{1-\epsilon}))$-hopset of size $\tilde{O}(n^{1+1/k})$ was shown for any $n$-vertex graph and parameter $0<\epsilon<1$, and they asked whether this result is best possible. We resolve this open problem, showing that any $(\alpha,\beta)$-hopset of size $O(n^{1+1/k})$ must have $\alpha\cdot \beta\ge\Omega(k)$.) <|cite_end|>for follow-up work that refined the $n^{o(1)}$ factor.
The focus of our paper is on \textit{directed} graphs, where such hopset tradeoffs are provably impossible. Improving upon a previous lower bound of Hesse <|cite_start|> (Reference: Directed graphs requiring large numbers of shortcuts: A conjecture by Thorup is that the diameter of a directed graph with $n$ vertices and $m$ edges can be reduced to $(\log n)^{O(1)}$ by adding $O(m)$ edges [3]. We give a counterexample to this conjecture. We construct a graph $G$ requiring the addition of $\Omega(mn^{1/17})$ edges to reduce its diameter below $\Theta(n^{1/17})$. By extending the construction to higher dimensions, we construct graphs with $n^{1+\epsilon}$ edges that require the addition of $\Omega(n^{2-\epsilon})$ edges to reduce their diameter. These constructions yield time-space tradeoffs in lower bounds for transitive closure queries in a certain computational model.) <|cite_end|>, Huang and Pettie <|cite_start|> (Reference: Lower Bounds on Sparse Spanners, Emulators, and Diameter-reducing shortcuts: We prove better lower bounds on additive spanners and emulators, which are lossy compression schemes for undirected graphs, as well as lower bounds on shortcut sets, which reduce the diameter of directed graphs. We show that any $O(n)$-size shortcut set cannot bring the diameter below $\Omega(n^{1/6})$, and that any $O(m)$-size shortcut set cannot bring it below $\Omega(n^{1/11})$. These improve Hesse's [Hesse03] lower bound of $\Omega(n^{1/17})$. By combining these constructions with Abboud and Bodwin's [AbboudB17] edge-splitting technique, we get additive stretch lower bounds of $+\Omega(n^{1/11})$ for $O(n)$-size spanners and $+\Omega(n^{1/18})$ for $O(n)$-size emulators. These improve Abboud and Bodwin's $+\Omega(n^{1/22})$ lower bounds.) <|cite_end|>showed that there exist graphs such that any $O(n)$-size shortcut set cannot reduce $\beta$ to below $\Omega(n^{1/6})$; they also show that any $O(m)$-size shortcut set cannot reduce $\beta$ to below $\Omega(n^{1/11})$, and follow-up work by Lu, Williams, Wein, and Xu improves this to $\Omega(n^{1/8})$ <|cite_start|> (Reference: Better Lower Bounds for Shortcut Sets and Additive Spanners via an Improved Alternation Product: We obtain improved lower bounds for additive spanners, additive emulators, and diameter-reducing shortcut sets. Spanners and emulators are sparse graphs that approximately preserve the distances of a given graph. A shortcut set is a set of edges that when added to a directed graph, decreases its diameter. The previous best known lower bounds for these three structures are given by Huang and Pettie [SWAT 2018]. For $O(n)$-sized spanners, we improve the lower bound on the additive stretch from $\Omega(n^{1/11})$ to $\Omega(n^{2/21})$. For $O(n)$-sized emulators, we improve the lower bound on the additive stretch from $\Omega(n^{1/18})$ to $\Omega(n^{1/16})$. For $O(m)$-sized shortcut sets, we improve the lower bound on the graph diameter from $\Omega(n^{1/11})$ to $\Omega(n^{1/8})$. Our key technical contribution, which is the basis of all of our bounds, is an improvement of a graph product known as an alternation product.) <|cite_end|>. In particular, these lower bounds show a polynomial separation between hopsets in directed and undirected graphs. Moreover, all of these lower bounds apply even to the simpler problem of shortcut sets.
Until extremely recently, the best known upper bound in directed graphs was the folklore algorithm mentioned above. This left a large gap between the best-known upper and lower bounds: focusing on the standard case of a shortcut set $H$ with $O(n)$ edges, the best known upper bound achieved $\beta = \otil(n^{1/2})$, while the best known lower bound was $\beta = \Omega(n^{1/6})$. Very recently, in a major breakthrough, Kogan and Parter presented the first hopset to go beyond the folklore construction <|cite_start|> (Reference: New Diameter-Reducing Shortcuts and Directed Hopsets: Breaking the √n Barrier: For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$ guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open. We provide the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. This narrows the gap w.r.t the current diameter lower bound of $\Omega(n^{1/6})$ by [Huang and Pettie, SWAT'18]. Moreover, we show that a diameter of $\widetilde{O}(n^{1/2})$ can in fact be achieved with a \emph{sublinear} number of $O(n^{3/4})$ shortcut edges. Formally, letting $S(n,D)$ be the bound on the size of the shortcut set required in order to reduce the diameter of any $n$-vertex digraph to at most $D$, our algorithms yield: \[ S(n,D)=\begin{cases} \widetilde{O}(n^2/D^3),&\text{for~} D\leq n^{1/3},\\ \widetilde{O}((n/D)^{3/2}),&\text{for~} D>n^{1/3}~. \end{cases} \] We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.) <|cite_end|>. They presented a smooth trade-off between $\beta$ and the size of the hopset, but again focusing on the standard case of a set $H$ with $O(n)$ edges, they showed that every graph contains a $\otil(n^{1/3})$-shortcut set and an $(\otil(n^{2/5}),\eps)$ hopset for any fixed $\eps$.
The result of Kogan and Parter shows the possibility of going beyond $\beta = \sqrt{n}$ for both shortcut sets and $(1+\eps)$-approximate hopsets, but it also leaves a polynomial gap between these two settings (i.e., $n^{1/3}$ vs.\ $n^{2/5}$). Our contribution in this paper is to close this gap.
\subsection{Our Contribution}
\begin{theorem}\label{thm:main}
For any directed graph with integer edge weights in $[1,W]$, given $\eps \in (0,1)$ and $\beta\geq 20\log n$, there is a $(\beta,\eps)$-hopset $H$ of size
\[|H|=
\begin{dcases}
O\Big(\frac{n^2 \cdot\log^7 n \log^2(nW)}{\eps^2 \beta^3}\Big) & \text{for } \beta\leq n^{1/3},\\[1em]
O\Big(\frac{n^{3/2} \cdot\log^7n \log^2(nW)}{\eps^2 \beta^{3/2}}\Big) & \text{for } \beta> n^{1/3}.
\end{dcases}\]
\end{theorem}
Our result improves polynomially upon the previous-best tradeoff for $(\beta,\eps)$ hopsets of Kogan and Parter (see Theorem 1.5 in <|cite_start|> (Reference: New Diameter-Reducing Shortcuts and Directed Hopsets: Breaking the √n Barrier: For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$ guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open. We provide the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. This narrows the gap w.r.t the current diameter lower bound of $\Omega(n^{1/6})$ by [Huang and Pettie, SWAT'18]. Moreover, we show that a diameter of $\widetilde{O}(n^{1/2})$ can in fact be achieved with a \emph{sublinear} number of $O(n^{3/4})$ shortcut edges. Formally, letting $S(n,D)$ be the bound on the size of the shortcut set required in order to reduce the diameter of any $n$-vertex digraph to at most $D$, our algorithms yield: \[ S(n,D)=\begin{cases} \widetilde{O}(n^2/D^3),&\text{for~} D\leq n^{1/3},\\ \widetilde{O}((n/D)^{3/2}),&\text{for~} D>n^{1/3}~. \end{cases} \] We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.) <|cite_end|>). Most notably, for a hopset $H$ with $O(n)$ edges, and any fixed $\eps$, Kogan and Parter <|cite_start|> (Reference: New Diameter-Reducing Shortcuts and Directed Hopsets: Breaking the √n Barrier: For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$ guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open. We provide the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. This narrows the gap w.r.t the current diameter lower bound of $\Omega(n^{1/6})$ by [Huang and Pettie, SWAT'18]. Moreover, we show that a diameter of $\widetilde{O}(n^{1/2})$ can in fact be achieved with a \emph{sublinear} number of $O(n^{3/4})$ shortcut edges. 
Formally, letting $S(n,D)$ be the bound on the size of the shortcut set required in order to reduce the diameter of any $n$-vertex digraph to at most $D$, our algorithms yield: \[ S(n,D)=\begin{cases} \widetilde{O}(n^2/D^3),&\text{for~} D\leq n^{1/3},\\ \widetilde{O}((n/D)^{3/2}),&\text{for~} D>n^{1/3}~. \end{cases} \] We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.) <|cite_end|>construct a $(\otil(n^{2/5}), \eps)$ hopset, whereas we construct a $(\otil(n^{1/3}),\eps)$-hopset. Moreover, up to logarithmic factors, our tradeoff exactly matches that of Kogan and Parter for the simpler problem of $\beta$-shortcut sets. We thus close the gap between $(1+\eps)$-approximate hopsets and shortcut sets.
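For example, instantiating the first branch of Theorem~\ref{thm:main} at $\beta = n^{1/3}$ gives
\[
|H| \;=\; O\!\Big(\frac{n^{2}\cdot \log^{7} n \,\log^{2}(nW)}{\eps^{2}\,(n^{1/3})^{3}}\Big)
\;=\; O\!\Big(\frac{n \,\log^{7} n \,\log^{2}(nW)}{\eps^{2}}\Big),
\]
i.e., $\otil(n)$ edges for any fixed $\eps$ and polynomially bounded $W$, whereas the folklore construction with the same hopbound guarantees only $\otil(n^{2}/\beta^{2}) = \otil(n^{4/3})$ edges.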
\paragraph{Construction Time}
The focus of this paper is on existential claims about hopset trade-offs, so we make no attempt to optimize the time required to construct the hopset. Nonetheless it is easy to check that the construction in this paper can be executed in polynomial time. On the other hand, a clear bottleneck in our construction is that it requires computing the transitive closure, so even if the algorithm is refined, our approach necessarily requires a runtime of $\Omega(mn)$.
The hopset of Kogan and Parter <|cite_start|> (Reference: New Diameter-Reducing Shortcuts and Directed Hopsets: Breaking the √n Barrier: For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$ guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open. We provide the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. This narrows the gap w.r.t the current diameter lower bound of $\Omega(n^{1/6})$ by [Huang and Pettie, SWAT'18]. Moreover, we show that a diameter of $\widetilde{O}(n^{1/2})$ can in fact be achieved with a \emph{sublinear} number of $O(n^{3/4})$ shortcut edges. Formally, letting $S(n,D)$ be the bound on the size of the shortcut set required in order to reduce the diameter of any $n$-vertex digraph to at most $D$, our algorithms yield: \[ S(n,D)=\begin{cases} \widetilde{O}(n^2/D^3),&\text{for~} D\leq n^{1/3},\\ \widetilde{O}((n/D)^{3/2}),&\text{for~} D>n^{1/3}~. \end{cases} \] We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.) <|cite_end|>also requires $\Omega(mn)$ construction time, but they were later able to achieve significantly faster construction time of $\tilde{O}(mn^{1/3} + n^{1.5})$ for their linear-size $\otil(n^{1/3})$-\emph{shortcut} set <|cite_start|> (Reference: Beating Matrix Multiplication for $n^{1/3}$-Directed Shortcuts: For an $n$-vertex digraph $G=(V,E)$ and integer parameter $D$, a $D$-shortcut is a small set $H$ of directed edges taken from the transitive closure of $G$, satisfying that the diameter of $G \cup H$ is at most $D$. A recent work [Kogan and Parter, SODA 2022] presented shortcutting algorithms with improved diameter vs. size tradeoffs. Most notably, obtaining linear size $D$-shortcuts for $D = \widetilde{O}(n^{1/3})$, breaking the $\sqrt{n}$-diameter barrier. These algorithms run in $O(n^{\omega})$ time, as they are based on the computation of the transitive closure of the graph. We present a new algorithmic approach for $D$-shortcuts, that matches the bounds of [Kogan and Parter, SODA 2022], while running in $o(n^{\omega})$ time for every $D \geq n^{1/3}$. Our approach is based on a reduction to the min-cost max-flow problem, which can be solved in $\widetilde{O}(m + n^{3/2})$ time due to the recent breakthrough result of [Brand et al., STOC 2021]. We also demonstrate the applicability of our techniques to computing the minimal chain covers and dipath decompositions for directed acyclic graphs. For an $n$-vertex $m$-edge digraph $G=(V,E)$, our key results are: bounds by [Caceres et al., SODA 2022] and [Grandoni et al., SODA 2021].
Our results also provide a new connection between shortcutting sets and the seemingly less related problems of minimum chain covers and the maximum antichains in DAGs.) <|cite_end|>.
Achieving similarly fast construction times for the more general problem of hopsets remains an intriguing open problem.
\paragraph{Application to Distance Preservers}
In addition to their critical role in many shortest path algorithms (see above), one of the main motivations for studying hopsets is their close connection to other fundamental problems in extremal graph theory, such as spanners, emulators, and distance preservers. At first glance these problems appear somewhat different because hopsets are trying to \textit{augment} the graph with new edges, while spanners and distance preservers are trying to \textit{sparsify} the graph by removing edges; but in very intriguing recent work, Kogan and Parter prove a strong connection between these problems by showing a black-box conversion from construction of shortcut sets / hopsets to construction of reachability/distance preservers (Theorem 1.1 in <|cite_start|> (Reference: Having Hope in Hops: New Spanners, Preservers and Lower Bounds for Hopsets: Hopsets and spanners are fundamental graph structures, playing a key role in shortest path computation, distributed communication, and more. A (near-exact) hopset for a given graph $G$ is a (small) subset of weighted edges $H$ that when added to the graph $G$ reduces the number of hops (edges) of near-exact shortest paths. Spanners and distance preservers, on the other hand, ask for removing many edges from the graph while approximately preserving shortest path distances. We provide a general reduction scheme from graph hopsets to the known metric compression schemes of spanners, emulators and distance preservers. Consequently, we get new and improved upper bound constructions for the latter, as well as, new lower bound results for hopsets. Our work makes a significant progress on the tantalizing open problem concerning the formal connection between hopsets and spanners, e.g., as posed by Elkin and Neiman [Bull. EATCS 2020].) <|cite_end|>).
In particular, using this new black box from <|cite_start|> (Reference: Having Hope in Hops: New Spanners, Preservers and Lower Bounds for Hopsets: Hopsets and spanners are fundamental graph structures, playing a key role in shortest path computation, distributed communication, and more. A (near-exact) hopset for a given graph $G$ is a (small) subset of weighted edges $H$ that when added to the graph $G$ reduces the number of hops (edges) of near-exact shortest paths. Spanners and distance preservers, on the other hand, ask for removing many edges from the graph while approximately preserving shortest path distances. We provide a general reduction scheme from graph hopsets to the known metric compression schemes of spanners, emulators and distance preservers. Consequently, we get new and improved upper bound constructions for the latter, as well as, new lower bound results for hopsets. Our work makes a significant progress on the tantalizing open problem concerning the formal connection between hopsets and spanners, e.g., as posed by Elkin and Neiman [Bull. EATCS 2020].) <|cite_end|>, the $(\beta,\eps)$-hopset of Kogan and Parter from <|cite_start|> (Reference: New Diameter-Reducing Shortcuts and Directed Hopsets: Breaking the √n Barrier: For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$ guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has been left open. We provide the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. This narrows the gap w.r.t the current diameter lower bound of $\Omega(n^{1/6})$ by [Huang and Pettie, SWAT'18]. Moreover, we show that a diameter of $\widetilde{O}(n^{1/2})$ can in fact be achieved with a \emph{sublinear} number of $O(n^{3/4})$ shortcut edges. Formally, letting $S(n,D)$ be the bound on the size of the shortcut set required in order to reduce the diameter of any $n$-vertex digraph to at most $D$, our algorithms yield: \[ S(n,D)=\begin{cases} \widetilde{O}(n^2/D^3),&\text{for~} D\leq n^{1/3},\\ \widetilde{O}((n/D)^{3/2}),&\text{for~} D>n^{1/3}~. \end{cases} \] We also extend our algorithms to provide improved $(\beta,\epsilon)$ hopsets for $n$-vertex weighted directed graphs.) <|cite_end|>implies a $(1+\eps)$-approximate distance preserver with $\otil(np^{2/5} + n^{2/3}p^{2/3})$ edges, where $p = |P|$ is the number of pairs. Applying the same black box from <|cite_start|> (Reference: Having Hope in Hops: New Spanners, Preservers and Lower Bounds for Hopsets: Hopsets and spanners are fundamental graph structures, playing a key role in shortest path computation, distributed communication, and more. 
A (near-exact) hopset for a given graph $G$ is a (small) subset of weighted edges $H$ that when added to the graph $G$ reduces the number of hops (edges) of near-exact shortest paths. Spanners and distance preservers, on the other hand, ask for removing many edges from the graph while approximately preserving shortest path distances. We provide a general reduction scheme from graph hopsets to the known metric compression schemes of spanners, emulators and distance preservers. Consequently, we get new and improved upper bound constructions for the latter, as well as, new lower bound results for hopsets. Our work makes a significant progress on the tantalizing open problem concerning the formal connection between hopsets and spanners, e.g., as posed by Elkin and Neiman [Bull. EATCS 2020].) <|cite_end|>, our improved $(\beta,\eps)$-hopset immediately implies a $(1+\eps)$-approximate distance preserver with $\otil(np^{1/3} + n^{2/3}p^{2/3})$ edges. For $p \geq n$, our bound matches (up to log factors) the state-of-the-art sparsity of $O(n + n^{2/3}p^{2/3})$ of Abboud and Bodwin for the simpler problem of reachability preservers <|cite_start|> (Reference: A Hierarchy of Lower Bounds for Sublinear Additive Spanners: Spanners, emulators, and approximate distance oracles can be viewed as lossy compression schemes that represent an unweighted graph metric in small space, say $\tilde{O}(n^{1+\delta})$ bits. There is an inherent tradeoff between the sparsity parameter $\delta$ and the stretch function $f$ of the compression scheme, but the qualitative nature of this tradeoff has remained a persistent open problem. In this paper we show that the recent additive spanner lower bound of Abboud and Bodwin is just the first step in a hierarchy of lower bounds that fully characterize the asymptotic behavior of the optimal stretch function $f$ as a function of $\delta \in (0,1/3)$. Specifically, for any integer $k\ge 2$, any compression scheme with size $O(n^{1+\frac{1}{2^k-1} - \epsilon})$ has a sublinear additive stretch function $f$: $$f(d) = d + \Omega(d^{1-\frac{1}{k}}).$$ This lower bound matches Thorup and Zwick's (2006) construction of sublinear additive emulators. It also shows that Elkin and Peleg's $(1+\epsilon,\beta)$-spanners have an essentially optimal tradeoff between $\delta,\epsilon,$ and $\beta$, and that the sublinear additive spanners of Pettie (2009) and Chechik (2013) are not too far from optimal. To complement these lower bounds we present a new construction of $(1+\epsilon, O(k/\epsilon)^{k-1})$-spanners with size $O((k/\epsilon)^{h_k} kn^{1+\frac{1}{2^{k+1}-1}})$, where $h_k < 3/4$. This size bound improves on the spanners of Elkin and Peleg (2004), Thorup and Zwick (2006), and Pettie (2009). According to our lower bounds neither the size nor stretch function can be substantially improved.) <|cite_end|> | [
"<|reference_start|> Sublinear-Time Decremental Algorithms for Single-Source Reachability and Shortest Paths on Directed Graphs: We consider dynamic algorithms for maintaining Single-Source Reachability (SSR) and approximate Single-Source Shortest Paths (SSSP) on $n$-node $m$-edge directed graphs under edge deletions (decremental algorithms). The previous fastest algorithm for SSR and SSSP goes back three decades to Even and Shiloach [JACM 1981]; it has $ O(1) $ query time and $ O (mn) $ total update time (i.e., linear amortized update time if all edges are deleted). This algorithm serves as a building block for several other dynamic algorithms. The question whether its total update time can be improved is a major, long standing, open problem. In this paper, we answer this question affirmatively. We obtain a randomized algorithm with an expected total update time of $ O(\\min (m^{7/6} n^{2/3 + o(1)}, m^{3/4} n^{5/4 + o(1)}) ) = O (m n^{9/10 + o(1)}) $ for SSR and $(1+\\epsilon)$-approximate SSSP if the edge weights are integers from $ 1 $ to $ W \\leq 2^{\\log^c{n}} $ and $ \\epsilon \\geq 1 / \\log^c{n} $ for some constant $ c $. We also extend our algorithm to achieve roughly the same running time for Strongly Connected Components (SCC), improving the algorithm of Roditty and Zwick [FOCS 2002]. Our algorithm is most efficient for sparse and dense graphs. When $ m = \\Theta(n) $ its running time is $ O (n^{1 + 5/6 + o(1)}) $ and when $ m = \\Theta(n^2) $ its running time is $ O (n^{2 + 3/4 + o(1)}) $. For SSR we also obtain an algorithm that is faster for dense graphs and has a total update time of $ O ( m^{2/3} n^{4/3 + o(1)} + m^{3/7} n^{12/7 + o(1)}) $ which is $ O (n^{2 + 2/3}) $ when $ m = \\Theta(n^2) $. All our algorithms have constant query time in the worst case and are correct with high probability against an oblivious adversary. <|reference_end|>",
"<|reference_start|> Parallel Reachability in Almost Linear Work and Square Root Depth: In this paper we provide a parallel algorithm that given any $n$-node $m$-edge directed graph and source vertex $s$ computes all vertices reachable from $s$ with $\\tilde{O}(m)$ work and $n^{1/2 + o(1)}$ depth with high probability in $n$ . This algorithm also computes a set of $\\tilde{O}(n)$ edges which when added to the graph preserves reachability and ensures that the diameter of the resulting graph is at most $n^{1/2 + o(1)}$. Our result improves upon the previous best known almost linear work reachability algorithm due to Fineman which had depth $\\tilde{O}(n^{2/3})$. Further, we show how to leverage this algorithm to achieve improved distributed algorithms for single source reachability in the CONGEST model. In particular, we provide a distributed algorithm that given a $n$-node digraph of undirected hop-diameter $D$ solves the single source reachability problem with $\\tilde{O}(n^{1/2} + n^{1/3 + o(1)} D^{2/3})$ rounds of the communication in the CONGEST model with high probability in $n$. Our algorithm is nearly optimal whenever $D = O(n^{1/4 - \\epsilon})$ for any constant $\\epsilon > 0$ and is the first nearly optimal algorithm for general graphs whose diameter is $\\Omega(n^\\delta)$ for any constant $\\delta$. <|reference_end|>",
"<|reference_start|> Better Lower Bounds for Shortcut Sets and Additive Spanners via an Improved Alternation Product: We obtain improved lower bounds for additive spanners, additive emulators, and diameter-reducing shortcut sets. Spanners and emulators are sparse graphs that approximately preserve the distances of a given graph. A shortcut set is a set of edges that when added to a directed graph, decreases its diameter. The previous best known lower bounds for these three structures are given by Huang and Pettie [SWAT 2018]. For $O(n)$-sized spanners, we improve the lower bound on the additive stretch from $\\Omega(n^{1/11})$ to $\\Omega(n^{2/21})$. For $O(n)$-sized emulators, we improve the lower bound on the additive stretch from $\\Omega(n^{1/18})$ to $\\Omega(n^{1/16})$. For $O(m)$-sized shortcut sets, we improve the lower bound on the graph diameter from $\\Omega(n^{1/11})$ to $\\Omega(n^{1/8})$. Our key technical contribution, which is the basis of all of our bounds, is an improvement of a graph product known as an alternation product. <|reference_end|>",
"<|reference_start|> Having Hope in Hops: New Spanners, Preservers and Lower Bounds for Hopsets: Hopsets and spanners are fundamental graph structures, playing a key role in shortest path computation, distributed communication, and more. A (near-exact) hopset for a given graph $G$ is a (small) subset of weighted edges $H$ that when added to the graph $G$ reduces the number of hops (edges) of near-exact shortest paths. Spanners and distance preservers, on the other hand, ask for removing many edges from the graph while approximately preserving shortest path distances. We provide a general reduction scheme from graph hopsets to the known metric compression schemes of spanners, emulators and distance preservers. Consequently, we get new and improved upper bound constructions for the latter, as well as, new lower bound results for hopsets. Our work makes a significant progress on the tantalizing open problem concerning the formal connection between hopsets and spanners, e.g., as posed by Elkin and Neiman [Bull. EATCS 2020]. <|reference_end|>"
2310.12955-1 | <|cite_start|> (Reference: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets: We study episodic two-player zero-sum Markov games (MGs) in the offline setting, where the goal is to find an approximate Nash equilibrium (NE) policy pair based on a dataset collected a priori. When the dataset does not have uniform coverage over all policy pairs, finding an approximate NE involves challenges in three aspects: (i) distributional shift between the behavior policy and the optimal policy, (ii) function approximation to handle large state space, and (iii) minimax optimization for equilibrium solving. We propose a pessimism-based algorithm, dubbed as pessimistic minimax value iteration (PMVI), which overcomes the distributional shift by constructing pessimistic estimates of the value functions for both players and outputs a policy pair by solving NEs based on the two value functions. Furthermore, we establish a data-dependent upper bound on the suboptimality which recovers a sublinear rate without the assumption on uniform coverage of the dataset. We also prove an information-theoretical lower bound, which suggests that the data-dependent term in the upper bound is intrinsic. Our theoretical results also highlight a notion of "relative uncertainty", which characterizes the necessary and sufficient condition for achieving sample efficiency in offline MGs. To the best of our knowledge, we provide the first nearly minimax optimal result for offline MGs with function approximation.) <|cite_end|> <|cite_start|> (Reference: When are Offline Two-Player Zero-Sum Markov Games Solvable?: We study what dataset assumption permits solving offline two-player zero-sum Markov games. In stark contrast to the offline single-agent Markov decision process, we show that the single strategy concentration assumption is insufficient for learning the Nash equilibrium (NE) strategy in offline two-player zero-sum Markov games. On the other hand, we propose a new assumption named unilateral concentration and design a pessimism-type algorithm that is provably efficient under this assumption. In addition, we show that the unilateral concentration assumption is necessary for learning an NE strategy. Furthermore, our algorithm can achieve minimax sample complexity without any modification for two widely studied settings: dataset with uniform concentration assumption and turn-based Markov games. Our work serves as an important initial step towards understanding offline multi-agent reinforcement learning.) <|cite_end|> <|cite_start|> (Reference: Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game: Offline reinforcement learning (RL) aims at learning an optimal strategy using a pre-collected dataset without further interactions with the environment. While various algorithms have been proposed for offline RL in the previous literature, the minimax optimality has only been (nearly) established for tabular Markov decision processes (MDPs). In this paper, we focus on offline RL with linear function approximation and propose a new pessimism-based algorithm for offline linear MDP. At the core of our algorithm is the uncertainty decomposition via a reference function, which is new in the literature of offline RL under linear function approximation. Theoretical analysis demonstrates that our algorithm can match the performance lower bound up to logarithmic factors. 
We also extend our techniques to the two-player zero-sum Markov games (MGs), and establish a new performance lower bound for MGs, which tightens the existing result, and verifies the nearly minimax optimality of the proposed algorithm. To the best of our knowledge, these are the first computationally efficient and nearly minimax optimal algorithms for offline single-agent MDPs and MGs with linear function approximation.) <|cite_end|> <|cite_start|> (Reference: Settling the Sample Complexity of Model-Based Offline Reinforcement Learning: This paper is concerned with offline reinforcement learning (RL), which learns using pre-collected data without further exploration. Effective offline RL would be able to accommodate distribution shift and limited data coverage. However, prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality, thus posing an impediment to efficient offline RL in sample-starved applications. We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost for tabular Markov decision processes (MDPs). Concretely, consider a finite-horizon (resp. $\gamma$-discounted infinite-horizon) MDP with $S$ states and horizon $H$ (resp. effective horizon $\frac{1}{1-\gamma}$), and suppose the distribution shift of data is reflected by some single-policy clipped concentrability coefficient $C^{\star}_{\text{clipped}}$. We prove that model-based offline RL yields $\varepsilon$-accuracy with a sample complexity of \[ \begin{cases} \frac{H^{4}SC_{\text{clipped}}^{\star}}{\varepsilon^{2}} & (\text{finite-horizon MDPs}) \frac{SC_{\text{clipped}}^{\star}}{(1-\gamma)^{3}\varepsilon^{2}} & (\text{infinite-horizon MDPs}) \end{cases} \] up to log factor, which is minimax optimal for the entire $\varepsilon$-range. The proposed algorithms are "pessimistic" variants of value iteration with Bernstein-style penalties, and do not require sophisticated variance reduction. Our analysis framework is established upon delicate leave-one-out decoupling arguments in conjunction with careful self-bounding techniques tailored to MDPs.) <|cite_end|> <|cite_start|> (Reference: Adversarially Trained Actor Critic for Offline Reinforcement Learning: We propose Adversarially Trained Actor Critic (ATAC), a new model-free algorithm for offline reinforcement learning (RL) under insufficient data coverage, based on the concept of relative pessimism. ATAC is designed as a two-player Stackelberg game: A policy actor competes against an adversarially trained value critic, who finds data-consistent scenarios where the actor is inferior to the data-collection behavior policy. We prove that, when the actor attains no regret in the two-player game, running ATAC produces a policy that provably 1) outperforms the behavior policy over a wide range of hyperparameters that control the degree of pessimism, and 2) competes with the best policy covered by data with appropriately chosen hyperparameters. Compared with existing works, notably our framework offers both theoretical guarantees for general function approximation and a deep RL implementation scalable to complex environments and large datasets. In the D4RL benchmark, ATAC consistently outperforms state-of-the-art offline RL algorithms on a range of continuous control tasks.) <|cite_end|>dedicating to the development of pessimism-based algorithms for offline RL. 
These algorithms provably and efficiently find near-optimal policies under only a partial coverage condition. However, these works do not consider corrupted data. In contrast, when the cumulative corruption level is sublinear, i.e., $\zeta = o(N)$, our algorithm can handle corrupted data under similar partial coverage assumptions.
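For illustration only, the sketch below shows one common way the pessimism principle is instantiated in practice: an ensemble of critics supplies an uncertainty estimate, and the value target is penalized by that disagreement (a lower-confidence-bound style target). The ensemble size, penalty coefficient $\beta$, and array shapes are illustrative assumptions rather than the constructions used in the cited analyses.

```python
import numpy as np

def pessimistic_value_target(q_ensemble: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Lower-confidence-bound style target: ensemble mean minus beta times the
    ensemble standard deviation, so poorly covered state-action pairs are
    penalized more heavily."""
    q_mean = q_ensemble.mean(axis=0)
    q_std = q_ensemble.std(axis=0)
    return q_mean - beta * q_std

# Toy usage: five critics, a batch of three state-action pairs.
rng = np.random.default_rng(0)
q_members = rng.normal(loc=1.0, scale=0.3, size=(5, 3))
print(pessimistic_value_target(q_members, beta=2.0))
```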
\paragraph{Robust RL.} One type of robust RL is the distributionally robust RL, which aims to learn a policy that optimizes the worst-case performance across MDPs within an uncertainty set, typically framed as a Robust MDP problem <|cite_start|> (Reference: {Robustness in Markov decision problems with uncertain transition matrices: Optimal solutions to Markov Decision Problems (MDPs) are very sensitive with respect to the state transition probabilities. In many practical problems, the estimation of those probabilities is far from accurate. Hence, estimation errors are limiting factors in applying MDPs to real-world problems. We propose an algorithm for solving finite-state and finite-action MDPs, where the solution is guaranteed to be robust with respect to estimation errors on the state transition probabilities. Our algorithm involves a statistically accurate yet numerically efficient representation of uncertainty, via Kullback-Leibler divergence bounds. The worst-case complexity of the robust algorithm is the same as the original Bellman recursion. Hence, robustness can be added at practically no extra computing cost.) <|cite_end|> <|cite_start|> (Reference: {Robust dynamic programming: In this paper we propose a robust formulation for discrete time dynamic programming (DP). The objective of the robust formulation is to systematically mitigate the sensitivity of the DP optimal policy to ambiguity in the underlying transition probabilities. The ambiguity is modeled by associating a set of conditional measures with each state-action pair. Consequently, in the robust formulation each policy has a set of measures associated with it. We prove that when this set of measures has a certain "rectangularity" property, all of the main results for finite and infinite horizon DP extend to natural robust counterparts. We discuss techniques from Nilim and El Ghaoui [17] for constructing suitable sets of conditional measures that allow one to efficiently solve for the optimal robust policy. We also show that robust DP is equivalent to stochastic zero-sum games with perfect information.) <|cite_end|> <|cite_start|> (Reference: {Fast Bellman Updates for Robust MDPs: We describe two efficient, and exact, algorithms for computing Bellman updates in robust Markov decision processes (MDPs). The first algorithm uses a homotopy continuation method to compute updates for L1-constrained s, a-rectangular ambiguity sets. It runs in quasi-linear time for plain L1 norms and also generalizes to weighted L1 norms. The second algorithm uses bisection to compute updates for robust MDPs with s-rectangular ambiguity sets. This algorithm, when combined with the homotopy method, also has a quasi-linear runtime. Unlike previous methods, our algorithms compute the primal solution in addition to the optimal objective value, which makes them useful in policy iteration methods. Our experimental results indicate that the proposed methods are over 1,000 times faster than Gurobi, a state-of-the-art commercial optimization package, for small instances, and the performance gap grows considerably with problem size.) <|cite_end|> <|cite_start|> (Reference: {Robust Reinforcement Learning: A Review of Foundations and Recent Advances: Reinforcement learning (RL) has become a highly successful framework for learning in Markov decision processes (MDP). Due to the adoption of RL in realistic and complex environments, solution robustness becomes an increasingly important aspect of RL deployment. 
Nevertheless, current RL algorithms struggle with robustness to uncertainty, disturbances, or structural changes in the environment. We survey the literature on robust approaches to reinforcement learning and categorize these methods in four different ways: (i) Transition robust designs account for uncertainties in the system dynamics by manipulating the transition probabilities between states; (ii) Disturbance robust designs leverage external forces to model uncertainty in the system behavior; (iii) Action robust designs redirect transitions of the system by corrupting an agent’s output; (iv) Observation robust designs exploit or distort the perceived system state of the policy. Each of these robust designs alters a different aspect of the MDP. Additionally, we address the connection of robustness to the risk-based and entropy-regularized RL formulations. The resulting survey covers all fundamental concepts underlying the approaches to robust reinforcement learning and their recent advances.) <|cite_end|>. Recently, <|cite_start|> (Reference: Provable Sim-to-real Transfer in Continuous Domain with Partial Observations: Sim-to-real transfer trains RL agents in the simulated environments and then deploys them in the real world. Sim-to-real transfer has been widely used in practice because it is often cheaper, safer and much faster to collect samples in simulation than in the real world. Despite the empirical success of the sim-to-real transfer, its theoretical foundation is much less understood. In this paper, we study the sim-to-real transfer in continuous domain with partial observations, where the simulated environments and real-world environments are modeled by linear quadratic Gaussian (LQG) systems. We show that a popular robust adversarial training algorithm is capable of learning a policy from the simulated environment that is competitive to the optimal policy in the real-world environment. To achieve our results, we design a new algorithm for infinite-horizon average-cost LQGs and establish a regret bound that depends on the intrinsic complexity of the model class. Our algorithm crucially relies on a novel history clipping scheme, which might be of independent interest.) <|cite_end|>prove that distributionally robust RL can effectively reduce the sim-to-real gap. Besides, numerous studies in the online setting have explored robustness to perturbations on observations <|cite_start|> (Reference: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations: A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises. Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions. Several works have shown this vulnerability via adversarial attacks, but existing approaches on improving the robustness of DRL under this setting have limited success and lack for theoretical principles. We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks. 
We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled policy regularization which can be applied to a large family of DRL algorithms, including proximal policy optimization (PPO), deep deterministic policy gradient (DDPG) and deep Q networks (DQN), for both discrete and continuous action control problems. We significantly improve the robustness of PPO, DDPG and DQN agents under a suite of strong white box adversarial attacks, including new attacks of our own. Additionally, we find that a robust policy noticeably improves DRL performance even without an adversary in a number of environments. Our code is available at https://github.com/chenhongge/StateAdvDRL.) <|cite_end|> <|cite_start|> (Reference: Robust Reinforcement Learning on State Observations with Learned Optimal Adversary: We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks to deep reinforcement learning (DRL) and is also important for rolling out real-world RL agent under unpredictable sensing noise. With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found, which is guaranteed to obtain the worst case agent reward. For DRL settings, this leads to a novel empirical adversarial attack to RL agents via a learned adversary that is much stronger than previous ones. To enhance the robustness of an agent, we propose a framework of alternating training with learned adversaries (ATLA), which trains an adversary online together with the agent using policy gradient following the optimal adversarial attack framework. Additionally, inspired by the analysis of state-adversarial Markov decision process (SA-MDP), we show that past states and actions (history) can be useful for learning a robust agent, and we empirically find a LSTM based policy can be more robust under adversaries. Empirical evaluations on a few continuous control environments show that ATLA achieves state-of-the-art performance under strong adversaries. Our code is available at https://github.com/huanzhang12/ATLA_robust_RL.) <|cite_end|>, actions <|cite_start|> (Reference: Robust Adversarial Reinforcement Learning: Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired from H-infinity control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces/disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced -- that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. 
Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper and Walker2d) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training/test conditions; and c) outperform the baseline even in the absence of the adversary.) <|cite_end|> <|cite_start|> (Reference: Action Robust Reinforcement Learning and Applications in Continuous Control: A policy is said to be robust if it maximizes the reward while considering a bad, or even adversarial, model. In this work we formalize two new criteria of robustness to action uncertainty. Specifically, we consider two scenarios in which the agent attempts to perform an action $a$, and (i) with probability $\alpha$, an alternative adversarial action $\bar a$ is taken, or (ii) an adversary adds a perturbation to the selected action in the case of continuous action space. We show that our criteria are related to common forms of uncertainty in robotics domains, such as the occurrence of abrupt forces, and suggest algorithms in the tabular case. Building on the suggested algorithms, we generalize our approach to deep reinforcement learning (DRL) and provide extensive experiments in the various MuJoCo domains. Our experiments show that not only does our approach produce robust policies, but it also improves the performance in the absence of perturbations. This generalization indicates that action-robustness can be thought of as implicit regularization in RL problems.) <|cite_end|>, rewards <|cite_start|> (Reference: Reinforcement Learning with Perturbed Rewards: Recent studies have shown that reinforcement learning (RL) models are vulnerable in various noisy scenarios. For instance, the observed reward channel is often subject to noise in practice (e.g., when rewards are collected through sensors), and is therefore not credible. In addition, for applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors by receiving corrupted rewards. In this paper, we consider noisy RL problems with perturbed rewards, which can be approximated with a confusion matrix. We develop a robust RL framework that enables agents to learn in noisy environments where only perturbed rewards are observed. Our solution framework builds on existing RL/DRL algorithms and firstly addresses the biased noisy reward setting without any assumptions on the true distribution (e.g., zero-mean Gaussian noise as made in previous works). The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that trained policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm is able to obtain 84.6% and 80.8% improvements on average score for five Atari games, with error rates as 10% and 30% respectively.) <|cite_end|> <|cite_start|> (Reference: Regularized Policies are Reward Robust: Entropic regularization of policies in Reinforcement Learning (RL) is a commonly used heuristic to ensure that the learned policy explores the state-space sufficiently before overfitting to a local optimal policy. The primary motivation for using entropy is for exploration and disambiguating optimal policies; however, the theoretical effects are not entirely understood. 
In this work, we study the more general regularized RL objective and using Fenchel duality; we derive the dual problem which takes the form of an adversarial reward problem. In particular, we find that the optimal policy found by a regularized objective is precisely an optimal policy of a reinforcement learning problem under a worst-case adversarial reward. Our result allows us to reinterpret the popular entropic regularization scheme as a form of robustification. Furthermore, due to the generality of our results, we apply to other existing regularization schemes. Our results thus give insights into the effects of regularization of policies and deepen our understanding of exploration through robust rewards at large.) <|cite_end|>, and dynamics <|cite_start|> (Reference: Robust Reinforcement Learning for Continuous Control with Model Misspecification: We provide a framework for incorporating robustness -- to perturbations in the transition dynamics which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms. We specifically focus on incorporating robustness into a state-of-the-art continuous control RL algorithm called Maximum a-posteriori Policy Optimization (MPO). We achieve this by learning a policy that optimizes for a worst case expected return objective and derive a corresponding robust entropy-regularized Bellman contraction operator. In addition, we introduce a less conservative, soft-robust, entropy-regularized objective with a corresponding Bellman operator. We show that both, robust and soft-robust policies, outperform their non-robust counterparts in nine Mujoco domains with environment perturbations. In addition, we show improved robust performance on a high-dimensional, simulated, dexterous robotic hand. Finally, we present multiple investigative experiments that provide a deeper insight into the robustness framework. This includes an adaptation to another continuous control RL algorithm as well as learning the uncertainty set from offline data. Performance videos can be found online at https://sites.google.com/view/robust-rl.) <|cite_end|>. There is also a line of theory works <|cite_start|> (Reference: Corruption-robust exploration in episodic reinforcement learning: We initiate the study of multi-stage episodic reinforcement learning under adversarial corruptions in both the rewards and the transition probabilities of the underlying system extending recent results for the special case of stochastic bandits. We provide a framework which modifies the aggressive exploration enjoyed by existing reinforcement learning approaches based on "optimism in the face of uncertainty", by complementing them with principles from "action elimination". Importantly, our framework circumvents the major challenges posed by naively applying action elimination in the RL setting, as formalized by a lower bound we demonstrate. Our framework yields efficient algorithms which (a) attain near-optimal regret in the absence of corruptions and (b) adapt to unknown levels corruption, enjoying regret guarantees which degrade gracefully in the total corruption encountered. To showcase the generality of our approach, we derive results for both tabular settings (where states and actions are finite) as well as linear-function-approximation settings (where the dynamics and rewards admit a linear underlying representation). Notably, our work provides the first sublinear regret guarantee which accommodates any deviation from purely i.i.d. 
transitions in the bandit-feedback model for episodic reinforcement learning.) <|cite_end|> <|cite_start|> (Reference: On reinforcement learning with adversarial corruption and its application to block mdp: We study reinforcement learning (RL) in episodic tabular MDPs with adversarial corruptions, where some episodes can be adversarially corrupted. When the total number of corrupted episodes is known, we propose an algorithm, Cor-ruption Robust Monotonic Value Propagation ( CR-MVP ), which achieves a regret bound of ˜ O (cid:16)(cid:16) √ SAK + S 2 A + CSA ) (cid:17) polylog ( H ) (cid:17) , where S is the number of states, A is the number of actions, H is the planning horizon, K is the number of episodes, and C is the known corruption level. We also provide a novel lower bound, which indicates that our upper bound is nearly tight. Finally, as an application, we study RL with rich observations in the block MDP model. We provide the first algorithm that achieves a √ K - type regret in this setting and is oracle efficient.) <|cite_end|> <|cite_start|> (Reference: A Model Selection Approach for Corruption Robust Reinforcement Learning: We develop a model selection approach to tackle reinforcement learning with adversarial corruption in both transition and reward. For finite-horizon tabular MDPs, without prior knowledge on the total amount of corruption, our algorithm achieves a regret bound of $\widetilde{\mathcal{O}}(\min\{\frac{1}{\Delta}, \sqrt{T}\}+C)$ where $T$ is the number of episodes, $C$ is the total amount of corruption, and $\Delta$ is the reward gap between the best and the second-best policy. This is the first worst-case optimal bound achieved without knowledge of $C$, improving previous results of Lykouris et al. (2021); Chen et al. (2021); Wu et al. (2021). For finite-horizon linear MDPs, we develop a computationally efficient algorithm with a regret bound of $\widetilde{\mathcal{O}}(\sqrt{(1+C)T})$, and another computationally inefficient one with $\widetilde{\mathcal{O}}(\sqrt{T}+C)$, improving the result of Lykouris et al. (2021) and answering an open question by Zhang et al. (2021b). Finally, our model selection framework can be easily applied to other settings including linear bandits, linear contextual bandits, and MDPs with general function approximation, leading to several improved or new results.) <|cite_end|> <|cite_start|> (Reference: Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes: Despite the significant interest and progress in reinforcement learning (RL) problems with adversarial corruption, current works are either confined to the linear setting or lead to an undesired $\tilde{O}(\sqrt{T}\zeta)$ regret bound, where $T$ is the number of rounds and $\zeta$ is the total amount of corruption. In this paper, we consider the contextual bandit with general function approximation and propose a computationally efficient algorithm to achieve a regret of $\tilde{O}(\sqrt{T}+\zeta)$. The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandit and a new weighted estimator of uncertainty for the general function class. In contrast to the existing analysis that heavily relies on the linear structure, we develop a novel technique to control the sum of weighted uncertainty, thus establishing the final regret bounds. 
We then generalize our algorithm to the episodic MDP setting and first achieve an additive dependence on the corruption level $\zeta$ in the scenario of general function approximation. Notably, our algorithms achieve regret bounds either nearly match the performance lower bound or improve the existing methods for all the corruption levels and in both known and unknown $\zeta$ cases.) <|cite_end|>studying online corruption-robust RL.
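For reference, the distributionally robust (robust MDP) objective discussed above typically takes the following worst-case form over an uncertainty set $\mathcal{U}(P^{0})$ around a nominal transition model $P^{0}$; this is a standard formulation rather than the specific construction of any single cited paper:
\[
\max_{\pi}\; \min_{P \in \mathcal{U}(P^{0})}\; \mathbb{E}_{\pi, P}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right],
\]
where $\mathcal{U}(P^{0})$ is commonly a Kullback--Leibler or total-variation ball centered at $P^{0}$.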
In the offline setting, a number of works focus on testing-time (distributional) robustness in offline RL <|cite_start|> (Reference: Finite-sample regret bound for distributionally robust offline tabular reinforcement learning: While reinforcement learning has witnessed tremendous success recently in a wide range of domains, robustness–or the lack thereof– remains an important issue that has not been fully explored. In this paper, we provide a distributionally robust formulation of of-fline learning policy in tabular RL that aims to learn a policy from historical data (collected by some other behavior policy) that is robust to the future environment that can deviate from the training environment. We first develop a novel policy evaluation scheme that accurately estimates the robust value (i.e. how robust it is in a perturbed environment) of any given policy and establish its finite-sample estimation error. Building on this, we then develop a novel and minimax-optimal distributionally robust learning algo-rithm that achieves O P (1 / √ n ) regret, meaning that with high probability, the policy learned from using n training data points will be O (1 / √ n ) close to the optimal distribu-tionally robust policy. Finally, our simulation results demonstrate the superiority of our dis-tributionally robust approach compared to non-robust RL algorithms.) <|cite_end|> <|cite_start|> (Reference: Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity: This paper concerns the central issues of model robustness and sample efficiency in offline reinforcement learning (RL), which aims to learn to perform decision making from history data without active exploration. Due to uncertainties and variabilities of the environment, it is critical to learn a robust policy -- with as few samples as possible -- that performs well even when the deployed environment deviates from the nominal one used to collect the history dataset. We consider a distributionally robust formulation of offline RL, focusing on tabular robust Markov decision processes with an uncertainty set specified by the Kullback-Leibler divergence in both finite-horizon and infinite-horizon settings. To combat with sample scarcity, a model-based algorithm that combines distributionally robust value iteration with the principle of pessimism in the face of uncertainty is proposed, by penalizing the robust value estimates with a carefully designed data-driven penalty term. Under a mild and tailored assumption of the history dataset that measures distribution shift without requiring full coverage of the state-action space, we establish the finite-sample complexity of the proposed algorithms. We further develop an information-theoretic lower bound, which suggests that learning RMDPs is at least as hard as the standard MDPs when the uncertainty level is sufficient small, and corroborates the tightness of our upper bound up to polynomial factors of the (effective) horizon length for a range of uncertainty levels. To the best our knowledge, this provides the first provably near-optimal robust offline RL algorithm that learns under model uncertainty and partial coverage.) <|cite_end|> <|cite_start|> (Reference: Provable Sim-to-real Transfer in Continuous Domain with Partial Observations: Sim-to-real transfer trains RL agents in the simulated environments and then deploys them in the real world. 
Sim-to-real transfer has been widely used in practice because it is often cheaper, safer and much faster to collect samples in simulation than in the real world. Despite the empirical success of the sim-to-real transfer, its theoretical foundation is much less understood. In this paper, we study the sim-to-real transfer in continuous domain with partial observations, where the simulated environments and real-world environments are modeled by linear quadratic Gaussian (LQG) systems. We show that a popular robust adversarial training algorithm is capable of learning a policy from the simulated environment that is competitive to the optimal policy in the real-world environment. To achieve our results, we design a new algorithm for infinite-horizon average-cost LQGs and establish a regret bound that depends on the intrinsic complexity of the model class. Our algorithm crucially relies on a novel history clipping scheme, which might be of independent interest.) <|cite_end|> <|cite_start|> (Reference: RORL: Robust Offline Reinforcement Learning via Conservative Smoothing: Offline reinforcement learning (RL) provides a promising direction to exploit massive amount of offline data for complex decision-making tasks. Due to the distribution shift issue, current offline RL algorithms are generally designed to be conservative in value estimation and action selection. However, such conservatism can impair the robustness of learned policies when encountering observation deviation under realistic conditions, such as sensor errors and adversarial attacks. To trade off robustness and conservatism, we propose Robust Offline Reinforcement Learning (RORL) with a novel conservative smoothing technique. In RORL, we explicitly introduce regularization on the policy and the value function for states near the dataset, as well as additional conservative value estimation on these states. Theoretically, we show RORL enjoys a tighter suboptimality bound than recent theoretical results in linear MDPs. We demonstrate that RORL can achieve state-of-the-art performance on the general offline RL benchmark and is considerably robust to adversarial observation perturbations.) <|cite_end|> <|cite_start|> (Reference: Robust Reinforcement Learning using Offline Data: The goal of robust reinforcement learning (RL) is to learn a policy that is robust against the uncertainty in model parameters. Parameter uncertainty commonly occurs in many real-world RL applications due to simulator modeling errors, changes in the real-world system dynamics over time, and adversarial disturbances. Robust RL is typically formulated as a max-min problem, where the objective is to learn the policy that maximizes the value against the worst possible models that lie in an uncertainty set. In this work, we propose a robust RL algorithm called Robust Fitted Q-Iteration (RFQI), which uses only an offline dataset to learn the optimal robust policy. Robust RL with offline data is significantly more challenging than its non-robust counterpart because of the minimization over all models present in the robust Bellman operator. This poses challenges in offline data collection, optimization over the models, and unbiased estimation. In this work, we propose a systematic approach to overcome these challenges, resulting in our RFQI algorithm. We prove that RFQI learns a near-optimal robust policy under standard assumptions and demonstrate its superior performance on standard benchmark problems.) 
<|cite_end|> <|cite_start|> (Reference: Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage: In this paper, we study distributionally robust offline reinforcement learning (robust offline RL), which seeks to find an optimal policy purely from an offline dataset that can perform well in perturbed environments. In specific, we propose a generic algorithm framework called Doubly Pessimistic Model-based Policy Optimization ($P^2MPO$), which features a novel combination of a flexible model estimation subroutine and a doubly pessimistic policy optimization step. Notably, the double pessimism principle is crucial to overcome the distributional shifts incurred by (i) the mismatch between the behavior policy and the target policies; and (ii) the perturbation of the nominal model. Under certain accuracy conditions on the model estimation subroutine, we prove that $P^2MPO$ is sample-efficient with robust partial coverage data, which only requires the offline data to have good coverage of the distributions induced by the optimal robust policy and the perturbed models around the nominal model. By tailoring specific model estimation subroutines for concrete examples of RMDPs, including tabular RMDPs, factored RMDPs, kernel and neural RMDPs, we prove that $P^2MPO$ enjoys a $\tilde{\mathcal{O}}(n^{-1/2})$ convergence rate, where $n$ is the dataset size. We highlight that all these examples, except tabular RMDPs, are first identified and proven tractable by this work. Furthermore, we continue our study of robust offline RL in the robust Markov games (RMGs). By extending the double pessimism principle identified for single-agent RMDPs, we propose another algorithm framework that can efficiently find the robust Nash equilibria among players using only robust unilateral (partial) coverage data. To our best knowledge, this work proposes the first general learning principle -- double pessimism -- for robust offline RL and shows that it is provably efficient with general function approximation.) <|cite_end|>. Regarding the training-time robustness of offline RL, <|cite_start|> (Reference: Survival Instinct in Offline Reinforcement Learning: We present a novel observation about the behavior of offline reinforcement learning (RL) algorithms: on many benchmark datasets, offline RL can produce well-performing and safe policies even when trained with "wrong" reward labels, such as those that are zero everywhere or are negatives of the true rewards. This phenomenon cannot be easily explained by offline RL's return maximization objective. Moreover, it gives offline RL a degree of robustness that is uncharacteristic of its online RL counterparts, which are known to be sensitive to reward design. We demonstrate that this surprising robustness property is attributable to an interplay between the notion of pessimism in offline RL algorithms and certain implicit biases in common data collection practices. As we prove in this work, pessimism endows the agent with a "survival instinct", i.e., an incentive to stay within the data support in the long term, while the limited and biased data coverage further constrains the set of survival policies. Formally, given a reward class -- which may not even contain the true reward -- we identify conditions on the training data distribution that enable offline RL to learn a near-optimal and safe policy from any reward within the class. 
We argue that the survival instinct should be taken into account when interpreting results from existing offline RL benchmarks and when creating future ones. Our empirical and theoretical results suggest a new paradigm for RL, whereby an agent is nudged to learn a desirable behavior with imperfect reward but purposely biased data coverage.) <|cite_end|>investigated reward attacks in offline RL, revealing that certain dataset biases can implicitly enhance offline RL's resilience to reward corruption. <|cite_start|> (Reference: COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks: As reinforcement learning (RL) has achieved near human-level performance in a variety of tasks, its robustness has raised great attention. While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defenses, its robustness against training-time (poisoning) attacks remains largely unanswered. In this work, we focus on certifying the robustness of offline RL in the presence of poisoning attacks, where a subset of training trajectories could be arbitrarily manipulated. We propose the first certification framework, COPA, to certify the number of poisoning trajectories that can be tolerated regarding different certification criteria. Given the complex structure of RL, we propose two certification criteria: per-state action stability and cumulative reward bound. To further improve the certification, we propose new partition and aggregation protocols to train robust policies. We further prove that some of the proposed certification methods are theoretically tight and some are NP-Complete problems. We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) The proposed robust aggregation protocols such as temporal aggregation can significantly improve the certifications; (2) Our certification for both per-state action stability and cumulative reward bound are efficient and tight; (3) The certification for different training algorithms and environments are different, implying their intrinsic robustness properties. All experimental results are available at https://copa-leaderboard.github.io.) <|cite_end|>propose a certification framework designed to ascertain the number of tolerable poisoning trajectories in relation to various certification criteria. From a purely theoretical perspective, <|cite_start|> (Reference: Corruption-Robust Offline Reinforcement Learning: We study the adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples $(s, a, r, s')$, an adversary is allowed to arbitrarily modify $\epsilon$ fraction of the tuples. From the corrupted dataset the learner aims to robustly identify a near-optimal policy. We first show that a worst-case $\Omega(d\epsilon)$ optimality gap is unavoidable in linear MDP of dimension $d$, even if the adversary only corrupts the reward element in a tuple. This contrasts with dimension-free results in robust supervised learning and best-known lower-bound in the online RL setting with corruption. Next, we propose robust variants of the Least-Square Value Iteration (LSVI) algorithm utilizing robust supervised learning oracles, which achieve near-matching performances in cases both with and without full data coverage. The algorithm requires the knowledge of $\epsilon$ to design the pessimism bonus in the no-coverage case. 
Surprisingly, in this case, the knowledge of $\epsilon$ is necessary, as we show that being adaptive to unknown $\epsilon$ is impossible.This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.) <|cite_end|>studied offline RL under data corruption. One concurrent work <|cite_start|> (Reference: Corruption-Robust Offline Reinforcement Learning with General Function Approximation: We investigate the problem of corruption robustness in offline reinforcement learning (RL) with general function approximation, where an adversary can corrupt each sample in the offline dataset, and the corruption level $\zeta\geq0$ quantifies the cumulative corruption amount over $n$ episodes and $H$ steps. Our goal is to find a policy that is robust to such corruption and minimizes the suboptimality gap with respect to the optimal policy for the uncorrupted Markov decision processes (MDPs). Drawing inspiration from the uncertainty-weighting technique from the robust online RL setting \citep{he2022nearly,ye2022corruptionrobust}, we design a new uncertainty weight iteration procedure to efficiently compute on batched samples and propose a corruption-robust algorithm for offline RL. Notably, under the assumption of single policy coverage and the knowledge of $\zeta$, our proposed algorithm achieves a suboptimality bound that is worsened by an additive factor of $\mathcal{O}(\zeta (C(\widehat{\mathcal{F}},\mu)n)^{-1})$ due to the corruption. Here $\widehat{\mathcal{F}}$ is the confidence set, and the dataset $\mathcal{Z}_n^H$, and $C(\widehat{\mathcal{F}},\mu)$ is a coefficient that depends on $\widehat{\mathcal{F}}$ and the underlying data distribution $\mu$. When specialized to linear MDPs, the corruption-dependent error term reduces to $\mathcal{O}(\zeta d n^{-1})$ with $d$ being the dimension of the feature map, which matches the existing lower bound for corrupted linear MDPs. This suggests that our analysis is tight in terms of the corruption-dependent term.) <|cite_end|>leverages uncertainty weighting to tackle reward and dynamics corruption with theoretical guarantees. Different from these works, we propose an algorithm that is both provable and practical under diverse data corruption on all elements.
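As a purely schematic illustration of what corruption ``on all elements'' can mean, the snippet below randomly perturbs a fraction of the states, actions, rewards, and next states in an offline batch. The corruption rate, noise scale, and array layout are hypothetical choices for exposition, not the corruption protocol studied in any cited work or in our experiments.

```python
import numpy as np

def corrupt_batch(states, actions, rewards, next_states,
                  corrupt_frac=0.3, noise_scale=1.0, seed=0):
    """Randomly add Gaussian noise to a fraction of every element of an
    offline batch (states, actions, rewards, next states)."""
    rng = np.random.default_rng(seed)
    arrays = [np.array(x, dtype=float, copy=True)
              for x in (states, actions, rewards, next_states)]
    n = arrays[0].shape[0]
    for arr in arrays:
        mask = rng.random(n) < corrupt_frac            # which transitions to corrupt
        arr[mask] += noise_scale * rng.standard_normal(arr[mask].shape)
    return arrays

# Toy usage: a batch of 4 transitions with a 2-D state and 1-D action space.
s = np.zeros((4, 2))
a = np.zeros((4, 1))
r = np.zeros(4)
s2 = np.zeros((4, 2))
s_c, a_c, r_c, s2_c = corrupt_batch(s, a, r, s2, corrupt_frac=0.5)
```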
\paragraph{Robust Imitation Learning.} Robust imitation learning focuses on imitating the expert policy using corrupted demonstrations <|cite_start|> (Reference: Robust Imitation Learning from Corrupted Demonstrations: We consider offline Imitation Learning from corrupted demonstrations where a constant fraction of data can be noise or even arbitrary outliers. Classical approaches such as Behavior Cloning assumes that demonstrations are collected by an presumably optimal expert, hence may fail drastically when learning from corrupted demonstrations. We propose a novel robust algorithm by minimizing a Median-of-Means (MOM) objective which guarantees the accurate estimation of policy, even in the presence of constant fraction of outliers. Our theoretical analysis shows that our robust method in the corrupted setting enjoys nearly the same error scaling and sample complexity guarantees as the classical Behavior Cloning in the expert demonstration setting. Our experiments on continuous-control benchmarks validate that our method exhibits the predicted robustness and effectiveness, and achieves competitive results compared to existing imitation learning methods.) <|cite_end|>or a mixture of expert and non-expert demonstrations <|cite_start|> (Reference: Imitation Learning from Imperfect Demonstration: Imitation learning (IL) aims to learn an optimal policy from demonstrations. However, such demonstrations are often imperfect since collecting optimal ones is costly. To effectively learn from imperfect demonstrations, we propose a novel approach that utilizes confidence scores, which describe the quality of demonstrations. More specifically, we propose two confidence-based IL methods, namely two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). We show that confidence scores given only to a small portion of sub-optimal demonstrations significantly improve the performance of IL both theoretically and empirically.) <|cite_end|> <|cite_start|> (Reference: Variational Imitation Learning with Diverse-quality Demonstrations: . (19) Since ftpφ,ωq “ Ftpφ,ω,ψq “ maxψ Ftpφ,ω,ψq, we have that fpφ,ωq “ maxψ Fpφ,ω,ψq. A.2. Lower-bound G Next, we derive the lower-bound G of gpφ,ωq “ logZφ,ω . We first derive a trivial lower-bound using a “general” variational distribution over trajectories and discuss its issue. Then, we derive a lower-bound presented in the paper by using a structured variational distribution. Recall that the normalization term Zφ,ω of the model pφ,ω is given by Zφ,ω “ K) <|cite_end|> <|cite_start|> (Reference: Robust Imitation Learning from Noisy Demonstrations: Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning. In this paper, we first theoretically show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss. Based on this theoretical finding, we then propose a new imitation learning method that optimizes the classification risk by effectively combining pseudo-labeling with co-training. Unlike existing methods, our method does not require additional labels or strict assumptions about noise distributions. Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Behavioral Cloning from Noisy Demonstrations: ) <|cite_end|>. 
These approaches primarily concentrate on noise or attacks on states and actions, without considering the future return. In contrast, robust offline RL faces the intricate challenges associated with corruption in rewards and dynamics.
\paragraph{Heavy-tailedness in RL.}
In the realm of RL, <|cite_start|> (Reference: No-Regret Reinforcement Learning with Heavy-Tailed Rewards: Reinforcement learning algorithms typically assume rewards to be sampled from light-tailed distributions, such as Gaussian or bounded. However, a wide variety of real-world systems generate rewards that follow heavy-tailed distributions. We consider such scenarios in the setting of undiscounted reinforcement learning. By constructing a lower bound, we show that the difficulty of learning heavy-tailed rewards asymptotically dominates the difficulty of learning transition probabilities. Leveraging techniques from robust mean estimation, we propose Heavy-UCRL2 and Heavy-Q-Learning, and show that they achieve near-optimal regret bounds in this setting. Our algorithms also naturally generalize to deep reinforcement learning applications; we instantiate Heavy-DQN as an example of this. We demonstrate that all of our algorithms outperform baselines on both synthetic MDPs and standard RL benchmarks.) <|cite_end|> <|cite_start|> (Reference: Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds: While numerous works have focused on devising efficient algorithms for reinforcement learning (RL) with uniformly bounded rewards, it remains an open question whether sample or time-efficient algorithms for RL with large state-action space exist when the rewards are \emph{heavy-tailed}, i.e., with only finite $(1+\epsilon)$-th moments for some $\epsilon\in(0,1]$. In this work, we address the challenge of such rewards in RL with linear function approximation. We first design an algorithm, \textsc{Heavy-OFUL}, for heavy-tailed linear bandits, achieving an \emph{instance-dependent} $T$-round regret of $\tilde{O}\big(d T^{\frac{1-\epsilon}{2(1+\epsilon)}} \sqrt{\sum_{t=1}^T \nu_t^2} + d T^{\frac{1-\epsilon}{2(1+\epsilon)}}\big)$, the \emph{first} of this kind. Here, $d$ is the feature dimension, and $\nu_t^{1+\epsilon}$ is the $(1+\epsilon)$-th central moment of the reward at the $t$-th round. We further show the above bound is minimax optimal when applied to the worst-case instances in stochastic and deterministic linear bandits. We then extend this algorithm to the RL settings with linear function approximation. Our algorithm, termed as \textsc{Heavy-LSVI-UCB}, achieves the \emph{first} computationally efficient \emph{instance-dependent} $K$-episode regret of $\tilde{O}(d \sqrt{H \mathcal{U}^*} K^\frac{1}{1+\epsilon} + d \sqrt{H \mathcal{V}^* K})$. Here, $H$ is length of the episode, and $\mathcal{U}^*, \mathcal{V}^*$ are instance-dependent quantities scaling with the central moment of reward and value functions, respectively. We also provide a matching minimax lower bound $\Omega(d H K^{\frac{1}{1+\epsilon}} + d \sqrt{H^3 K})$ to demonstrate the optimality of our algorithm in the worst case. Our result is achieved via a novel robust self-normalized concentration inequality that may be of independent interest in handling heavy-tailed noise in general online regression problems.) <|cite_end|>delved into the issue of heavy-tailed rewards in tabular Markov Decision Processes (MDPs) and function approximation, respectively. There is also a line of works <|cite_start|> (Reference: Bandits with heavy tail: The stochastic multi-armed bandit problem is well understood when the reward distributions are sub-Gaussian. 
In this paper we examine the bandit problem under the weaker assumption that the distributions have moments of order 1+\epsilon, for some $\epsilon \in (0,1]$. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds that also show that the best achievable regret deteriorates when \epsilon <1.) <|cite_end|> <|cite_start|> (Reference: Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs: In linear stochastic bandits, it is commonly assumed that payoffs are with sub-Gaussian noises. In this paper, under a weaker assumption on noises, we study the problem of \underline{lin}ear stochastic {\underline b}andits with h{\underline e}avy-{\underline t}ailed payoffs (LinBET), where the distributions have finite moments of order $1+\epsilon$, for some $\epsilon\in (0,1]$. We rigorously analyze the regret lower bound of LinBET as $\Omega(T^{\frac{1}{1+\epsilon}})$, implying that finite moments of order 2 (i.e., finite variances) yield the bound of $\Omega(\sqrt{T})$, with $T$ being the total number of rounds to play bandits. The provided lower bound also indicates that the state-of-the-art algorithms for LinBET are far from optimal. By adopting median of means with a well-designed allocation of decisions and truncation based on historical information, we develop two novel bandit algorithms, where the regret upper bounds match the lower bound up to polylogarithmic factors. To the best of our knowledge, we are the first to solve LinBET optimally in the sense of the polynomial order on $T$. Our proposed algorithms are evaluated based on synthetic datasets, and outperform the state-of-the-art results.) <|cite_end|> <|cite_start|> (Reference: Nearly Optimal Regret for Stochastic Linear Bandits with Heavy-Tailed Payoffs: In this paper, we study the problem of stochastic linear bandits with finite action sets. Most of existing work assume the payoffs are bounded or sub-Gaussian, which may be violated in some scenarios such as financial markets. To settle this issue, we analyze the linear bandits with heavy-tailed payoffs, where the payoffs admit finite $1+\epsilon$ moments for some $\epsilon\in(0,1]$. Through median of means and dynamic truncation, we propose two novel algorithms which enjoy a sublinear regret bound of $\widetilde{O}(d^{\frac{1}{2}}T^{\frac{1}{1+\epsilon}})$, where $d$ is the dimension of contextual information and $T$ is the time horizon. Meanwhile, we provide an $\Omega(d^{\frac{\epsilon}{1+\epsilon}}T^{\frac{1}{1+\epsilon}})$ lower bound, which implies our upper bound matches the lower bound up to polylogarithmic factors in the order of $d$ and $T$ when $\epsilon=1$. Finally, we conduct numerical experiments to demonstrate the effectiveness of our algorithms and the empirical results strongly support our theoretical guarantees.) <|cite_end|> <|cite_start|> (Reference: Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs: Despite a large amount of effort in dealing with heavy-tailed error in machine learning, little is known when moments of the error can become non-existential: the random noise $\eta$ satisfies Pr$\left[|\eta| > |y|\right] \le 1/|y|^{\alpha}$ for some $\alpha > 0$. 
We make the first attempt to actively handle such super heavy-tailed noise in bandit learning problems: We propose a novel robust statistical estimator, mean of medians, which estimates a random variable by computing the empirical mean of a sequence of empirical medians. We then present a generic reductionist algorithmic framework for solving bandit learning problems (including multi-armed and linear bandit problem): the mean of medians estimator can be applied to nearly any bandit learning algorithm as a black-box filtering for its reward signals and obtain similar regret bound as if the reward is sub-Gaussian. We show that the regret bound is near-optimal even with very heavy-tailed noise. We also empirically demonstrate the effectiveness of the proposed algorithm, which further corroborates our theoretical results.) <|cite_end|> <|cite_start|> (Reference: Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits: In this paper, we generalize the concept of heavy-tailed multi-armed bandits to adversarial environments, and develop robust best-of-both-worlds algorithms for heavy-tailed multi-armed bandits (MAB), where losses have $\alpha$-th ($1<\alpha\le 2$) moments bounded by $\sigma^\alpha$, while the variances may not exist. Specifically, we design an algorithm \texttt{HTINF}, when the heavy-tail parameters $\alpha$ and $\sigma$ are known to the agent, \texttt{HTINF} simultaneously achieves the optimal regret for both stochastic and adversarial environments, without knowing the actual environment type a-priori. When $\alpha,\sigma$ are unknown, \texttt{HTINF} achieves a $\log T$-style instance-dependent regret in stochastic cases and $o(T)$ no-regret guarantee in adversarial cases. We further develop an algorithm \texttt{AdaTINF}, achieving $\mathcal O(\sigma K^{1-\nicefrac 1\alpha}T^{\nicefrac{1}{\alpha}})$ minimax optimal regret even in adversarial settings, without prior knowledge on $\alpha$ and $\sigma$. This result matches the known regret lower-bound (Bubeck et al., 2013), which assumed a stochastic environment and $\alpha$ and $\sigma$ are both known. To our knowledge, the proposed \texttt{HTINF} algorithm is the first to enjoy a best-of-both-worlds regret guarantee, and \texttt{AdaTINF} is the first algorithm that can adapt to both $\alpha$ and $\sigma$ to achieve optimal gap-indepedent regret bound in classical heavy-tailed stochastic MAB setting and our novel adversarial formulation.) <|cite_end|> <|cite_start|> (Reference: Heavy-tailed linear bandit with Huber regression: Linear bandit algorithms have been extensively studied and have shown successful in sequential decision tasks despite their simplicity. Many algorithms however work under the assumption that the reward is the sum of linear function of observed contexts and a sub-Gaussian error. In practical applications, errors can be heavy-tailed, especially in financial data. In such reward environments, algorithms designed for sub-Gaussian error may un-derexplore, resulting in suboptimal regret. In this paper, we relax the reward assumption and pro-pose a novel linear bandit algorithm which works well under heavy-tailed errors as well. The proposed algorithm utilizes Huber regression. 
When contexts are stochastic with positive definite co-variance matrix and the (1 + δ ) -th moment of the error is bounded by a constant, we show that the high-probability upper bound of the regret is O ( √ dT 11+ δ (log dT ) δ 1+ δ ) , where d is the dimen-sion of context variables, T is the time horizon, and δ ∈ (0 , 1] . This bound improves on the state-of-the-art regret bound of the Median of Means and Truncation algorithm by a factor of √ log T and √ d for the case where the time horizon T is unknown. We also remark that when δ = 1 , the order is the same as the regret bound of linear ban-dit algorithms designed for sub-Gaussian errors. We support our theoretical findings with synthetic experiments.) <|cite_end|>studying the heavy-tailed bandit, which is a special case of MDPs. Besides, <|cite_start|> (Reference: On proximal policy optimization’s heavy-tailed gradients: Modern policy gradient algorithms such as Proximal Policy Optimization (PPO) rely on an arsenal of heuristics, including loss clipping and gradient clipping, to ensure successful learning. These heuristics are reminiscent of techniques from robust statistics, commonly used for estimation in outlier-rich (``heavy-tailed'') regimes. In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function. We demonstrate that the gradients, especially for the actor network, exhibit pronounced heavy-tailedness and that it increases as the agent's policy diverges from the behavioral policy (i.e., as the agent goes further off policy). Further examination implicates the likelihood ratios and advantages in the surrogate reward as the main sources of the observed heavy-tailedness. We then highlight issues arising due to the heavy-tailed nature of the gradients. In this light, we study the effects of the standard PPO clipping heuristics, demonstrating that these tricks primarily serve to offset heavy-tailedness in gradients. Thus motivated, we propose incorporating GMOM, a high-dimensional robust estimator, into PPO as a substitute for three clipping tricks. Despite requiring less hyperparameter tuning, our method matches the performance of PPO (with all heuristics enabled) on a battery of MuJoCo continuous control tasks.) <|cite_end|>investigated the heavy-tailed gradients in the training of Proximal Policy Optimization. In contrast, our work addresses the heavy-tailed target distribution that emerges from data corruption.
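For intuition about the robust mean estimators referenced above (e.g., the median-of-means estimator from the heavy-tailed bandit literature), the following minimal sketch contrasts it with the empirical mean on a sample containing a few large outliers; the block count and the toy sample are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_of_means(x, n_blocks=5):
    """Shuffle the sample, split it into blocks, average each block, and
    return the median of the block means."""
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# Five large outliers inflate the plain mean far more than the
# median-of-means estimate (at most 5 of the 15 blocks are contaminated).
sample = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])
print(sample.mean(), median_of_means(sample, n_blocks=15))
```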
\paragraph{Huber Loss in RL.}
The Huber loss, known for its robustness to outliers, has been widely employed in the Deep Q-Network (DQN) literature, <|cite_start|> (Reference: Distributional Reinforcement Learning with Quantile Regression: In reinforcement learning an agent interacts with the environment by taking actions and observing the next state and reward. When sampled probabilistically, these state transitions, rewards, and actions can all induce randomness in the observed long-term return. Traditionally, reinforcement learning algorithms average over this randomness to estimate the value function. In this paper, we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean. That is, we examine methods of learning the value distribution instead of the value function. We give results that close a number of gaps between the theoretical and algorithmic results given by Bellemare, Dabney, and Munos (2017). First, we extend existing results to the approximate distribution setting. Second, we present a novel distributional reinforcement learning algorithm consistent with our theoretical formulation. Finally, we evaluate this new algorithm on the Atari 2600 games, observing that it significantly outperforms many of the recent improvements on DQN, including the related distributional algorithm C51.) <|cite_end|> <|cite_start|> (Reference: An Optimistic Perspective on Offline Reinforcement Learning: Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN replay dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN replay dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms trained on sufficiently large and diverse offline datasets can lead to high quality policies. The DQN replay dataset can serve as an offline RL benchmark and is open-sourced.) <|cite_end|> <|cite_start|> (Reference: Robust Losses for Learning Value Functions: Most value function learning algorithms in reinforcement learning are based on the mean squared (projected) Bellman error. However, squared errors are known to be sensitive to outliers, both skewing the solution of the objective and resulting in high-magnitude and high-variance gradients. To control these high-magnitude updates, typical strategies in RL involve clipping gradients, clipping rewards, rescaling rewards, or clipping errors. While these strategies appear to be related to robust losses -- like the Huber loss -- they are built on semi-gradient update rules which do not minimize a known loss. In this work, we build on recent insights reformulating squared Bellman errors as a saddlepoint optimization problem and propose a saddlepoint reformulation for a Huber Bellman error and Absolute Bellman error. 
We start from a formalization of robust losses, then derive sound gradient-based approaches to minimize these losses in both the online off-policy prediction and control settings. We characterize the solutions of the robust losses, providing insight into the problem settings where the robust losses define notably better solutions than the mean squared Bellman error. Finally, we show that the resulting gradient-based algorithms are more stable, for both prediction and control, with less sensitivity to meta-parameters.) <|cite_end|>. However, <|cite_start|> (Reference: Revisiting Rainbow: Promoting more Insightful and Inclusive Deep Reinforcement Learning Research: Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources, and those without. In this work we argue that, despite the community's emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.) <|cite_end|>reevaluated the Huber loss and discovered that it fails to outperform the MSE loss on MinAtar environments. In our study, we leverage the Huber loss to address the heavy-tailedness in Q targets caused by data corruption, and we demonstrate its remarkable effectiveness. <|paper_end|> | [
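As a concrete illustration of why the Huber penalty helps with such heavy-tailed errors, the toy sketch below (our own NumPy illustration, not taken from any of the cited works; the threshold delta=1.0 and the synthetic corruption level are arbitrary choices) compares the squared and Huber losses on TD-style errors containing a few corrupted samples.

\begin{verbatim}
import numpy as np

def squared_loss(err):
    return 0.5 * err ** 2

def huber_loss(err, delta=1.0):
    # 0.5*err^2 for |err| <= delta, delta*(|err| - 0.5*delta) otherwise.
    abs_err = np.abs(err)
    quadratic = np.minimum(abs_err, delta)
    linear = abs_err - quadratic
    return 0.5 * quadratic ** 2 + delta * linear

# Synthetic TD errors with a handful of corrupted (heavy-tailed) targets.
rng = np.random.default_rng(0)
td_err = rng.normal(0.0, 1.0, size=1000)
td_err[:10] += rng.normal(0.0, 50.0, size=10)   # simulated corruption

print("mean squared loss:", squared_loss(td_err).mean())
print("mean Huber loss  :", huber_loss(td_err).mean())
# The Huber penalty grows only linearly on the corrupted samples,
# so the average objective and its gradients remain bounded.
\end{verbatim}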
"<|reference_start|> {Fast Bellman Updates for Robust MDPs: We describe two efficient, and exact, algorithms for computing Bellman updates in robust Markov decision processes (MDPs). The first algorithm uses a homotopy continuation method to compute updates for L1-constrained s, a-rectangular ambiguity sets. It runs in quasi-linear time for plain L1 norms and also generalizes to weighted L1 norms. The second algorithm uses bisection to compute updates for robust MDPs with s-rectangular ambiguity sets. This algorithm, when combined with the homotopy method, also has a quasi-linear runtime. Unlike previous methods, our algorithms compute the primal solution in addition to the optimal objective value, which makes them useful in policy iteration methods. Our experimental results indicate that the proposed methods are over 1,000 times faster than Gurobi, a state-of-the-art commercial optimization package, for small instances, and the performance gap grows considerably with problem size. <|reference_end|>",
"<|reference_start|> A Model Selection Approach for Corruption Robust Reinforcement Learning: We develop a model selection approach to tackle reinforcement learning with adversarial corruption in both transition and reward. For finite-horizon tabular MDPs, without prior knowledge on the total amount of corruption, our algorithm achieves a regret bound of $\\widetilde{\\mathcal{O}}(\\min\\{\\frac{1}{\\Delta}, \\sqrt{T}\\}+C)$ where $T$ is the number of episodes, $C$ is the total amount of corruption, and $\\Delta$ is the reward gap between the best and the second-best policy. This is the first worst-case optimal bound achieved without knowledge of $C$, improving previous results of Lykouris et al. (2021); Chen et al. (2021); Wu et al. (2021). For finite-horizon linear MDPs, we develop a computationally efficient algorithm with a regret bound of $\\widetilde{\\mathcal{O}}(\\sqrt{(1+C)T})$, and another computationally inefficient one with $\\widetilde{\\mathcal{O}}(\\sqrt{T}+C)$, improving the result of Lykouris et al. (2021) and answering an open question by Zhang et al. (2021b). Finally, our model selection framework can be easily applied to other settings including linear bandits, linear contextual bandits, and MDPs with general function approximation, leading to several improved or new results. <|reference_end|>",
"<|reference_start|> Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage: In this paper, we study distributionally robust offline reinforcement learning (robust offline RL), which seeks to find an optimal policy purely from an offline dataset that can perform well in perturbed environments. In specific, we propose a generic algorithm framework called Doubly Pessimistic Model-based Policy Optimization ($P^2MPO$), which features a novel combination of a flexible model estimation subroutine and a doubly pessimistic policy optimization step. Notably, the double pessimism principle is crucial to overcome the distributional shifts incurred by (i) the mismatch between the behavior policy and the target policies; and (ii) the perturbation of the nominal model. Under certain accuracy conditions on the model estimation subroutine, we prove that $P^2MPO$ is sample-efficient with robust partial coverage data, which only requires the offline data to have good coverage of the distributions induced by the optimal robust policy and the perturbed models around the nominal model. By tailoring specific model estimation subroutines for concrete examples of RMDPs, including tabular RMDPs, factored RMDPs, kernel and neural RMDPs, we prove that $P^2MPO$ enjoys a $\\tilde{\\mathcal{O}}(n^{-1/2})$ convergence rate, where $n$ is the dataset size. We highlight that all these examples, except tabular RMDPs, are first identified and proven tractable by this work. Furthermore, we continue our study of robust offline RL in the robust Markov games (RMGs). By extending the double pessimism principle identified for single-agent RMDPs, we propose another algorithm framework that can efficiently find the robust Nash equilibria among players using only robust unilateral (partial) coverage data. To our best knowledge, this work proposes the first general learning principle -- double pessimism -- for robust offline RL and shows that it is provably efficient with general function approximation. <|reference_end|>",
"<|reference_start|> Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs: In linear stochastic bandits, it is commonly assumed that payoffs are with sub-Gaussian noises. In this paper, under a weaker assumption on noises, we study the problem of \\underline{lin}ear stochastic {\\underline b}andits with h{\\underline e}avy-{\\underline t}ailed payoffs (LinBET), where the distributions have finite moments of order $1+\\epsilon$, for some $\\epsilon\\in (0,1]$. We rigorously analyze the regret lower bound of LinBET as $\\Omega(T^{\\frac{1}{1+\\epsilon}})$, implying that finite moments of order 2 (i.e., finite variances) yield the bound of $\\Omega(\\sqrt{T})$, with $T$ being the total number of rounds to play bandits. The provided lower bound also indicates that the state-of-the-art algorithms for LinBET are far from optimal. By adopting median of means with a well-designed allocation of decisions and truncation based on historical information, we develop two novel bandit algorithms, where the regret upper bounds match the lower bound up to polylogarithmic factors. To the best of our knowledge, we are the first to solve LinBET optimally in the sense of the polynomial order on $T$. Our proposed algorithms are evaluated based on synthetic datasets, and outperform the state-of-the-art results. <|reference_end|>"
] | [
7,
19,
26,
39
] | {"<|multi_cite_6_1|>": "arxiv-263399", "<|multi_cite_6_2|>": "arxiv-183660", "<|multi_cite_6_3|>": "arxiv-207689", "<|multi_cite_6_4|>": "arxiv-270338", "<|cite_7|>": "arxiv-263399", "<|multi_cite_8_1|>": "ss-681884", "<|multi_cite_8_2|>": "arxiv-183660", "<|multi_cite_8_3|>": "arxiv-347907", "<|multi_cite_8_4|>": "arxiv-373552", "<|multi_cite_9_1|>": "arxiv-270338", "<|multi_cite_9_2|>": "arxiv-371419", "<|multi_cite_9_3|>": "arxiv-401121", "<|multi_cite_9_4|>": "arxiv-422709", "<|multi_cite_10_1|>": "ss-1351970", "<|multi_cite_10_2|>": "arxiv-439626", "<|multi_cite_10_3|>": "arxiv-425146", "<|multi_cite_10_4|>": "arxiv-439375", "<|multi_cite_10_5|>": "arxiv-505405", "<|multi_cite_1_1|>": "arxiv-347783", "<|multi_cite_1_2|>": "arxiv-405938", "<|cite_11|>": "arxiv-371419", "<|cite_12|>": "arxiv-422709", "<|cite_13|>": "arxiv-373552", "<|multi_cite_14_1|>": "ss-681884", "<|multi_cite_14_2|>": "arxiv-226496", "<|multi_cite_14_3|>": "arxiv-373552", "<|cite_15|>": "arxiv-263399", "<|multi_cite_16_1|>": "ss-681884", "<|multi_cite_16_2|>": "arxiv-183660", "<|multi_cite_16_3|>": "arxiv-226496", "<|multi_cite_16_4|>": "arxiv-347907", "<|multi_cite_16_5|>": "arxiv-373552", "<|multi_cite_16_6|>": "arxiv-345243", "<|multi_cite_16_7|>": "arxiv-388765", "<|multi_cite_16_8|>": "arxiv-492628", "<|multi_cite_17_1|>": "arxiv-270338", "<|multi_cite_17_2|>": "arxiv-267893", "<|multi_cite_17_3|>": "arxiv-371419", "<|multi_cite_17_4|>": "arxiv-401121", "<|multi_cite_17_5|>": "arxiv-425146", "<|multi_cite_17_6|>": "arxiv-422709", "<|multi_cite_17_7|>": "arxiv-478048", "<|multi_cite_18_1|>": "arxiv-371419", "<|multi_cite_18_2|>": "arxiv-422709", "<|cite_19|>": "arxiv-388765", "<|multi_cite_20_1|>": "ss-681884", "<|multi_cite_20_2|>": "arxiv-272396", "<|multi_cite_20_3|>": "arxiv-373552", "<|cite_21|>": "arxiv-510687", "<|multi_cite_22_1|>": "arxiv-312718", "<|multi_cite_22_2|>": "arxiv-329093", "<|multi_cite_22_3|>": "arxiv-361892", "<|multi_cite_22_4|>": "arxiv-347944", "<|multi_cite_22_5|>": "arxiv-346966", "<|multi_cite_22_6|>": "arxiv-354826", "<|multi_cite_22_7|>": "arxiv-402208", "<|multi_cite_22_8|>": "arxiv-399337", "<|multi_cite_22_9|>": "ss-826974", "<|multi_cite_22_10|>": "arxiv-423594", "<|multi_cite_22_11|>": "arxiv-412448", "<|multi_cite_22_12|>": "arxiv-397079", "<|multi_cite_23_1|>": "ss-733119", "<|multi_cite_23_2|>": "ss-1325515", "<|multi_cite_23_3|>": "ss-1898584", "<|multi_cite_23_4|>": "ss-1746752", "<|cite_35|>": "arxiv-457791", "<|multi_cite_24_1|>": "arxiv-254628", "<|multi_cite_24_2|>": "arxiv-316235", "<|multi_cite_25_1|>": "arxiv-118553", "<|multi_cite_25_2|>": "arxiv-188972", "<|multi_cite_26_1|>": "arxiv-174753", "<|multi_cite_26_2|>": "arxiv-315679", "<|cite_27|>": "arxiv-210291", "<|multi_cite_28_1|>": "arxiv-235227", "<|multi_cite_28_2|>": "ss-1837349", "<|multi_cite_28_3|>": "arxiv-372322", "<|multi_cite_28_4|>": "arxiv-468975", "<|multi_cite_29_1|>": "ss-1351970", "<|multi_cite_29_2|>": "arxiv-439626", "<|multi_cite_29_3|>": "arxiv-457791", "<|multi_cite_29_4|>": "arxiv-425146", "<|multi_cite_29_5|>": "arxiv-439375", "<|multi_cite_29_6|>": "arxiv-505405", "<|cite_36|>": "arxiv-513062", "<|cite_37|>": "arxiv-405938", "<|cite_2|>": "arxiv-347783", "<|cite_30|>": "arxiv-551525", "<|cite_31|>": "arxiv-395666", "<|multi_cite_32_1|>": "arxiv-189049", "<|multi_cite_32_2|>": "ss-1292541", "<|multi_cite_32_3|>": "arxiv-297636", "<|multi_cite_32_4|>": "ss-1184657", "<|multi_cite_3_1|>": "arxiv-323548", "<|multi_cite_3_2|>": "arxiv-514911", "<|multi_cite_33_1|>": "arxiv-35910", 
"<|multi_cite_33_2|>": "arxiv-177591", "<|multi_cite_33_3|>": "arxiv-262036", "<|multi_cite_33_4|>": "arxiv-377015", "<|multi_cite_33_5|>": "arxiv-395349", "<|multi_cite_33_6|>": "ss-826975", "<|cite_4|>": "ss-826973", "<|multi_cite_34_1|>": "arxiv-138344", "<|multi_cite_34_2|>": "arxiv-213802", "<|multi_cite_34_3|>": "arxiv-420153", "<|cite_5|>": "arxiv-306620"} |
<|paper_start|> Title: A Survey of Pansharpening Methods with A New Band-Decoupled Variational Model
Abstract: A Survey of Pansharpening Methods with A New Band-Decoupled Variational Model: Most satellites decouple the acquisition of a panchromatic image at high spatial resolution from the acquisition of a multispectral image at lower spatial resolution. Pansharpening is a fusion technique used to increase the spatial resolution of the multispectral data while simultaneously preserving its spectral information. In this paper, we consider pansharpening as an optimization problem minimizing a cost function with a nonlocal regularization term. The energy functional to be minimized decouples for each band, thus permitting its application to misregistered spectral components. This requirement is achieved by dropping the commonly used assumption that relates the spectral and panchromatic modalities by a linear transformation. Instead, a new constraint that preserves the radiometric ratio between the panchromatic and each spectral component is introduced. An exhaustive performance comparison of the proposed fusion method with several classical and state-of-the-art pansharpening techniques illustrates its superiority in preserving spatial details, reducing color distortions, and avoiding the creation of aliasing artifacts.
Introduction
Many Earth observation satellites provide continuously growing quantities of remote sensing images useful for a wide range of both scientific and everyday tasks. Most of them, such as Ikonos, Landsat, Quickbird, and Pl{\'e}iades, decouple the acquisition of a panchromatic image at high spatial resolution from the acquisition of a multispectral image at lower spatial resolution. The wide range of wavelengths acquired by the panchromatic represents an accurate description of the geometry of the image, while each spectral component covers a reduced bandwidth range, leading to a detailed color description. Spectral sensors typically use larger pixel sizes, thus increasing the signal-to-noise ratio of the spectral images and reducing the transmission cost. As an example, Figure \ref{fig:datasetPleiades} displays the data captured by the Pl{\'e}iades satellite and furnished to us by the {\it Centre National d'{\'E}tudes Spatiales} (CNES). In this setting, pansharpening is the fusion process by which a high-resolution multispectral image is inferred.
\begin{figure}[t!]
\centering
\begin{subfigure}[c]{0.4\textwidth}
\includegraphics[width=1\textwidth]{pleiades_pan.png}
\caption*{Panchromatic}
\end{subfigure}
\hskip 0.01in
\begin{subfigure}[c]{0.4\textwidth}
\renewcommand{\arraystretch}{0.2}
\begin{tabular}{c@{\hskip 0.01in}c}
\includegraphics[width=0.498\textwidth]{pleiades_red.png} &
\includegraphics[width=0.498\textwidth]{pleiades_green.png} \\
\includegraphics[width=0.498\textwidth]{pleiades_blue.png} &
\includegraphics[width=0.498\textwidth]{pleiades_nir.png}
\\
\end{tabular}
\caption*{\hspace{0.25cm} Red, green, blue, near-infrared}
\end{subfigure}
\caption{Pl{\'e}iades scene of Toulouse (France) provided by Centre National d'{\'E}tudes Spatiales (CNES). The spatial resolution is $70$ cm per pixel for the panchromatic and $2.8$ m per pixel for each blue, green, red, and near-infrared band.}
\label{fig:datasetPleiades}
\end{figure}
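As a rough sketch of this acquisition setup (a hypothetical NumPy/SciPy example with random arrays standing in for real Pl{\'e}iades data), the panchromatic grid is four times finer than the spectral one (70 cm versus 2.8 m per pixel), so the spectral bands are commonly upsampled to the panchromatic grid, for instance with a cubic spline, before any fusion is attempted.

\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom

# Hypothetical sizes mimicking the Pleiades ratio: PAN is 4x finer than MS.
pan = np.random.rand(1024, 1024)       # panchromatic, 70 cm per pixel
ms  = np.random.rand(4, 256, 256)      # blue, green, red, NIR at 2.8 m per pixel

ratio = pan.shape[0] // ms.shape[1]    # = 4 for Pleiades-like data

# Cubic-spline upsampling (order=3) of every band to the panchromatic grid.
ms_up = np.stack([zoom(band, ratio, order=3) for band in ms])

print(ms_up.shape)                     # (4, 1024, 1024): ready for fusion
\end{verbatim}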
In remote sensing, high spatial resolution is necessary to correctly detect shapes, edges and, in general, geometric structures, but different types of land are better classified using images with multiple spectral bands. Considering this trade-off, state-of-the-art techniques <|cite_start|> (Reference: Synthesis of Multispectral Images to High Spatial Resolution: A Critical Review of Fusion Methods Based on Remote Sensing Physics: Our framework is the synthesis of multispectral images (MS) at higher spatial resolution, which should be as close as possible to those that would have been acquired by the corresponding sensors if they had this high resolution. This synthesis is performed with the help of a high spatial but low spectral resolution image: the panchromatic (Pan) image. The fusion of the Pan and MS images is classically referred as pan-sharpening. A fused product reaches good quality only if the characteristics and differences between input images are taken into account. Dissimilarities existing between these two data sets originate from two causes-different times and different spectral bands of acquisition. Remote sensing physics should be carefully considered while designing the fusion process. Because of the complexity of physics and the large number of unknowns, authors are led to make assumptions to drive their development. Weaknesses and strengths of each reported method are raised and confronted to these physical constraints. The conclusion of this critical survey of literature is that the choice in the assumptions for the development of a method is crucial, with the risk to drastically weaken fusion performance. It is also shown that the Amelioration de la Resolution Spatiale par Injection de Structures concept prevents from introducing spectral distortion into fused products and offers a reliable framework for further developments.) <|cite_end|> <|cite_start|> (Reference: Structuring contemporary remote sensing image fusion: The exploitation of multi-sensor images at pixel level is a widely implemented research field in Earth observation. In this context, image fusion plays an important role since it effectively combines complementary image content to enhance information contained in the individual datasets. This article presents an overview of the existing fusion techniques and their achievements for Earth scientists. This research started off with the compilation of a database on remote sensing image fusion journal publications. Research results were exploited, grouping the literature into different aspects of relevance. Six categories of information have been built according to the journal, the application, sensors that provided the images used in the case study, applied fusion techniques, areas of achievement, and on-going research highlighting unresolved questions and current science. This resulted in an overview on the categorisation of image fusion techniques, explanation of the various approaches used within a certain category, and description of particularities when dealing with the fusion of optical and radar imagery. Even though many researchers intend to find the best algorithm, there is a greater need to define an appropriate workflow prior to processing the imagery with the knowledge in all related fields, that is, remote sensing image fusion and the desired application to address the different aspects of error propagation.) 
<|cite_end|> <|cite_start|> (Reference: A Critical Comparison Among Pansharpening Algorithms: Pansharpening aims at fusing a multispectral and a panchromatic image, featuring the result of the processing with the spectral resolution of the former and the spatial resolution of the latter. In the last decades, many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, available image data sets for benchmarking, and standardized implementations of the algorithms makes a thorough evaluation and comparison of the different pansharpening techniques difficult to achieve. In this paper, the authors attempt to fill this gap by providing a critical description and extensive comparisons of some of the main state-of-the-art pansharpening methods. In greater details, several pansharpening algorithms belonging to the component substitution or multiresolution analysis families are considered. Such techniques are evaluated through the two main protocols for the assessment of pansharpening results, i.e., based on the full- and reduced-resolution validations. Five data sets acquired by different satellites allow for a detailed comparison of the algorithms, characterization of their performances with respect to the different instruments, and consistency of the two validation procedures. In addition, the implementation of all the pansharpening techniques considered in this paper and the framework used for running the simulations, comprising the two validation procedures and the main assessment indexes, are collected in a MATLAB toolbox that is made available to the community.) <|cite_end|> aim at increasing the spatial resolution of the multispectral data by using the high frequencies of the companion panchromatic. In the literature, pansharpening methods are mainly labeled into two main classes, namely component substitution (CS) and multiresolution analysis (MRA). The former relies on the use of a color decorrelation transform that converts the upsampled low-resolution channels into a new color system that separates the spatial and the spectral details. Fusion occurs by partially or totally substituting the component which is supposed to contain the spatial geometry by the panchromatic and applying the transformation back. Examples of CS methods include Intensity-Hue-Saturation (IHS) transform <|cite_start|> (Reference: The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data: Several techniques have been developed to merge SPOT 10-m resolution panchromatic data with simultaneously-acquired 20-m resolution multispectral data. Normally, the objective of these procedures is to create a composite image of enhanced interpretability. That is, the effectively 10-m resolution multispectral images produced through the various merging methods contain the high resolution information of the respective panchromatic images while maintaining the basic color content of the original multispectral data. The utility of intensity-hue-saturation (IHS) transformation procedures for creating such composites under varying land cover conditions is illustrated. Correlation analysis of original multispectral image data and their counterparts in IHS composites indicates the need to consider carefully the potential influence alternative implementations of IHS procedures might have on the spectral characteristics of the resulting multiresolution products. 
The use of a weighted average of panchromatic and near-infrared data as a substitute for intensity in merged images was found to be particularly effective in this study. This approach has been used in the production of an experimental SPOT image map of Madison, Wisconsin, and vicinity.) <|cite_end|> <|cite_start|> (Reference: A new look at IHS-like image fusion methods: ) <|cite_end|> <|cite_start|> (Reference: A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery: Among various image fusion methods, intensity-hue-saturation (IHS) technique is capable of quickly merging the massive volumes of data. For IKONOS imagery, IHS can yield satisfactory "spatial" enhancement but may introduce "spectral" distortion, appearing as a change in colors between compositions of resampled and fused multispectral bands. To solve this problem, a fast IHS fusion technique with spectral adjustment is presented. The experimental results demonstrate that the proposed approach can provide better performance than the original IHS method, both in processing speed and image quality.) <|cite_end|>, Principal-Component-Analysis (PCA) transform <|cite_start|> (Reference: Extracting spectral contrast in landsat thematic mapper image data using selective principal component analysis: ) <|cite_end|> <|cite_start|> (Reference: Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic: The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thermatic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results) <|cite_end|>, Gram-Schmidt (GS) orthonormalization <|cite_start|> (Reference: {Improving component substitution pansharpening through multivariate regression of MS+Pan data: In this paper, multivariate regression is adopted to improve spectral quality, without diminishing spatial quality, in image fusion methods based on the well-established component substitution (CS) approach. A general scheme that is capable of modeling any CS image fusion method is presented and discussed. According to this scheme, a generalized intensity component is defined as the weighted average of the multispectral (MS) bands. The weights are obtained as regression coefficients between the MS bands and the spatially degraded panchromatic (Pan) image, with the aim of capturing the spectral responses of the sensors. Once it has been integrated into the Gram-Schmidt spectral-sharpening method, which is implemented in environment for visualizing images (ENVI) program, and into the generalized intensity-hue-saturation fusion method, the proposed preprocessing module allows the production of fused images of the same spatial sharpness but of increased spectral quality with respect to the standard implementations. In addition, quantitative scores carried out on spatially degraded data clearly confirm the superiority of the enhanced methods over their baselines.) 
<|cite_end|>, Brovey's <|cite_start|> (Reference: Image sharpening for mixed spatial and spectral resolution satellite systems: Two methods of image sharpening (reconstruction) are compared. The first, a spatial filtering technique, extrapolates edge information from a high spatial resolution panchromatic band at 10 meters and adds it to the low spatial resolution narrow spectral bands. The second method, a color normalizing technique, is based on the ability to separate image hue and brightness components in spectral data. Using both techniques, multispectral images are sharpened from 30, 50, 70, and 90 meter resolutions. Error rates are calculated for the two methods and all sharpened resolutions. The results indicate that the color normalizing method is superior to the spatial filtering technique.) <|cite_end|> <|cite_start|> (Reference: Color enhancement of highly correlated images. II. Channel ratio and “chromaticity” transformation techniques: ) <|cite_end|>, band-dependent spatial detail (BDSD) <|cite_start|> (Reference: Optimal mmse pan sharpening of very high resolution multispectral images: In this paper, we propose an optimum algorithm, in the minimum mean-square-error (mmse) sense, for panchromatic (Pan) sharpening of very high resolution multispectral (MS) images. The solution minimizes the squared error between the original MS image and the fusion result obtained by spatially enhancing a degraded version of the MS image through a degraded version, by the same scale factor, of the Pan image. The fusion result is also optimal at full scale under the assumption of invariance of the fusion parameters across spatial scales. The following two versions of the algorithm are presented: a local mmse (lmmse) solution and a fast implementation which globally optimizes the fusion parameters with a moderate performance loss with respect to the lmmse version. We show that the proposed method is computationally practical, even in the case of local optimization, and it outperforms the best state-of-the-art Pan-sharpening algorithms, as resulted from the IEEE Data Fusion Contest 2006, on true Ikonos and QuickBird data and on simulated Pleiades data.) <|cite_end|>, and partial replacement adaptive CS (PRACS) <|cite_start|> (Reference: A new adaptive component-substitution based satellite image fusion by using partial replacement: Preservation of spectral information and enhancement of spatial resolution are regarded as important issues in remote sensing satellite image fusion. In previous research, various algorithms have been proposed. Although they have been successful, there are still some margins of spatial and spectral quality that can be improved. In addition, a new method that can be used for various types of sensors is required. In this paper, a new adaptive fusion method based on component substitution is proposed to merge a high-spatial-resolution panchromatic (PAN) image with a multispectral image. This method generates high-/low-resolution synthetic component images by partial replacement and uses statistical ratio-based high-frequency injection. Various remote sensing satellite images, such as IKONOS-2, QuickBird, LANDSAT ETM+, and SPOT-5, were employed in the evaluation. Experiments showed that this approach can resolve spectral distortion problems and successfully conserve the spatial information of a PAN image. Thus, the fused image obtained from the proposed method gave higher fusion quality than the images from some other methods. 
In addition, the proposed method worked efficiently with the different sensors considered in the evaluation.) <|cite_end|>. On the contrary, MRA-based approaches inject the high frequencies of the panchromatic into the upsampled spectral components through a multiresolution decomposition. The fusion techniques from this family mainly differ in how the low-pass version of the panchromatic is generated at each scale. Laplacian pyramid <|cite_start|> (Reference: Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis: This paper compares two general and formal solutions to the problem of fusion of multispectral images with high-resolution panchromatic observations. The former exploits the undecimated discrete wavelet transform, which is an octave bandpass representation achieved from a conventional discrete wavelet transform by omitting all decimators and upsampling the wavelet filter bank. The latter relies on the generalized Laplacian pyramid, which is another oversampled structure obtained by recursively subtracting from an image an expanded decimated lowpass version. Both the methods selectively perform spatial-frequencies spectrum substitution from an image to another. In both schemes, context dependency is exploited by thresholding the local correlation coefficient between the images to be merged, to avoid injection of spatial details that are not likely to occur in the target image. Unlike other multiscale fusion schemes, both the present decompositions are not critically subsampled, thus avoiding possible impairments in the fused images, due to missing cancellation of aliasing terms. Results are presented and discussed on SPOT data.) <|cite_end|> <|cite_start|> (Reference: MTF-tailored multiscale fusion of high-resolution MS and pan imagery: This work presents a multiresolution framework for merging a multispectral image having an arbitrary number of bands with a higher-resolution panchromatic observation. The fusion method relies on the generalized Laplacian pyramid (GLP), which is a multiscale, oversampled structure. The goal is to selectively perform injection of spatial frequencies from an image to another with the constraint of thoroughly retaining the spectral information of the coarser data. The novel idea is that a model of the modulation transfer functions (MTF) of the multispectral scanner is exploited to design the GLP reduction filter. Thus, the interband structure model (IBSM), which is calculated at the coarser scale, where both MS and PAN data are available, can be extended to the finer scale, without the drawback of the poor enhancement occurring when MTFs are assumed to be ideal filters. Experiments carried out on QuickBird data demonstrate that a superior spatial enhancement, besides the spectral quality typical of injection methods, is achieved by means of the MTF-adjusted fusion.) <|cite_end|> <|cite_start|> (Reference: Fast and Efficient Panchromatic Sharpening: Certain sensors such as IKONOS produce panchromatic and multispectral (MS) images at different spatial resolutions. Several efforts have been made to increase the resolution of these MS images using panchromatic image information. In this paper, we present a fast and efficient panchromatic sharpening method that accurately estimates missing high-frequency components. We also use a postprocessing technique to correct color distortion. 
Experimental results show that the proposed method produced high-quality images and outperformed existing panchromatic sharpening methods in terms of objective quality measures such as universal image quality index, Q4, relative average spectral error, Erreur Relative Globale Adimensionnelle de SynthE¿se, and correlation.) <|cite_end|>, contourlet transform <|cite_start|> (Reference: An efficient pan-sharpening method via a combined adaptive-{{PCA}} approach and contourlets: High correlation among the neighboring pixels both spatially and spectrally in a multispectral image makes it necessary to use an efficient data transformation approach before performing pan-sharpening. Wavelets and principal component analysis (PCA) methods have been a popular choice for spatial and spectral transformations, respectively. Current PCA-based pan-sharpening methods make an assumption that the first principal component (PC) of high variance is an ideal choice for replacing or injecting it with high spatial details from the high-resolution histogram-matched panchromatic (PAN) image. This paper presents a combined adaptive PCA-contourlet approach for pan-sharpening, where the adaptive PCA is used to reduce the spectral distortion and the use of nonsubsampled contourlets for spatial transformation in pan-sharpening is incorporated to overcome the limitation of the wavelets in representing the directional information efficiently and capturing intrinsic geometrical structures of the objects. The efficiency of the presented method is tested by performing pan-sharpening of the high-resolution (IKONOS and QuickBird) and the medium-resolution (Landsat-7 Enhanced Thematic Mapper Plus) datasets. The evaluation of the pan-sharpened images using global validation indexes reveal that the adaptive PCA approach helps reducing the spectral distortion, and its merger with contourlets provides better fusion results.) <|cite_end|>, curvelet transform <|cite_start|> (Reference: Remote sensing image fusion using the curvelet transform: ) <|cite_end|>, discrete wavelet transform <|cite_start|> (Reference: A theory for multiresolution signal decomposition: the wavelet representation: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2/sup j+1/ and 2/sup j/ (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L/sup 2/(R/sup n/), the vector space of measurable, square-integrable n-dimensional functions. In L/sup 2/(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi (x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed. >) <|cite_end|> <|cite_start|> (Reference: The discrete wavelet transform: wedding the a trous and Mallat algorithms: Two separately motivated implementations of the wavelet transform are brought together. 
It is observed that these algorithms are both special cases of a single filter bank structure, the discrete wavelet transform, the behavior of which is governed by the choice of filters. In fact, the a trous algorithm is more properly viewed as a nonorthonormal multiresolution algorithm for which the discrete wavelet transform is exact. Moreover, it is shown that the commonly used Lagrange a trous filters are in one-to-one correspondence with the convolutional squares of the Daubechies filters for orthonormal wavelets of compact support. A systematic framework for the discrete wavelet transform is provided, and conditions are derived under which it computes the continuous wavelet transform exactly. Suitable filter constraints for finite energy and boundedness of the discrete transform are also derived. Relevant signal processing parameters are examined, and it is observed that orthonormality is balanced by restrictions on resolution. >) <|cite_end|> <|cite_start|> (Reference: Image merging and data fusion by means of the discrete two-dimensional wavelet transform: A new technique is developed for the merging and data fusion of two images. Two spatially registered images with differing spatial resolutions and color content are merged by combining multiresolution wavelet-decomposition components from each and then reconstructing the merged image by means of the inverse wavelet transform. The wavelet merger can employ a variety of wavelet bases, but in presentation of the concept, simple orthonormal sets—Haar and Daubechies wavelets—are explored. The wavelet technique is compared with the intensity–hue–saturation merging technique by means of multispectral and panchromatic test images. The results of the comparison show the wavelet merger performing better in combining and preserving spectral–spatial information for the test images.) <|cite_end|> <|cite_start|> (Reference: Multiresolution-based image fusion with additive wavelet decomposition: The standard data fusion methods may not be satisfactory to merge a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method presented consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. They have studied several possibilities concluding that the method which produces the best results consists in adding the high order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L=(R+G+B)/3) of the multispectral image. The method is, thus, an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the "a trous" algorithm which allows the use of a dyadic wavelet to merge nondyadic data in a simple and efficient scheme. They used the method to merge SPOT and LANDSAT/sup TM/ images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.) <|cite_end|> <|cite_start|> (Reference: {Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation: In various applications of remote sensing, when high spatial resolution is required in addition with classification results, sensorfusion is a solution. 
From a set of images with different spatial and spectral resolutions, the aim is to synthesize images with the highest spatial resolution available in the set and with an appropriate spectral content. Several sensor fusion methods exist; most of them improve the spatial resolution but provide poor quality of the spectral content of the resulting image. Based on a multiresolution modeling of the information, the ARsIs concept [from its French name "Am6lioration de la R6solution Spatiale par Injection de Structures") was designed with the aim of improving the spatial resolution together with a high quality in the spectral content ofthe synthesized images. The general case for the application of this concept is described. A quantitative comparison of all presented methods is achieved for a SPOT image. Another example of the fusion of SPOTXS (20-m) and KVR-1000 (2-m) images is given. Practical information for the implementation of the wavelet transform, the multiresolution analysis, and the ARSIS concept by practitioners is given with particular relevance to SPOT and Landsat imagew. lntroductlon) <|cite_end|> <|cite_start|> (Reference: Introduction of sensor spectral response into image fusion methods.
Application to wavelet-based methods: Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by multispectral sensors are injected into them, and physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as resolution overinjection images and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.) <|cite_end|> <|cite_start|> (Reference: Contrast and error-based fusion schemes for multispectral image pansharpening: The pansharpening process has the purpose of building a high-resolution multispectral image by fusing low spatial resolution multispectral and high-resolution panchromatic observations. A very credited method to pursue this goal relies upon the injection of details extracted from the panchromatic image into an upsampled version of the low-resolution multispectral image. In this letter, we compare two different injection methodologies and motivate the superiority of contrast-based methods both by physical consideration and by numerical tests carried out on remotely sensed data acquired by IKONOS and Quickbird sensors.) <|cite_end|>, high-pass filtering (HPF) <|cite_start|> (Reference: Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic: The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thermatic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results) <|cite_end|> <|cite_start|> (Reference: Fusion of multispectral and panchromatic images by local mean and variance matching filtering techniques: Since the advent of high spatial resolution satellite images, the merging of multiresolution images has been an important field of research. Many methods have been developed in the last few years producing good quality merged images characterised by a high spatial information content, but with significantly altered spectral information content. The merging method applied in this case study tends to preserve this spectral information by producing new channels highly correlated with the original ones. 
The method analyses local image statistics and then matches the local histograms of the two images to be merged by applying mean or mean-variance matching normalisation functions. In this article two different sets of panchromatic and multispectral images with resolution ratios of 4 and 5 are fused and the quality of the result in regard of spectral information preservation is assessed.) <|cite_end|> <|cite_start|> (Reference: Remote sensing: models and methods for image processing: The Nature of Remote Sensing: Introduction. Remote Sensing. Information Extraction from Remote-Sensing Images. Spectral Factors in Remote Sensing. Spectral Signatures. Remote-Sensing Systems. Optical Sensors. Temporal Characteristics. Image Display Systems. Data Systems. Summary. Exercises. References. Optical Radiation Models: Introduction. Visible to Short Wave Infrared Region. Solar Radiation. Radiation Components. Surface-Reflected. Unscattered Component. Surface-Reflected. Atmosphere-Scattered Component. Path-Scattered Component. Total At-Sensor. Solar Radiance. Image Examples in the Solar Region. Terrain Shading. Shadowing. Atmospheric Correction. Midwave to Thermal Infrared Region. Thermal Radiation. Radiation Components. Surface-Emitted Component. Surface-Reflected. Atmosphere-Emitted Component. Path-Emitted Component. Total At-Sensor. Emitted Radiance. Total Solar and Thermal Upwelling Radiance. Image Examples in the Thermal Region. Summary. Exercises. References. Sensor Models: Introduction. Overall Sensor Model. Resolution. The Instrument Response. Spatial Resolution. Spectral Resolution. Spectral Response. Spatial Response. Optical PSFopt. Image Motion PSFIM. Detector PSFdet. Electronics PSFel. Net PSFnet. Comparison of Sensor PSFs. PSF Summary for TM. Imaging System Simulation. Amplification. Sampling and Quantization. Simplified Sensor Model. Geometric Distortion. Orbit Models. Platform Attitude Models. Scanner Models. Earth Model. Line and Whiskbroom ScanGeometry. Pushbroom Scan Geometry. Topographic Distortion. Summary. Exercises. References. Data Models: Introduction. A Word on Notation. Univariate Image Statistics. Histogram. Normal Distribution. Cumulative Histogram. Statistical Parameters. Multivariate Image Statistics. Reduction to Univariate Statistics. Noise Models. Statistical Measures of Image Quality. Contrast. Modulation. Signal-to-Noise Ratio (SNR). Noise Equivalent Signal. Spatial Statistics. Visualization of Spatial Covariance. Covariance with Semivariogram. Separability and Anisotropy. Power Spectral Density. Co-occurrence Matrix. Fractal Geometry. Topographic and Sensor Effects. Topography and Spectral Statistics. Sensor Characteristics and Spectral Stastistics. Sensor Characteristics and Spectral Scattergrams. Summary. Exercises. References. Spectral Transforms: Introduction. Feature Space. Multispectral Ratios. Vegetation Indexes. Image Examples. Principal Components. Standardized Principal Components (SPC) Transform. Maximum Noise Fraction (MNF) Transform. Tasseled Cap Tranformation. Contrast Enhancement. Transformations Based on Global Statistics. Linear Transformations. Nonlinear Transformations. Normalization Stretch. Reference Stretch. Thresholding. Adaptive Transformation. Color Image Contrast Enhancement. Min-max Stretch. Normalization Stretch. Decorrelation Stretch. Color Spacer Transformations. Summary. Exercises. References. Spatial Transforms: Introduction. An Image Model for Spatial Filtering. Convolution Filters. Low Pass and High Pass Filters. High Boost Filters. 
Directional Filters. The Border Region. Characterization of Filtered Images. The Box Filter Algorithm. Cascaded Linear Filters. Statistical Filters. Gradient Filters. Fourier Synthesis. Discrete Fourier Transforms in 2-D. The Fourier Components. Filtering with the Fourier Transform. Transfer Functions. The Power Spectrum. Scale Space Transforms. Image Resolution Pyramids. Zero-Crossing Filters. Laplacian-of-Gaussian (LoG) Filters. Difference-of-Gaussians (DoG) Filters.Wavelet Transforms. Summary. Exercises. References. Correction and Calibration: Introduction. Noise Correction. Global Noise. Sigma Filter. Nagao-Matsuyama Filter. Local Noise. Periodic Noise. Distriping 359. Global,Linear Detector Matching. Nonlinear Detector Matching. Statistical Modification to Linear and Nonlinear Detector. Matching. Spatial Filtering Approaches. Radiometric Calibration. Sensor Calibration. Atmospheric Correction. Solar and Topographic Correction. Image Examples. Calibration and Normalization of Hyperspectral Imagery. AVIRIS Examples. Distortion Correction. Polynomial Distortion Models. Ground Control Points (GCPs). Coordinate Transformation. Map Projections. Resampling. Summary. Exercises References. Registration and Image Fusion: Introduction. What is Registration? Automated GCP Location. Area Correlation. Other Spatial Features. Orthrectification. Low-Resolution DEM. High-Resolution DEM. Hierarchical Warp Stereo. Multi-Image Fusion. Spatial Domain Fusion. High Frequency Modulation. Spectral Domain Fusion. Fusion Image Examples. Summary. Exercises. References. Thematic Classification: Introduction. The Importance of Image Scale. The Notion of Similarity. Hard Versus Soft Classification. Training the Classifier. Supervised Training. Unsupervised Training. K-Means Clustering Algorithm. Clustering Examples. Hybrid Supervised/Unsupervised Training. Non-Parametric Classification Algorithms. Level-Slice. Nearest-Mean. Artificial Neural Networks (ANNs). Back-Propagation Algorithm. Nonparametric Classification Examples. Parametric Classification Algorithms. Estimation of Model-Parameters. Discriminant Functions. The Normal Distribution Model. Relation to the Nearest-Mean Classifier. Supervised Classification Examples and Comparison to Nonparametric Classifiers. Segmentation. Region Growing. Region Labeling. Sub-Pixel Classification. The Linear Mixing Model. Unmixing Model. Hyperspectral Image Analysis. Visualization of the Image Cube. Feature Extraction. Image Residuals. Pre-Classification Processing and Feature Extraction. Classification Algorithms. Exercises. Error Analysis. Multitemporal Images. Summary. References. Index.) <|cite_end|> <|cite_start|> (Reference: Pansharpening Quality Assessment Using the Modulation Transfer Functions of Instruments: Quality assessment of pansharpening methods is not an easy task. Quality-assessment indexes, like Q4, spectral angle mapper, and relative global synthesis error, require a reference image at the same resolution as the fused image. In the absence of such a reference image, the quality of pansharpening is assessed at a degraded resolution only. The recently proposed index of Quality Not requiring a Reference (QNR) is one among very few tools available for assessing the quality of pansharpened images at the desired high resolution. However, it would be desirable to cross the outcomes of several independent quality-assessment indexes, in order to better determine the quality of pansharpened images. 
In this paper, we propose a method to assess fusion quality at the highest resolution, without requiring a high-resolution reference image. The novel method makes use of digital filters matching the modulation transfer functions (MTFs) of the imaging-instrument channels. Spectral quality is evaluated according to Wald's spectral consistency property. Spatial quality measures interscale changes by matching spatial details, extracted from the multispectral bands and from the panchromatic image by means of the high-pass complement of MTF filters. Eventually, we highlight the necessary and sufficient condition criteria for quality-assessment indexes by developing a pansharpening method optimizing the QNR spatial index and assessing the quality of fused images by using the proposed protocol.) <|cite_end|>, and high-pass modulation (HPM) <|cite_start|> (Reference: Smoothing filter-based intensity modulation:
A spectral preserve image fusion technique for improving
spatial details: Image fusion techniques are widely used to integrate a lower spatial resolution multispectral image with a higher spatial resolution panchromatic image, such as Thematic Mapper (TM) multispectral band and SPOT Panchromatic images. However, the existing techniques either cannot avoid distorting the image spectral properties or involve complicated and time-consuming frequency decomposition and re-construction processing. A simple spectral preserve fusion technique: the Smoothing Filter-based Intensity Modulation (SFIM) has thus been developed based on a simpli ed solar radiation and land surface re ection model. By using a ratio between a higher resolution image and its low pass ltered (with a smoothing lter) image, spatial details can be modulated to a co-registered lower resolution multispectral image without altering its spectral properties and contrast. The technique can be applied to improve spatial resolution for either colour composites or individual bands. The delity to spectral property and the spatial textural quality of SFIM are convincingly demonstrated by an image fusion experiment using TM and SPOT Panchromatic images of south-east Spain. The visual evaluation and statistical analysis compared with HSI and Brovey transform techniques con rmed that SFIM is a superior fusion technique for improving spatial detail of multispectral images with their spectral properties reliably preserved.) <|cite_end|> <|cite_start|> (Reference: Liu 'Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details': We discuss a critical point of the paper. It is shown on the one hand that the author of this paper has mistaken the change of spectral content with the change in spatial resolution, and on the other hand, that the protocol used to establish the advantages of his smoothing filter-based intensity modulation (SFIM) technique over other methods is not appropriate at all. Though the SFIM technique has its merits, the article does not demonstrate its expected qualities.) <|cite_end|> <|cite_start|> (Reference: Remote sensing: models and methods for image processing: The Nature of Remote Sensing: Introduction. Remote Sensing. Information Extraction from Remote-Sensing Images. Spectral Factors in Remote Sensing. Spectral Signatures. Remote-Sensing Systems. Optical Sensors. Temporal Characteristics. Image Display Systems. Data Systems. Summary. Exercises. References. Optical Radiation Models: Introduction. Visible to Short Wave Infrared Region. Solar Radiation. Radiation Components. Surface-Reflected. Unscattered Component. Surface-Reflected. Atmosphere-Scattered Component. Path-Scattered Component. Total At-Sensor. Solar Radiance. Image Examples in the Solar Region. Terrain Shading. Shadowing. Atmospheric Correction. Midwave to Thermal Infrared Region. Thermal Radiation. Radiation Components. Surface-Emitted Component. Surface-Reflected. Atmosphere-Emitted Component. Path-Emitted Component. Total At-Sensor. Emitted Radiance. Total Solar and Thermal Upwelling Radiance. Image Examples in the Thermal Region. Summary. Exercises. References. Sensor Models: Introduction. Overall Sensor Model. Resolution. The Instrument Response. Spatial Resolution. Spectral Resolution. Spectral Response. Spatial Response. Optical PSFopt. Image Motion PSFIM. Detector PSFdet. Electronics PSFel. Net PSFnet. Comparison of Sensor PSFs. PSF Summary for TM. Imaging System Simulation. Amplification. Sampling and Quantization. Simplified Sensor Model. 
Geometric Distortion. Orbit Models. Platform Attitude Models. Scanner Models. Earth Model. Line and Whiskbroom ScanGeometry. Pushbroom Scan Geometry. Topographic Distortion. Summary. Exercises. References. Data Models: Introduction. A Word on Notation. Univariate Image Statistics. Histogram. Normal Distribution. Cumulative Histogram. Statistical Parameters. Multivariate Image Statistics. Reduction to Univariate Statistics. Noise Models. Statistical Measures of Image Quality. Contrast. Modulation. Signal-to-Noise Ratio (SNR). Noise Equivalent Signal. Spatial Statistics. Visualization of Spatial Covariance. Covariance with Semivariogram. Separability and Anisotropy. Power Spectral Density. Co-occurrence Matrix. Fractal Geometry. Topographic and Sensor Effects. Topography and Spectral Statistics. Sensor Characteristics and Spectral Stastistics. Sensor Characteristics and Spectral Scattergrams. Summary. Exercises. References. Spectral Transforms: Introduction. Feature Space. Multispectral Ratios. Vegetation Indexes. Image Examples. Principal Components. Standardized Principal Components (SPC) Transform. Maximum Noise Fraction (MNF) Transform. Tasseled Cap Tranformation. Contrast Enhancement. Transformations Based on Global Statistics. Linear Transformations. Nonlinear Transformations. Normalization Stretch. Reference Stretch. Thresholding. Adaptive Transformation. Color Image Contrast Enhancement. Min-max Stretch. Normalization Stretch. Decorrelation Stretch. Color Spacer Transformations. Summary. Exercises. References. Spatial Transforms: Introduction. An Image Model for Spatial Filtering. Convolution Filters. Low Pass and High Pass Filters. High Boost Filters. Directional Filters. The Border Region. Characterization of Filtered Images. The Box Filter Algorithm. Cascaded Linear Filters. Statistical Filters. Gradient Filters. Fourier Synthesis. Discrete Fourier Transforms in 2-D. The Fourier Components. Filtering with the Fourier Transform. Transfer Functions. The Power Spectrum. Scale Space Transforms. Image Resolution Pyramids. Zero-Crossing Filters. Laplacian-of-Gaussian (LoG) Filters. Difference-of-Gaussians (DoG) Filters.Wavelet Transforms. Summary. Exercises. References. Correction and Calibration: Introduction. Noise Correction. Global Noise. Sigma Filter. Nagao-Matsuyama Filter. Local Noise. Periodic Noise. Distriping 359. Global,Linear Detector Matching. Nonlinear Detector Matching. Statistical Modification to Linear and Nonlinear Detector. Matching. Spatial Filtering Approaches. Radiometric Calibration. Sensor Calibration. Atmospheric Correction. Solar and Topographic Correction. Image Examples. Calibration and Normalization of Hyperspectral Imagery. AVIRIS Examples. Distortion Correction. Polynomial Distortion Models. Ground Control Points (GCPs). Coordinate Transformation. Map Projections. Resampling. Summary. Exercises References. Registration and Image Fusion: Introduction. What is Registration? Automated GCP Location. Area Correlation. Other Spatial Features. Orthrectification. Low-Resolution DEM. High-Resolution DEM. Hierarchical Warp Stereo. Multi-Image Fusion. Spatial Domain Fusion. High Frequency Modulation. Spectral Domain Fusion. Fusion Image Examples. Summary. Exercises. References. Thematic Classification: Introduction. The Importance of Image Scale. The Notion of Similarity. Hard Versus Soft Classification. Training the Classifier. Supervised Training. Unsupervised Training. K-Means Clustering Algorithm. Clustering Examples. Hybrid Supervised/Unsupervised Training. 
Non-Parametric Classification Algorithms. Level-Slice. Nearest-Mean. Artificial Neural Networks (ANNs). Back-Propagation Algorithm. Nonparametric Classification Examples. Parametric Classification Algorithms. Estimation of Model-Parameters. Discriminant Functions. The Normal Distribution Model. Relation to the Nearest-Mean Classifier. Supervised Classification Examples and Comparison to Nonparametric Classifiers. Segmentation. Region Growing. Region Labeling. Sub-Pixel Classification. The Linear Mixing Model. Unmixing Model. Hyperspectral Image Analysis. Visualization of the Image Cube. Feature Extraction. Image Residuals. Pre-Classification Processing and Feature Extraction. Classification Algorithms. Exercises. Error Analysis. Multitemporal Images. Summary. References. Index.) <|cite_end|> are most widely used.
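For the reader's convenience, the ratio-based idea behind the SFIM technique cited above can be summarized schematically; the notation below is ours and the expression is only a sketch of the cited formulation. The fused band is obtained as
\begin{equation*}
\mathrm{SFIM}_k \;=\; \widetilde{MS}_k \,\frac{P}{P_{\mathrm{low}}},
\end{equation*}
where $\widetilde{MS}_k$ denotes the $k$-th multispectral band upsampled to the panchromatic grid, $P$ the panchromatic image, and $P_{\mathrm{low}}$ a low-pass (smoothing-filter) version of $P$. Since $P/P_{\mathrm{low}}$ is close to one in smooth regions, the local spectral ratios of the multispectral data are essentially preserved while the high-frequency spatial detail of $P$ is injected.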
The main challenging task of pansharpening techniques is to get a good compromise between spatial and spectral quality. The two classes of methods described above exhibit complementary spectral-spatial quality trade-off. Although CS family is usually characterized by a high fidelity in rendering the spatial details in the final product <|cite_start|> (Reference: {Improving component substitution pansharpening through multivariate regression of MS+Pan data: In this paper, multivariate regression is adopted to improve spectral quality, without diminishing spatial quality, in image fusion methods based on the well-established component substitution (CS) approach. A general scheme that is capable of modeling any CS image fusion method is presented and discussed. According to this scheme, a generalized intensity component is defined as the weighted average of the multispectral (MS) bands. The weights are obtained as regression coefficients between the MS bands and the spatially degraded panchromatic (Pan) image, with the aim of capturing the spectral responses of the sensors. Once it has been integrated into the Gram-Schmidt spectral-sharpening method, which is implemented in environment for visualizing images (ENVI) program, and into the generalized intensity-hue-saturation fusion method, the proposed preprocessing module allows the production of fused images of the same spatial sharpness but of increased spectral quality with respect to the standard implementations. In addition, quantitative scores carried out on spatially degraded data clearly confirm the superiority of the enhanced methods over their baselines.) <|cite_end|>, it often suffers from significant spectral distortion. This is due to the fact that the panchromatic image does not cover exactly the same wavelengths as the spectral sensors <|cite_start|> (Reference: Synthesis of Multispectral Images to High Spatial Resolution: A Critical Review of Fusion Methods Based on Remote Sensing Physics: Our framework is the synthesis of multispectral images (MS) at higher spatial resolution, which should be as close as possible to those that would have been acquired by the corresponding sensors if they had this high resolution. This synthesis is performed with the help of a high spatial but low spectral resolution image: the panchromatic (Pan) image. The fusion of the Pan and MS images is classically referred as pan-sharpening. A fused product reaches good quality only if the characteristics and differences between input images are taken into account. Dissimilarities existing between these two data sets originate from two causes-different times and different spectral bands of acquisition. Remote sensing physics should be carefully considered while designing the fusion process. Because of the complexity of physics and the large number of unknowns, authors are led to make assumptions to drive their development. Weaknesses and strengths of each reported method are raised and confronted to these physical constraints. The conclusion of this critical survey of literature is that the choice in the assumptions for the development of a method is crucial, with the risk to drastically weaken fusion performance. It is also shown that the Amelioration de la Resolution Spatiale par Injection de Structures concept prevents from introducing spectral distortion into fused products and offers a reliable framework for further developments.) 
<|cite_end|> <|cite_start|> (Reference: A survey of classical methods and new trends in pansharpening of multispectral images: ) <|cite_end|> <|cite_start|> (Reference: A Critical Comparison Among Pansharpening Algorithms: Pansharpening aims at fusing a multispectral and a panchromatic image, featuring the result of the processing with the spectral resolution of the former and the spatial resolution of the latter. In the last decades, many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, available image data sets for benchmarking, and standardized implementations of the algorithms makes a thorough evaluation and comparison of the different pansharpening techniques difficult to achieve. In this paper, the authors attempt to fill this gap by providing a critical description and extensive comparisons of some of the main state-of-the-art pansharpening methods. In greater details, several pansharpening algorithms belonging to the component substitution or multiresolution analysis families are considered. Such techniques are evaluated through the two main protocols for the assessment of pansharpening results, i.e., based on the full- and reduced-resolution validations. Five data sets acquired by different satellites allow for a detailed comparison of the algorithms, characterization of their performances with respect to the different instruments, and consistency of the two validation procedures. In addition, the implementation of all the pansharpening techniques considered in this paper and the framework used for running the simulations, comprising the two validation procedures and the main assessment indexes, are collected in a MATLAB toolbox that is made available to the community.) <|cite_end|>. On the contrary, MRA-based fusion aims at preserving the whole content of the low-resolution data and adding further information obtained from the panchromatic through spatial filtering <|cite_start|> (Reference: {Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation: In various applications of remote sensing, when high spatial resolution is required in addition with classification results, sensor fusion is a solution. From a set of images with different spatial and spectral resolutions, the aim is to synthesize images with the highest spatial resolution available in the set and with an appropriate spectral content. Several sensor fusion methods exist; most of them improve the spatial resolution but provide poor quality of the spectral content of the resulting image. Based on a multiresolution modeling of the information, the ARSIS concept (from its French name "Amélioration de la Résolution Spatiale par Injection de Structures") was designed with the aim of improving the spatial resolution together with a high quality in the spectral content of the synthesized images. The general case for the application of this concept is described. A quantitative comparison of all presented methods is achieved for a SPOT image. Another example of the fusion of SPOT XS (20-m) and KVR-1000 (2-m) images is given. Practical information for the implementation of the wavelet transform, the multiresolution analysis, and the ARSIS concept by practitioners is given with particular relevance to SPOT and Landsat imagery.) <|cite_end|>.
In contrast to CS, MRA family is more successful in spectral preservation but it often experiences spatial distortions like ringing or staircasing effects <|cite_start|> (Reference: Synthesis of Multispectral Images to High Spatial Resolution: A Critical Review of Fusion Methods Based on Remote Sensing Physics: Our framework is the synthesis of multispectral images (MS) at higher spatial resolution, which should be as close as possible to those that would have been acquired by the corresponding sensors if they had this high resolution. This synthesis is performed with the help of a high spatial but low spectral resolution image: the panchromatic (Pan) image. The fusion of the Pan and MS images is classically referred as pan-sharpening. A fused product reaches good quality only if the characteristics and differences between input images are taken into account. Dissimilarities existing between these two data sets originate from two causes-different times and different spectral bands of acquisition. Remote sensing physics should be carefully considered while designing the fusion process. Because of the complexity of physics and the large number of unknowns, authors are led to make assumptions to drive their development. Weaknesses and strengths of each reported method are raised and confronted to these physical constraints. The conclusion of this critical survey of literature is that the choice in the assumptions for the development of a method is crucial, with the risk to drastically weaken fusion performance. It is also shown that the Amelioration de la Resolution Spatiale par Injection de Structures concept prevents from introducing spectral distortion into fused products and offers a reliable framework for further developments.) <|cite_end|> <|cite_start|> (Reference: A survey of classical methods and new trends in pansharpening of multispectral images: ) <|cite_end|> <|cite_start|> (Reference: A Critical Comparison Among Pansharpening Algorithms: Pansharpening aims at fusing a multispectral and a panchromatic image, featuring the result of the processing with the spectral resolution of the former and the spatial resolution of the latter. In the last decades, many algorithms addressing this task have been presented in the literature. However, the lack of universally recognized evaluation criteria, available image data sets for benchmarking, and standardized implementations of the algorithms makes a thorough evaluation and comparison of the different pansharpening techniques difficult to achieve. In this paper, the authors attempt to fill this gap by providing a critical description and extensive comparisons of some of the main state-of-the-art pansharpening methods. In greater details, several pansharpening algorithms belonging to the component substitution or multiresolution analysis families are considered. Such techniques are evaluated through the two main protocols for the assessment of pansharpening results, i.e., based on the full- and reduced-resolution validations. Five data sets acquired by different satellites allow for a detailed comparison of the algorithms, characterization of their performances with respect to the different instruments, and consistency of the two validation procedures. In addition, the implementation of all the pansharpening techniques considered in this paper and the framework used for running the simulations, comprising the two validation procedures and the main assessment indexes, are collected in a MATLAB toolbox that is made available to the community.) 
<|cite_end|>. However, as pointed out by Aiazzi {\it et al.} <|cite_start|> (Reference: MTF-tailored multiscale fusion of high-resolution MS and pan imagery: This work presents a multiresolution framework for merging a multispectral image having an arbitrary number of bands with a higher-resolution panchromatic observation. The fusion method relies on the generalized Laplacian pyramid (GLP), which is a multiscale, oversampled structure. The goal is to selectively perform injection of spatial frequencies from an image to another with the constraint of thoroughly retaining the spectral information of the coarser data. The novel idea is that a model of the modulation transfer functions (MTF) of the multispectral scanner is exploited to design the GLP reduction filter. Thus, the interband structure model (IBSM), which is calculated at the coarser scale, where both MS and PAN data are available, can be extended to the finer scale, without the drawback of the poor enhancement occurring when MTFs are assumed to be ideal filters. Experiments carried out on QuickBird data demonstrate that a superior spatial enhancement, besides the spectral quality typical of injection methods, is achieved by means of the MTF-adjusted fusion.) <|cite_end|>, if the frequency response of the low-pass filter used in the multiscale decomposition matches the Modulation Transfer Function (MTF) of the spectral channel into which details are injected, the spatial enhancement of MRA-based methods is comparable to that of CS.
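To make the distinction between the two families concrete, they are often summarized by the following generic injection schemes; the notation is ours and these expressions are schematic rather than taken verbatim from the cited works:
\begin{align*}
\text{CS:}\quad & \widehat{MS}_k \;=\; \widetilde{MS}_k + g_k\,\bigl(P - I_L\bigr), \qquad I_L = \textstyle\sum_i w_i\,\widetilde{MS}_i,\\
\text{MRA:}\quad & \widehat{MS}_k \;=\; \widetilde{MS}_k + g_k\,\bigl(P - P_L\bigr),
\end{align*}
where $\widetilde{MS}_k$ is the upsampled $k$-th spectral band, $g_k$ an injection gain, $I_L$ an intensity component obtained as a weighted combination of the spectral bands, and $P_L$ a low-pass version of the panchromatic $P$. The observation by Aiazzi {\it et al.} mentioned above corresponds to designing the filter that produces $P_L$ so that its frequency response matches the MTF of the spectral channel.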
Variational techniques have recently emerged as a promising direction of research since they effectively combine aspects of different methods into a single mathematical framework. Ballester {\it et al.} <|cite_start|> (Reference: A Variational Model for P+XS Image Fusion: ) <|cite_end|> were the first to introduce a variational formulation for pansharpening, which they called P+XS. The authors assumed that the low-resolution channels are formed from the underlying high-resolution ones by low-pass filtering followed by subsampling. They considered a regularization term forcing the edges of each spectral band to line up with those of the panchromatic. Furthermore, the P+XS functional incorporated an additional term according to which the panchromatic is a linear combination of the spectral components which are to be computed (a schematic form of such energies is sketched at the end of this paragraph). Duran {\it et al.} <|cite_start|> (Reference: A nonlocal variational model for pansharpening image fusion: Pansharpening refers to the fusion process of inferring a high-resolution multispectral image from a high-resolution panchromatic image and a low-resolution multispectral one. In this paper we propose a new variational method for pansharpening which incorporates a nonlocal regularization term and two fidelity terms, one describing the relation between the panchromatic image and the high-resolution spectral channels and the other one preserving the colors from the low-resolution modality. The nonlocal term is based on the image self-similarity principle applied to the panchromatic image. The existence and uniqueness of minimizer for the described functional is proved in a suitable space of weighted integrable functions. Although quite successful in terms of relative error, state-of-the-art pansharpening methods introduce relevant color artifacts. These spectral distortions can be significantly reduced by involving the image self-similarity. Extensive comparisons with state-of-the-art algorithms are performed.) <|cite_end|> proposed to keep the variational formulation introduced by Ballester {\it et al.} <|cite_start|> (Reference: A Variational Model for P+XS Image Fusion: ) <|cite_end|> while incorporating nonlocal regularization that takes advantage of image self-similarities and leads to a significant reduction of color artifacts. In this setting, the panchromatic image is used to derive relationships among patches describing the geometry of the desired fused image. The general idea of diffusing a color image conditionally on the geometry of any other, in particular, on the geometry of its associated grayscale intensity image, was originally proposed by Buades {\it et al.} <|cite_start|> (Reference: Conditional image diffusion: In this paper, a theoretical framework for the conditional diffusion of digital
images is presented. Different approaches have been proposed to solve this
problem by extrapolating the idea of the anisotropic diffusion for a grey level
images to vector-valued images. Then, the diffusion of each channel is
conditioned to a direction which normally takes into account information from
all channels. In our approach, the diffusion model assumes the a priori
knowledge of the diffusion direction during all the process.
The consistency of the model is shown by proving the existence and uniqueness
of solution for the proposed equation from the viscosity solutions theory. Also
a numerical scheme adapted to this equation based on the neighborhood filter is proposed.
Finally, we discuss several applications and we compare the corresponding
numerical schemes for the proposed model.) <|cite_end|>. Several other variational models have been proposed so far <|cite_start|> (Reference: A New Pan-Sharpening Method Using a Compressed Sensing Technique: This paper addresses the remote sensing image pan-sharpening problem from the perspective of compressed sensing (CS) theory which ensures that with the sparsity regularization, a compressible signal can be correctly recovered from the global linear sampled data. First, the degradation model from a high- to low-resolution multispectral (MS) image and high-resolution panchromatic (PAN) image is constructed as a linear sampling process which is formulated as a matrix. Then, the model matrix is considered as the measurement matrix in CS, so pan-sharpening is converted into signal restoration problem with sparsity regularization. Finally, the basis pursuit (BP) algorithm is used to resolve the restoration problem, which can recover the high-resolution MS image effectively. The QuickBird and IKONOS satellite images are used to test the proposed method. The experimental results show that the proposed method can well preserve spectral and spatial details of the source images. The pan-sharpened high-resolution MS image by the proposed method is competitive or even superior to those images fused by other well-known methods.) <|cite_end|> <|cite_start|> (Reference: Pansharpening using total variation regularization: In remote sensing, pansharpening refers to the technique that combines the complementary spectral and spatial resolution characteristics of a multispectral image and a panchromatic image, with the objective to generate a high-resolution color image. This paper presents a new pansharpening method based on the minimization of a variant of total variation. We consider the fusion problem as the colorization of each pixel in the panchromatic image. A new term concerning the gradient of the panchromatic image is introduced in the functional of total variation so as to preserve edges. Experimental results on IKONOS satellite images demonstrate the effectiveness of the proposed method.) <|cite_end|> <|cite_start|> (Reference: A variational approach for sharpening high dimensional images: Earth-observing satellites usually not only take ordinary red-green-blue images but also provide several images including the near-infrared and infrared spectrum. These images are called multispectral, for about four to seven different bands, or hyperspectral, for higher dimensional images of up to 210 bands. The drawback of the additional spectral information is that each spectral band has rather low spatial resolution. In this paper we propose a new variational method for sharpening high dimensional spectral images with the help of a high resolution gray-scale image while preserving the spectral characteristics used for classification and identification tasks. We describe the application of split Bregman minimization to our energy, prove convergence speed, and compare the split Bregman method to a descent method based on the ideas of alternating directions minimization. Finally, we show results on Quickbird multispectral as well as on AVIRIS hyperspectral data.) <|cite_end|> <|cite_start|> (Reference: A new pansharpening method using an explicit image formation model regularized via Total Variation: In this paper we present a new method for the pansharpening of multi-spectral satellite imagery. 
This method is based on a simple explicit image formation model which leads to an ill posed problem that needs to be regularized for best results. We use both Tikhonov (ridge regression) and Total Variation (TV) regularization. We develop the solutions to these two problems and then we address the problem of selecting the optimal regularization parameter λ. We find the value of λ that minimizes Stein's unbiased risk estimate (SURE). For ridge regression this leads to an analytical expression for SURE while for the TV regularized solution we use Monte Carlo SURE where the estimate is obtained by stochastic means. Finally, we present experiment results where we use quality metrics to evaluate the spectral and spatial quality of the resulting pansharpened image.) <|cite_end|> <|cite_start|> (Reference: A Sparse Image Fusion Algorithm With Application to Pan-Sharpening: Data provided by most optical Earth observation satellites such as IKONOS, QuickBird, and GeoEye are composed of a panchromatic channel of high spatial resolution (HR) and several multispectral channels at a lower spatial resolution (LR). The fusion of an HR panchromatic and the corresponding LR spectral channels is called “pan-sharpening.” It aims at obtaining an HR multispectral image. In this paper, we propose a new pan-sharpening method named Sparse F usion of Images (SparseFI, pronounced as “sparsify”). SparseFI is based on the compressive sensing theory and explores the sparse representation of HR/LR multispectral image patches in the dictionary pairs cotrained from the panchromatic image and its downsampled LR version. Compared with conventional methods, it “learns” from, i.e., adapts itself to, the data and has generally better performance than existing methods. Due to the fact that the SparseFI method does not assume any spectral composition model of the panchromatic image and due to the super-resolution capability and robustness of sparse signal reconstruction algorithms, it gives higher spatial resolution and, in most cases, less spectral distortion compared with the conventional methods.) <|cite_end|> <|cite_start|> (Reference: A Regularized Model-Based Optimization Framework for Pan-Sharpening: Pan-sharpening is a common postprocessing operation for captured multispectral satellite imagery, where the spatial resolution of images gathered in various spectral bands is enhanced by fusing them with a panchromatic image captured at a higher resolution. In this paper, pan-sharpening is formulated as the problem of jointly estimating the high-resolution (HR) multispectral images to minimize an objective function comprised of the sum of squared residual errors in physically motivated observation models of the low-resolution (LR) multispectral and the HR panchromatic images and a correlation-dependent regularization term. The objective function differs from and improves upon previously reported model-based optimization approaches to pan-sharpening in two major aspects: 1) a new regularization term is introduced and 2) a highpass filter, complementary to the lowpass filter for the LR spectral observations, is introduced for the residual error corresponding to the panchromatic observation model. To obtain pan-sharpened images, an iterative algorithm is developed to solve the proposed joint minimization. The proposed algorithm is compared with previously proposed methods both visually and using established quantitative measures of SNR, spectral angle mapper, relative dimensionless global error in synthesis, Q, and Q4 indices. 
Both the quantitative results and visual evaluation demonstrate that the proposed joint formulation provides superior results compared with pre-existing methods. A software implementation is provided.) <|cite_end|> <|cite_start|> (Reference: A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors: The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: 1) a low spatial resolution multispectral image and 2) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain, we encourage low-rank structure, whereas in the spatial domain, we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and fused multispectral images. A weighted version of the vector total variation norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low-resolution multispectral images by linear regression while the second one employs the principal component pursuit to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed split augmented Lagrangian shrinkage algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared with the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Pan-sharpening of multi-spectral images using a new variational model: In remote-sensing image processing, pan-sharpening is used to obtain a high-resolution multi-spectral image by combining a low-resolution multi-spectral image with a corresponding high-resolution panchromatic image. In this article, to preserve the geometry, spectrum, and correlation information of the original images, three hypotheses are presented, i.e. (1) the geometry information contained in the pan-sharpened image should also be contained in the panchromatic bands; (2) the upsampled multi-spectral image can be seen as a blurred form of the fused image with an unknown kernel; and (3) the fused bands should keep the correlation between each band of the upsampled multi-spectral image. A variational energy functional is then built based on the assumptions, of which the minimizer is the target fused image. 
The existence of a minimizer of the proposed energy is further analysed, and the numerical scheme based on the split Bregman framework is presented. To verify the validity, the new proposed method is compared with several state-of-the-art techniques using QuickBird data in subjective, objective, and efficiency aspects. The results show that the proposed approach performs better than some compared methods according to the performance metrics.) <|cite_end|>. A detailed overview of variational techniques is given in Section \ref{sec:variational}.
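As anticipated above, a schematic P+XS-type energy, given here only to illustrate the ingredients that the cited variational models combine (the exact functionals differ from one work to another), reads
\begin{equation*}
E(u_1,\dots,u_N) \;=\; \sum_{k}\int_\Omega \bigl|\theta^{\perp}\!\cdot\nabla u_k\bigr|\,dx \;+\; \lambda \sum_{k}\int_\Omega \Bigl(\bigl(h \ast u_k\bigr)\!\downarrow_s - MS_k\Bigr)^{2} dx \;+\; \mu \int_\Omega \Bigl(\sum_{k}\alpha_k u_k - P\Bigr)^{2} dx,
\end{equation*}
where $u_k$ are the sought high-resolution bands, $\theta = \nabla P / |\nabla P|$ encodes the geometry (level lines) of the panchromatic, $h$ is a low-pass kernel, $\downarrow_s$ denotes subsampling, $MS_k$ are the observed low-resolution bands, and $\alpha_k$ are the weights of the linear combination assumption. Nonlocal variants replace the first term by a nonlocal regularizer whose weights are computed from patch similarities in $P$.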
Most of the pansharpening techniques previously mentioned make use of the linear combination assumption and need all data to be geometrically aligned. Unfortunately, neither requirement is satisfied by real satellite imagery, for which the different spectral bands are not originally co-registered, and registering them prior to pansharpening is not recommended because of the strong aliasing. Indeed, the panchromatic and spectral bands are acquired according to the Push-Broom principle of CCD arrays placed in the focal plane of a telescope. The sensors are shifted within the focal plane in the direction of the satellite motion, and the same point on the ground is not captured at the same time by all sensors or strictly under the same angle. Furthermore, one of the most relevant drawbacks of this acquisition system is the strong aliasing of the spectral bands, which usually produces jagged edges, color distortions, and stair-step effects. The MTF has low values near Nyquist for the panchromatic, thus almost avoiding undesirable aliasing effects. On the contrary, the MTF of the spectral bands having high values at Nyquist results in aliased spectral data as illustrated in Figure \ref{fig:satellite_aliasing}. Baronti {\it et al.} <|cite_start|> (Reference: A theoretical analysis of the effects of aliasing and misregistration on pansharpened imagery: In this paper, the characteristics of multispectral (MS) and panchromatic (P) image fusion methods are investigated. Depending on the way spatial details are extracted from P, pansharpening methods can be broadly labeled into two main classes, corresponding to methods based on either component substitution (CS) or multiresolution analysis (MRA). Theoretical investigations and experimental results evidence that CS-based fusion is far less sensitive than MRA-based fusion to: 1) registration errors, i.e., spatial misalignments between MS and P images, possibly originated by cartographic projection and resampling of individual data sets; 2) aliasing occurring in MS bands and stemming from modulation transfer functions (MTF) of MS channels that are excessively broad for the sampling step. In order to assess the sensitiveness of methods, aliasing is simulated at degraded spatial scale by means of several MTF-shaped digital filters. Analogously, simulated misalignments, carried out at both full and degraded scale, evidence the quality-shift tradeoff of the two classes. MRA yields a slightly superior quality in the absence of aliasing/misalignments, but is more penalized than CS, whenever either aliasing or shifts between MS and P occur. Conversely, CS generally produces a slightly lower quality, but is intrinsically more aliasing/shift tolerant.) <|cite_end|> studied how several pansharpening methods proposed in the literature behave in the presence of misregistration and aliasing. Under general and likely assumptions, the authors proved that CS is less sensitive than MRA to these drawbacks, provided these are of moderate extent.
\begin{figure}[!t]
\centering
\begin{tabular}{cc}
\includegraphics[trim= 5cm 34cm 55cm 26cm, clip=true, width=0.4\textwidth]{pleiades_bicubic.png}
\includegraphics[trim= 20cm 47cm 41cm 14cm, clip=true, width=0.4\textwidth]{pleiades_bicubic.png}
\end{tabular}
\caption{Upsampled spectral data, extracted from the same scene as in Figure \ref{fig:datasetPleiades}, where all bands have been registered into a common geometry. Note that strong aliasing is apparent in both images.}
\label{fig:satellite_aliasing}
\end{figure}
In this paper, we propose a new nonlocal variational model for the pansharpening of real satellite images. Compared to the previous work <|cite_start|> (Reference: A nonlocal variational model for pansharpening image fusion: Pansharpening refers to the fusion process of inferring a high-resolution multispectral image from a high-resolution panchromatic image and a low-resolution multispectral one. In this paper we propose a new variational method for pansharpening which incorporates a nonlocal regularization term and two fidelity terms, one describing the relation between the panchromatic image and the high-resolution spectral channels and the other one preserving the colors from the low-resolution modality. The nonlocal term is based on the image self-similarity principle applied to the panchromatic image. The existence and uniqueness of minimizer for the described functional is proved in a suitable space of weighted integrable functions. Although quite successful in terms of relative error, state-of-the-art pansharpening methods introduce relevant color artifacts. These spectral distortions can be significantly reduced by involving the image self-similarity. Extensive comparisons with state-of-the-art algorithms are performed.) <|cite_end|>, no assumption on the co-registration of spectral data is made. Furthermore, a new constraint imposing the preservation of the radiometric ratio between the panchromatic and each spectral band is introduced, replacing the classical linearity assumption. In practice, this energy term injects the high frequencies of the panchromatic into each high-resolution spectral component one seeks to estimate. The energy minimization can be performed independently for each channel, which permits the optimization of each spectral band in its own geometry and makes the method applicable to misregistered and aliased spectral data. Since the functional is strictly convex and quadratic, we design an efficient numerical scheme based on gradient descent.
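Although the precise functional is introduced later in the paper, one simple way to encode a radiometric-ratio constraint of the kind described above, given here only as an illustration and not necessarily the exact term used in Section \ref{sec:newmodel}, is the quadratic penalty
\begin{equation*}
\int_\Omega \bigl(P_L\, u_k - P\,\widetilde{MS}_k\bigr)^{2}\,dx,
\end{equation*}
which vanishes when $u_k / P = \widetilde{MS}_k / P_L$, that is, when the fused band reproduces at high resolution the radiometric ratio observed between the low-pass panchromatic $P_L$ and the upsampled spectral band $\widetilde{MS}_k$. Such a term is quadratic in $u_k$ and couples $u_k$ only with the panchromatic and with its own low-resolution observation, which is consistent with the band-decoupled minimization mentioned above.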
The rest of the paper is organized as follows. Section \ref{sec:variational} introduces the variational formulation of pansharpening. We propose in Section \ref{sec:newmodel} a new nonlocal band-decoupled variational model that allows us to deal with misregistered and aliased spectral data. We also give detailed explanations on how to compute numerically the minimizer of the energy functional. Section \ref{sec:discussion} discusses the choices made in the design of the model, with especial attention to the validity of the linear combination and co-registration assumptions on real satellite imagery. We perform an exhaustive comparison between the proposed model and the most relevant classical and state-of-the-art pansharpening techniques in Section \ref{sec:experiments}, followed by the conclusions in Section \ref{sec:conclusions}. <|paper_end|> | [
"<|reference_start|> Structuring contemporary remote sensing image fusion: The exploitation of multi-sensor images at pixel level is a widely implemented research field in Earth observation. In this context, image fusion plays an important role since it effectively combines complementary image content to enhance information contained in the individual datasets. This article presents an overview of the existing fusion techniques and their achievements for Earth scientists. This research started off with the compilation of a database on remote sensing image fusion journal publications. Research results were exploited, grouping the literature into different aspects of relevance. Six categories of information have been built according to the journal, the application, sensors that provided the images used in the case study, applied fusion techniques, areas of achievement, and on-going research highlighting unresolved questions and current science. This resulted in an overview on the categorisation of image fusion techniques, explanation of the various approaches used within a certain category, and description of particularities when dealing with the fusion of optical and radar imagery. Even though many researchers intend to find the best algorithm, there is a greater need to define an appropriate workflow prior to processing the imagery with the knowledge in all related fields, that is, remote sensing image fusion and the desired application to address the different aspects of error propagation. <|reference_end|>",
"<|reference_start|> A new look at IHS-like image fusion methods: <|reference_end|>",
"<|reference_start|> Fast and Efficient Panchromatic Sharpening: Certain sensors such as IKONOS produce panchromatic and multispectral (MS) images at different spatial resolutions. Several efforts have been made to increase the resolution of these MS images using panchromatic image information. In this paper, we present a fast and efficient panchromatic sharpening method that accurately estimates missing high-frequency components. We also use a postprocessing technique to correct color distortion. Experimental results show that the proposed method produced high-quality images and outperformed existing panchromatic sharpening methods in terms of objective quality measures such as universal image quality index, Q4, relative average spectral error, Erreur Relative Globale Adimensionnelle de SynthE¿se, and correlation. <|reference_end|>",
"<|reference_start|> {Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation: In various applications of remote sensing, when high spatial resolution is required in addition with classification results, sensorfusion is a solution. From a set of images with different spatial and spectral resolutions, the aim is to synthesize images with the highest spatial resolution available in the set and with an appropriate spectral content. Several sensor fusion methods exist; most of them improve the spatial resolution but provide poor quality of the spectral content of the resulting image. Based on a multiresolution modeling of the information, the ARsIs concept [from its French name \"Am6lioration de la R6solution Spatiale par Injection de Structures\") was designed with the aim of improving the spatial resolution together with a high quality in the spectral content ofthe synthesized images. The general case for the application of this concept is described. A quantitative comparison of all presented methods is achieved for a SPOT image. Another example of the fusion of SPOTXS (20-m) and KVR-1000 (2-m) images is given. Practical information for the implementation of the wavelet transform, the multiresolution analysis, and the ARSIS concept by practitioners is given with particular relevance to SPOT and Landsat imagew. lntroductlon <|reference_end|>"
] | [
1,
4,
15,
36
] | {"<|multi_cite_1_1|>": "ss-1964800", "<|multi_cite_1_2|>": "ss-1997559", "<|multi_cite_1_3|>": "ss-1269543", "<|multi_cite_2_1|>": "ss-1997560", "<|multi_cite_2_2|>": "ss-2278329", "<|multi_cite_2_3|>": "ss-1572408", "<|multi_cite_3_1|>": "ss-905513", "<|multi_cite_3_2|>": "ss-1447607", "<|multi_cite_4_2|>": "ss-695683", "<|multi_cite_5_1|>": "ss-1997561", "<|multi_cite_5_2|>": "ss-1331366", "<|cite_6|>": "ss-1674830", "<|cite_7|>": "ss-1834290", "<|multi_cite_8_1|>": "ss-1678866", "<|multi_cite_8_2|>": "ss-867668", "<|multi_cite_8_3|>": "ss-1834299", "<|cite_9|>": "ss-905512", "<|cite_10|>": "ss-2550516", "<|multi_cite_11_1|>": "ss-1278574", "<|multi_cite_11_2|>": "ss-854016", "<|multi_cite_11_3|>": "ss-1997562", "<|multi_cite_11_4|>": "ss-2278330", "<|multi_cite_11_5|>": "ss-904925", "<|multi_cite_11_6|>": "ss-1269545", "<|multi_cite_11_7|>": "ss-2490654", "<|multi_cite_12_1|>": "ss-1447607", "<|multi_cite_12_2|>": "ss-1997563", "<|multi_cite_12_3|>": "ss-766239", "<|multi_cite_12_4|>": "ss-712160", "<|multi_cite_13_1|>": "ss-1618811", "<|multi_cite_13_2|>": "ss-1997564", "<|multi_cite_13_3|>": "ss-766239", "<|cite_14|>": "ss-695683", "<|multi_cite_15_1|>": "ss-1964800", "<|multi_cite_15_2|>": "ss-2466688", "<|multi_cite_15_3|>": "ss-1269543", "<|cite_16|>": "ss-904925", "<|multi_cite_17_1|>": "ss-1964800", "<|multi_cite_17_2|>": "ss-2466688", "<|multi_cite_17_3|>": "ss-1269543", "<|cite_18|>": "ss-867668", "<|cite_19|>": "ss-1150634", "<|cite_20|>": "ss-2490655", "<|cite_21|>": "ss-1150634", "<|cite_22|>": "ss-1997565", "<|multi_cite_23_1|>": "ss-1331371", "<|multi_cite_23_2|>": "ss-1997566", "<|multi_cite_23_3|>": "ss-1435083", "<|multi_cite_23_4|>": "ss-1997567", "<|multi_cite_23_5|>": "ss-1331373", "<|multi_cite_23_6|>": "ss-1393041", "<|multi_cite_23_7|>": "ss-1508985", "<|multi_cite_23_8|>": "ss-1997568", "<|cite_24|>": "ss-1997569", "<|cite_25|>": "ss-2490655"} |
2407.08059 | <|paper_start|> Title: Analysis of extremum seeking control for wind turbine torque controller optimization by aerodynamic and generator power objectives
Abstract: Analysis of extremum seeking control for wind turbine torque controller optimization by aerodynamic and generator power objectives: Wind turbines degrade over time, resulting in varying structural, aeroelastic, and aerodynamic properties. In contrast, the turbine controller calibrations generally remain constant, leading to suboptimal performance and potential stability issues. The calibration of wind turbine controller parameters is therefore of high interest. To this end, several adaptive control schemes based on extremum seeking control (ESC) have been proposed in the literature. These schemes have been successfully employed to maximize turbine power capture by optimization of the $K\omega^2$-type torque controller. In practice, ESC is performed using electrical generator power, which is easily obtained. This paper analyses the feasibility of torque gain optimization using aerodynamic and generator powers. It is shown that, unlike aerodynamic power, using the generator power objective limits the dither frequency to lower values, reducing the convergence rate unless the phase of the demodulation ESC signal is properly adjusted. A frequency-domain analysis of both systems shows distinct phase behavior, impacting ESC performance. A solution is proposed by constructing an objective measure based on an estimate of the aerodynamic power.
Introduction
\noindent The increasing need to accelerate the global transition toward renewable energy sources motivates the development of ever-larger wind turbines with increased power capacities. The upscaling of wind turbines leads to taller support structures and larger, more flexible rotors, bringing challenges for structural and controller design.
Conventional and advanced turbine controller strategies often rely on model accuracy due to the limited measurements available to a turbine controller. As shown in <|cite_start|> (Reference: {On the ill-conditioning of the combined wind speed estimator and tip-speed ratio tracking control scheme: In recent years, industrial controllers for modern wind turbines have been designed as a combined wind speed estimator and tip-speed ratio (WSE-TSR) tracking control scheme. In contrast to the conventional and widely used Kω 2 torque control strategy, the WSE-TSR scheme provides flexibility in terms of controller responsiveness and potentially improves power extraction performance. However, both control schemes heavily rely on prior information about the aerodynamic properties of the turbine rotor. Using a control-oriented linear analysis framework, this paper shows that the WSE-TSR scheme is inherently ill-conditioned. The ill-conditioning is defined as the inability of the scheme to uniquely determine the wind speed from the product with other model parameters in the power balance equation. Uncertainty of the power coefficient contribution in the latter mentioned product inevitably leads to a biased effective wind speed estimate. As a consequence, in the presence of uncertainty, the real-world wind turbine deviates from the intended optimal operating point, while the controller believes that the turbine operates at the desired set-point. Simulation results confirm that inaccurate model parameters lead to biased estimates of the actual turbine operating point, causing sub-optimal power extraction efficiency.) <|cite_end|>, model parameter inaccuracies result in turbine operation away from the intended operating point, potentially causing suboptimal operational behavior and possibly leading to stability issues. Such discrepancies can become more significant over time due to aerodynamic degradation of the rotor by, e.g., wear and tear, bug build-up, and icing <|cite_start|> (Reference: Control of variable-speed wind turbines: standard and adaptive techniques for maximizing energy capture: This article considers an adaptive control scheme previously developed for region 2 control of a variable speed wind turbine. In this paper, the question of theoretical stability of the torque controller is addressed, showing that the rotor speed is asymptotically stable under the torque control law in the constant wind speed input case and L/sub 2/ stable with respect to time-varying wind input. Further, a method is derived for selecting /spl gamma//sub /spl Delta/M/ in the gain adaptation law to guarantee convergence of the adaptive gain M to its optimal value M*.) <|cite_end|>. As often the relation between turbine degradation and physical turbine properties is a priori unknown, learning schemes capable of online tuning of control systems, through the calibration of internal (physical) model information, are currently of high interest <|cite_start|> (Reference: Proceedings of the American control conference (ACC): ) <|cite_end|> <|cite_start|> (Reference: IFAC World Congress 2023: Fault Detection (FD) and Fault Tolerant Control (FTC) are important topics in the aerospace industry as well as in academia, and thus many research projects and programmes have been conducted in the last two decades. To promote this movement, an open invited track related to FD and FTC focusing on FD and FTC in flight control of civil aircraft is proposed for the IFAC World Congress 2023. 
This open invited track is for a competition to use FD and FTC techniques with a benchmark problem, similar to the “Aerospace Industrial Benchmark on Fault Detection" competition at the IFAC World Congress 2020 in Germany.) <|cite_end|>.
For present-day multi-megawatt turbines, the conventional and (relatively) straightforward $K\omega^2$ (``K-omega-squared'') torque control strategy still shows good power extraction performance. Although the performance of this controller type highly depends on the quality of the model information it is based on, the single torque gain structure allows for convenient direct optimization. In the past decade, numerous works have been published on this aspect, proposing extremum seeking control (ESC) as a viable candidate for real-world and real-time controller optimization.
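For reference, the textbook form of this controller, stated here in its standard form rather than with the exact notation used later in this paper, commands the generator torque as
\begin{equation*}
\tau_g \;=\; K\,\omega^2, \qquad K \;=\; \frac{\pi \rho R^5 C_{P,\max}}{2\,\lambda_\ast^{3}},
\end{equation*}
where $\omega$ is the rotor speed, $\rho$ the air density, $R$ the rotor radius, $C_{P,\max}$ the maximum power coefficient, and $\lambda_\ast$ the corresponding optimal tip-speed ratio; for a geared machine, the gain is additionally scaled by the gearbox ratio when expressed on the generator side. The dependence of $K$ on $C_{P,\max}$ and $\lambda_\ast$ is precisely why aerodynamic degradation renders a fixed gain suboptimal and motivates its online calibration.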
ESC is an adaptive control algorithm that optimizes steady-state input-output mappings of (dynamic) systems that possess (local) optima <|cite_start|> (Reference: {Real-time optimization by extremum-seeking control: From the Publisher:
Extremum seeking is a method of adaptive control outside the classical paradigm or model reference and related schemes. The unique feature of extremum seeking as an optimization tool is that it is an on-line methodology, performed by feedback, and with the ability to achieve rapid convergence. A second distinction between classical adaptive control and extremum seeking is that the latter is not model based. Its non-model based character explains the remarkable popularity of extremum seeking in the last half decade: the recent applications in fluid flow, combustion, and biomedical systems are all characterized by complex, unreliable models. Written by authorities in the field and pioneers in adaptive nonlinear control systems, this book presents both significant theoretic value and important practical potential.) <|cite_end|> <|cite_start|> (Reference: Proceedings of the American control conference (ACC): ) <|cite_end|>. The ESC algorithm is model-free, does not require prior system knowledge, and is based on the notion of time-scale separation. The algorithm extracts information on the gradient of a measurable objective cost with respect to the optimized variable by periodically exciting the system through a dither signal and subsequently demodulating the measured output signals. The algorithm can be applied to optimize time-invariant or slowly time-varying systems.
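To fix ideas, a classical dither-demodulation ESC loop for a scalar parameter $\theta$, written here in its textbook form as a sketch rather than the exact implementation analysed later, operates as
\begin{equation*}
\theta(t) = \hat{\theta}(t) + a\sin(\omega_d t), \qquad
\hat{g}(t) = \frac{2}{a}\,\mathrm{HPF}\bigl[y(t)\bigr]\sin(\omega_d t + \phi), \qquad
\dot{\hat{\theta}}(t) = k\,\mathrm{LPF}\bigl[\hat{g}(t)\bigr],
\end{equation*}
where $y$ is the measured objective, $a$ and $\omega_d$ are the dither amplitude and frequency, $\mathrm{HPF}$ and $\mathrm{LPF}$ denote high- and low-pass filters, $k$ is the adaptation gain, and $\phi$ is the demodulation phase. Time-scale separation requires the dither to be slow with respect to the plant dynamics, or the phase $\phi$ to compensate the plant's phase lag at $\omega_d$; this is exactly the aspect that becomes restrictive for the generator power objective studied below.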
ESC for wind turbine controller optimization was initially employed to optimize the constant pitch angle for turbine energy capture maximization <|cite_start|> (Reference: American Control Conference: ) <|cite_end|>. A similar implementation was later evaluated on a laboratory-scale two-bladed micro wind turbine. The scheme was extended in a multivariable setting by optimizing the combined torque and pitch to maximize energy capture <|cite_start|> (Reference: Maximizing Wind Turbine Energy Capture Using Multivariable Extremum Seeking Control: Maximizing energy capture has become an important issue as more turbines are installed in low wind areas. This paper investigates the application of extremum seeking control (ESC) to maximizing the energy capture of variable speed wind turbines. The optimal control torque and pitch angle are searched via ESC based on the measurement of the rotor power. The advantage of this method is the independency from accurate turbine modelling and wind measurement. Simulation was conducted on FAST for a wind turbine dynamic model, under smooth, turbulent and field recorded wind profiles. The simulation results demonstrated significant improvement in energy capture compared to the standard control with fixed reference. An anti-windup ESC was applied to overcome the integral windup due to actuator saturation which would otherwise disable the ESC process. Finally, the integrator and high-pass filter resetting schemes were applied to improve the transient under the abrupt changes of wind.) <|cite_end|>, the effectiveness of which was later evaluated on a simulation model of the National Renewable Energy Laboratory (NREL) Controls Advanced Research Turbine (CART3) turbine <|cite_start|> (Reference: Experimental evaluation of extremum seeking based region-2 controller for CART3 wind turbine: ) <|cite_end|> and in field-test on the actual CART3 turbine. Further works improved the ESC algorithm's convergence, making it uncorrelated to the mean wind speed by exploiting the logarithm of the power signal as objective <|cite_start|> (Reference: Logarithmic Power Feedback for Extremum Seeking Control of Wind Turbines: ) <|cite_end|>. This latter improvement was validated with full large eddy simulations in <|cite_start|> (Reference: Evaluation of log‐of‐power extremum seeking control for wind turbines using large eddy simulations: ) <|cite_end|>. Later in <|cite_start|> (Reference: Wind Turbine Power Maximization Using Log-Power Proportional-Integral Extremum Seeking: This paper proposes a Log-Power Proportional-Integral Extremum Seeking Control (LP-PIESC) framework for maximizing the power capture of a wind turbine operating at below-rated wind speeds, i.e., the so-called region-2 of a turbine’s power curve. Extremum seeking control (ESC) has emerged as a viable algorithm to maximize energy capture for a wind turbine operating in region-2. Despite the encouraging results of early ESC strategies, the basic algorithm suffers from slow and inconsistent convergence behavior under changing wind speed within region-2. It has been shown that replacing the power signal with its logarithm results in an algorithm that is robust and predictable even when the mean wind speed varies. In addition, new studies have suggested that replacing conventional ESC with proportional plus integral ESC (PIESC) results in faster convergence to optimal conditions. 
In the current paper, the idea of log-power feedback is merged with the PIESC scheme and is applied to tune the parameters of the region-2 torque controller for the NREL 5-MW turbine reference model. The results of this new algorithm are compared with the ESC with log-of-power feedback using NREL OpenFAST simulations. The log-power feedback PIESC is also implemented for the blade pitch set-point angle. Energy capture over the course of the simulations and damage equivalent loads calculated with MLife are used to assess the results. The simulations performed under different turbulent intensity cases demonstrate the rapid convergence of the log-power feedback PIESC.) <|cite_end|>, the convergence of this algorithm was accelerated at the expense of introducing additional tuning parameters.
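The effect of the logarithmic objective mentioned above can be seen from a standard identity, stated here only for clarity:
\begin{equation*}
\frac{\partial}{\partial \theta}\,\ln P(\theta; U) \;=\; \frac{1}{P}\,\frac{\partial P}{\partial \theta},
\end{equation*}
so that multiplicative scalings of the power level, such as the roughly cubic dependence of below-rated power on the mean wind speed $U$, cancel out of the estimated gradient, making the convergence rate of the ESC loop largely independent of the operating wind speed.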
Industrial turbines seldom have the means to measure the aerodynamic torque or power directly; nevertheless, wind turbine generator power is accurately measured. Assuming ideal drivetrain efficiency, aerodynamic and generator power are equal in steady state. Thus, the generator power measurement is a natural optimization objective candidate. However, from exploratory numerical simulations, it was observed by the authors of this paper that applying conventional ESC with a generator-power-based objective posed challenges. Its use might still be feasible with lower dither frequencies, but at the cost of increased calibration complexity and potentially increased convergence times. The underlying cause of this phenomenon has never been described in the open literature and is the motivation for the analysis presented in this paper.
This paper analyses the dynamic properties of the system subject to dither-demodulation ESC, with the torque gain as optimization input and the measured aerodynamic and generator power signals as objective outputs. The dynamics of the two resulting systems are found to differ, and these differences impact the application of dither-demodulation ESC with the measured generator power as optimization objective. This paper thereby presents the following contributions:
\begin{enumerate}
\item Describing, by means of simulations, the implications of implementing generator-power-based ESC.
\item Providing a dynamical analysis for both aerodynamic power and generator power as ESC objectives.
\item Proposing a solution that improves ESC convergence by formulating a new estimated aerodynamic power objective, based on the measured generator power augmented with rotor acceleration dynamics (a schematic sketch of such an estimate is given directly below this list).
\end{enumerate}
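As a sketch of the estimate referred to in the third contribution (the exact construction is given in Section~\ref{sec:S}; the expression below only illustrates the underlying power balance and neglects drivetrain losses), the rotor-side energy balance
\begin{equation*}
J\,\omega\,\dot{\omega} \;=\; P_{\mathrm{aero}} - P_{\mathrm{gen}}
\quad\Longrightarrow\quad
\hat{P}_{\mathrm{aero}} \;=\; P_{\mathrm{gen}} + J\,\omega\,\dot{\omega},
\end{equation*}
with $J$ the equivalent drivetrain inertia, $\omega$ the rotor speed, and $P_{\mathrm{gen}}$ the measured generator power, shows how augmenting the generator power with a rotor acceleration term recovers an estimate of the aerodynamic power; in steady state the acceleration term vanishes and both objectives coincide, consistent with the observation above.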
In this paper, the issue is presented and an intuitive explanation of its cause is provided. Specifically, a frequency-domain analysis based on linearizations is proposed.
The paper is organized as follows. Sections~\ref{sec:T}~and~\ref{sec:ESC} establish the wind turbine model and the employed ESC scheme. Section~\ref{sec:PI} describes torque gain optimization based on both power objectives. Section~\ref{sec:DA} presents a dynamical analysis of the systems considered for operating points around the optimal torque gain value. Section~\ref{sec:S} proposes a solution for faster ESC optimization by estimating the aerodynamic power with an augmented generator power objective. Finally, conclusions are drawn in Section~\ref{sec:C}. <|paper_end|> | [
"<|reference_start|> Control of variable-speed wind turbines: standard and adaptive techniques for maximizing energy capture: This article considers an adaptive control scheme previously developed for region 2 control of a variable speed wind turbine. In this paper, the question of theoretical stability of the torque controller is addressed, showing that the rotor speed is asymptotically stable under the torque control law in the constant wind speed input case and L/sub 2/ stable with respect to time-varying wind input. Further, a method is derived for selecting /spl gamma//sub /spl Delta/M/ in the gain adaptation law to guarantee convergence of the adaptive gain M to its optimal value M*. <|reference_end|>",
"<|reference_start|> Proceedings of the American control conference (ACC): <|reference_end|>",
"<|reference_start|> {Real-time optimization by extremum-seeking control: From the Publisher: \nExtremum seeking is a method of adaptive control outside the classical paradigm or model reference and related schemes. The unique feature of extremum seeking as an optimization tool is that is is an on-line methodology, performed by feedback, and with the ability to achieve rapid convergence. A second distinction between classical adaptive control and extremum seeking is that the latter is not model based. Its non-model based character explains the remarkable popularity of extremum seeking in the last half decade: the recent applications in fluid flow combustion, and biomedical systems are all characterized by complex, unreliable models. Written by authorities in the field and pioneers in adaptive nonlinear control systems, this book presents both significant theoretic value and important practical potential. <|reference_end|>",
"<|reference_start|> Evaluation of log‐of‐power extremum seeking control for wind turbines using large eddy simulations: <|reference_end|>"
] | [
1,
2,
4,
10
] | {"<|cite_1|>": "ss-2096548", "<|cite_2|>": "ss-2096549", "<|multi_cite_3_1|>": "ss-883245", "<|multi_cite_3_2|>": "ss-1518993", "<|multi_cite_5_1|>": "ss-789326", "<|multi_cite_5_2|>": "ss-883245", "<|cite_6|>": "ss-1941112", "<|cite_8|>": "ss-724731", "<|cite_9|>": "ss-2096550", "<|cite_11|>": "ss-2096551", "<|cite_12|>": "ss-2096552", "<|cite_13|>": "ss-2096553"} |
2104.08510 | <|paper_start|> Title: Exploring Deep Learning for Joint Audio-Visual Lip Biometrics
Abstract: Exploring Deep Learning for Joint Audio-Visual Lip Biometrics: Audio-visual (AV) lip biometrics is a promising authentication technique that leverages the benefits of both the audio and visual modalities in speech communication. Previous works have demonstrated the usefulness of AV lip biometrics. However, the lack of a sizeable AV database hinders the exploration of deep-learning-based audio-visual lip biometrics. To address this problem, we compile a moderate-size database using existing public databases. Meanwhile, we establish the DeepLip AV lip biometrics system realized with a convolutional neural network (CNN) based video module, a time-delay neural network (TDNN) based audio module, and a multimodal fusion module. Our experiments show that DeepLip outperforms traditional speaker recognition models in context modeling and achieves over 50% relative improvements compared with our best single modality baseline, with an equal error rate of 0.75% and 1.11% on the test datasets, respectively.
Introduction
Speaker recognition has rapidly developed in the past decades. Automatic speaker verification (ASV) systems play a crucial role in many applications, such as security access, e-commerce, teleworking and in-car systems. However, there is also increasing concern that the ASV systems are vulnerable to spoofing attacks, acoustically noisy environments, far field, and other complex multifaceted scenarios. In this regard, audio-visual (AV) biometrics <|cite_start|> (Reference: Audio-visual biometrics: Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area) <|cite_end|> <|cite_start|> (Reference: The 2019 nist audio-visual speaker recognition evaluation: In 2019, the U.S. National Institute of Standards and Technology (NIST) conducted the most recent in an ongoing series of speaker recognition evaluations (SRE). There were two components to SRE19: 1) a leaderboard style Challenge using unexposed conversational telephone speech (CTS) data from the Call My Net 2 (CMN2) corpus, and 2) an Audio-Visual (AV) evaluation using video material extracted from the unexposed portions of the Video Annotation for Speech Technologies (VAST) corpus. This paper presents an overview of the Audio-Visual SRE19 activity including the task, the performance metric, data, and the evaluation protocol, results and system performance analyses. The Audio-Visual SRE19 was organized in a similar manner to the audio from video (AfV) track in SRE18, except it offered only the open training condition. In addition, instead of extracting and releasing only the AfV data, unexposed multimedia data from the VAST corpus was used to support the Audio-Visual SRE19. It featured two core evaluation tracks, namely audio only and audio-visual, as well as an optional visual only track. A total of 26 organizations (forming 14 teams) from academia and industry participated in the Audio-Visual SRE19 and submitted 102 valid system outputs. Evaluation results indicate: 1) notable performance improvements for the audio only speaker recognition task on the challenging amateur online video domain due to the use of more complex neural network architectures (e.g., ResNet) along with soft margin losses, 2) state-of-the-art speaker and face recognition technologies provide comparable person recognition performance on the in) <|cite_end|> <|cite_start|> (Reference: VoxCeleb2: Deep Speaker Recognition: The objective of this paper is speaker recognition under noisy and unconstrained conditions. 
We make two key contributions. First, we introduce a very large-scale audio-visual speaker recognition dataset collected from open-source media. Using a fully automated pipeline, we curate VoxCeleb2 which contains over a million utterances from over 6,000 speakers. This is several times larger than any publicly available speaker recognition dataset. Second, we develop and compare Convolutional Neural Network (CNN) models and training strategies that can effectively recognise identities from voice under various conditions. The models trained on the VoxCeleb2 dataset surpass the performance of previous works on a benchmark dataset by a significant margin.) <|cite_end|> can be a viable solution. It is one of the most promising, user-friendly and low-cost biometrics that is resilient to spoofing <|cite_start|> (Reference: Audio-visual biometrics: Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area) <|cite_end|>. The incorporation of additional modalities can alleviate problems of a single modality and improve system performance. So far, AV multimodal techniques have achieved good performance in speech recognition (lipreading) <|cite_start|> (Reference: LipNet: End-to-End Sentence-level Lipreading: Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. 
On the GRID corpus, LipNet achieves 95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016).) <|cite_end|>, speech enhancement <|cite_start|> (Reference: An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation: Speech enhancement and speech separation are two related tasks, whose purpose is to extract either one or more target speech signals, respectively, from a mixture of sounds generated by several sources. Traditionally, these tasks have been tackled using signal processing and machine learning techniques applied to the available acoustic signals. Since the visual aspect of speech is essentially unaffected by the acoustic environment, visual information from the target speakers, such as lip movements and facial expressions, has also been used for speech enhancement and speech separation systems. In order to efficiently fuse acoustic and visual information, researchers have exploited the flexibility of data-driven approaches, specifically deep learning, achieving strong performance. The ceaseless proposal of a large number of techniques to extract features and fuse multimodal information has highlighted the need for an overview that comprehensively describes and discusses audio-visual speech enhancement and separation based on deep learning. In this paper, we provide a systematic survey of this research topic, focusing on the main elements that characterise the systems in the literature: acoustic features; visual features; deep learning methods; fusion techniques; training targets and objective functions. In addition, we review deep-learning-based methods for speech reconstruction from silent videos and audio-visual sound source separation for non-speech signals, since these methods can be more or less directly applied to audio-visual speech enhancement and separation. Finally, we survey commonly employed audio-visual speech datasets, given their central role in the development of data-driven approaches, and evaluation methods, because they are generally used to compare different systems and determine their performance.) <|cite_end|>, speech separation <|cite_start|> (Reference: Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation: We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to "focus" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. 
In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).) <|cite_end|> and emotion recognition <|cite_start|> (Reference: Video multimodal emotion recognition based on Bi-GRU and attention fusion: ) <|cite_end|>.
Among various AV biometrics technologies <|cite_start|> (Reference: {Audio-visual speaker recognition with a cross-modal discriminative network: Audio-visual speaker recognition is one of the tasks in the recent 2019 NIST speaker recognition evaluation (SRE). Studies in neuroscience and computer science all point to the fact that vision and auditory neural signals interact in the cognitive process. This motivated us to study a cross-modal network, namely voice-face discriminative network (VFNet) that establishes the general relation between human voice and face. Experiments show that VFNet provides additional speaker discriminative information. With VFNet, we achieve 16.54% equal error rate relative reduction over the score level fusion audio-visual baseline on evaluation set of 2019 NIST SRE.) <|cite_end|> <|cite_start|> (Reference: {Audio-Visual Deep Neural Network for Robust Person Verification: Voice and face are two most popular biometrics for person verification, usually used in speaker verification and face verification tasks. It has already been observed that simply combining the information from these two modalities can lead to a more powerful and robust person verification system. In this article, to fully explore the multi-modal learning strategies for person verification, we proposed three types of audio-visual deep neural network (AVN), including feature level AVN (AVN-F), embedding level AVN (AVN-E), and embedding level combination with joint learning AVN (AVN-J). To further enhance the system robustness in real noisy conditions where not both modalities can be accessed with high-quality, we proposed several data augmentation strategies for each proposed AVN: A feature-level multi-modal data augmentation is proposed for AVN-F and an embedding-level data augmentation with novel noise distribution matching is designed for AVN-E. For AVN-J, both the feature and embedding level multi-modal data augmentation methods can be applied. All the proposed models are trained on the VoxCeleb2 dev dataset and evaluated on the standard VoxCeleb1 dataset, and the best system achieves 0.558, 0.441% and 0.793% EER on the three official trial lists of VoxCeleb1, which is to our knowledge the best published single system results on this corpus for person verification. To validate the robustness of the proposed approaches, a noisy evaluation set based on the VoxCeleb1 is constructed, and experimental results show that the proposed system can significantly boost the system robustness and still show promising performance under this noisy scenario.) <|cite_end|> <|cite_start|> (Reference: Lip Reading-Based User Authentication Through Acoustic Sensing on
Smartphones: To prevent users’ privacy from leakage, more and more mobile devices employ biometric-based authentication approaches, such as fingerprint, face recognition, voiceprint authentications, and so on, to enhance the privacy protection. However, these approaches are vulnerable to replay attacks. Although the state-of-art solutions utilize liveness verification to combat the attacks, existing approaches are sensitive to ambient environments, such as ambient lights and surrounding audible noises. Toward this end, we explore liveness verification of user authentication leveraging users’ mouth movements, which are robust to noisy environments. In this paper, we propose a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users’ speaking mouths through acoustic sensing on smartphones for user authentication. We first investigate Doppler profiles of acoustic signals caused by users’ speaking mouths and find that there are unique mouth movement patterns for different individuals. To characterize the mouth movements, we propose a deep learning-based method to extract efficient features from Doppler profiles and employ softmax function, support vector machine, and support vector domain description to construct multi-class identifier, binary classifiers, and spoofer detectors for mouth state identification, user identification, and spoofer detection, respectively. Afterward, we develop a balanced binary tree-based authentication approach to accurately identify each individual leveraging these binary classifiers and spoofer detectors with respect to registered users. Through extensive experiments involving 48 volunteers in four real environments, LipPass can achieve 90.2% accuracy in user identification and 93.1% accuracy in spoofer detection.) <|cite_end|> <|cite_start|> (Reference: LVID: A multimodal biometrics authentication system on smartphones: Voice authentication is becoming increasingly popular, which offers potential benefits over knowledge and possession based authentication methods. Meanwhile, the unique features of lip movements during speaking have been proved to be useful for authentication. However, the unimodal biometric authentication systems based on either voice or lip movements have certain limitations. Voice authentication systems are prone to spoofing attacks and suffer from serious performance degradation in noisy environments. Lip movements authentication systems are unstable and are sensitive to the user’s physical and psychological conditions. In this paper, we propose and implement LVID, a multimodal biometrics authentication system on smartphones, which resolves the defects of the original systems by combining the advantages of lip movements and voice. LVID simultaneously captures these two biometrics with the built-in audio devices on smartphones and fuses them at the data level. The reliable and effective features are then extracted from the fused data for authentication. LVID is practical as it requires neither cumbersome operations nor additional hardwares but only a speaker and a microphone that are commonly available on smartphones. Our experimental results with 104 participants show that LVID can achieve 95% accuracy for user authentication, and 93.47% of the attacks can be detected. It is also verified that LVID works well with different smartphones and is robust to different smartphone positions.) 
<|cite_end|>, AV lip biometrics can be a foreseeably beneficial approach in future authentication systems. First, AV lip biometrics focuses on the mouth region-of-interest (ROI). The mouth ROI is tightly correlated to speech production since the lips, tongue, teeth and oral cavity are integral components of the articulator <|cite_start|> (Reference: Visual speech and coarticulation effects: The state of the art of a computer animation program showing realistic movements of an abstracted speaker's face is presented. For this purpose, video tapes with prototypic speakers have been recorded and analyzed in order to investigate the fundamental correlation between phonetic sequences of given German texts and the corresponding visual movements of articulation. Considering coarticulation effects, a 2-D-motion model was set up on a commercial PC using a set of 38 key pictures and calculating interim frames. The coordinated movements of the visible speech synthesis cover the lips, the teeth, and the tip of the tongue. The possible text input is based on an open vocabulary. The program is designed to be a training aid for lip reading for hearing-impaired people.<<ETX>>) <|cite_end|>. Therefore, AV lip biometrics can be expected to extract correlated and complementary speaker characteristics between visual and audio modalities. Second, evidence from lipreading research <|cite_start|> (Reference: Improving Speaker-Independent Lipreading with Domain-Adversarial Training: We present a Lipreading system, i.e. a speech recognition system using only visual features, which uses domain-adversarial training for speaker independence. Domain-adversarial training is integrated into the optimization of a lipreader based on a stack of feedforward and LSTM (Long Short-Term Memory) recurrent neural networks, yielding an end-to-end trainable system which only requires a very small number of frames of untranscribed target data to substantially improve the recognition accuracy on the target speaker. On pairs of different source and target speakers, we achieve a relative accuracy improvement of around 40% with only 15 to 20 seconds of untranscribed target speech data. On multi-speaker training setups, the accuracy improvements are smaller but still substantial.) <|cite_end|> <|cite_start|> (Reference: Improved Speaker Independent Lip Reading Using Speaker Adaptive Training and Deep Neural Networks: Recent improvements in tracking and feature extraction mean that speaker-dependent lip-reading of continuous speech using a medium size vocabulary (around 1000 words) is realistic. However, the recognition of previously unseen speakers has been found to be a very challenging task, because of the large variation in lip-shapes across speakers and the lack of large, tracked databases of visual features, which are very expensive to produce. By adapting a technique that is established in speech recognition but has not previously been used in lip-reading, we show that error-rates for speaker-independent lip-reading can be very significantly reduced. Furthermore, we show that error-rates can be even further reduced by the additional use of Deep Neural Networks (DNN). We also find that there is no need to map phonemes to visemes for context-dependent visual speech transcription.) <|cite_end|> indicates that variation in the speakers identity causes performance degradation, revealing that lip sequences reflect substantial speaker characteristics.
Since AV lip biometrics has been rarely investigated so far, relevant references need to be investigated by analogy from AV biometrics, lipreading, and speaker recognition. For audio modality, residual neural network (ResNet) <|cite_start|> (Reference: {Far-Field End-to-End Text-Dependent Speaker Verification Based on Mixed Training Data with Transfer Learning and Enrollment Data Augmentation: In this paper, we focus on the far-field end-to-end text-dependent speaker verification task with a small-scale far-field text dependent dataset and a large scale close-talking text in-dependent database for training. First, we show that simulating far-field text independent data from the existing large-scale clean database for data augmentation can reduce the mismatch. Second, using a small far-field text dependent data set to fine-tune the deep speaker embedding model pre-trained from the simulated far-field as well as original clean text independent data can significantly improve the system performance. Third, in special applications when using the close-talking clean utterances for enrollment and employing the real far-field noisy utterances for testing, adding reverberant noises on the clean enrollment data can further enhance the system performance. We evaluate our methods on AISHELL ASR0009 and AISHELL 2019B-eval databases and achieve an equal error rate (EER) of 5.75% for far-field text-dependent speaker verification under noisy environments.) <|cite_end|> and time-delay neural network (TDNN) systems <|cite_start|> (Reference: {Deep neural network embeddings for text-independent speaker verification: This paper investigates replacing i-vectors for text-independent speaker verification with embeddings extracted from a feed-forward deep neural network. Long-term speaker characteristics are captured in the network by a temporal pooling layer that aggregates over the input speech. This enables the network to be trained to discriminate between speakers from variable-length speech segments. After training, utterances are mapped directly to fixed-dimensional speaker embeddings and pairs of embeddings are scored using a PLDA-based backend. We compare performance with a traditional i-vector baseline on NIST SRE 2010 and 2016. We find that the embeddings outperform i-vectors for short speech segments and are competitive on long duration test conditions. Moreover, the two representations are complementary, and their fusion improves on the baseline at all operating points. Similar systems have recently shown promising results when trained on very large proprietary datasets, but to the best of our knowledge, these are the best results reported for speaker-discriminative neural networks when trained and tested on publicly available corpora.) <|cite_end|> have achieved remarkable performance for extracting deep speaker embedding.
Since speech is a time series, 1D convolution-based TDNN better captures long-term temporal dependencies of speech signals than 2D convolution-based ResNet <|cite_start|> (Reference: ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification: Current speaker verification techniques rely on a neural network to extract speaker representations. The successful x-vector architecture is a Time Delay Neural Network (TDNN) that applies statistics pooling to project variable-length utterances into fixed-length speaker characterizing embeddings. In this paper, we propose multiple enhancements to this architecture based on recent trends in the related fields of face verification and computer vision. Firstly, the initial frame layers can be restructured into 1-dimensional Res2Net modules with impactful skip connections. Similarly to SE-ResNet, we introduce Squeeze-and-Excitation blocks in these modules to explicitly model channel interdependencies. The SE block expands the temporal context of the frame layer by rescaling the channels according to global properties of the recording. Secondly, neural networks are known to learn hierarchical features, with each layer operating on a different level of complexity. To leverage this complementary information, we aggregate and propagate features of different hierarchical levels. Finally, we improve the statistics pooling module with channel-dependent frame attention. This enables the network to focus on different subsets of frames during each of the channel's statistics estimation. The proposed ECAPA-TDNN architecture significantly outperforms state-of-the-art TDNN based systems on the VoxCeleb test sets and the 2019 VoxCeleb Speaker Recognition Challenge.) <|cite_end|>. Considering a balance between high performance and light weight, the extended TDNN (E-TDNN) <|cite_start|> (Reference: Speaker recognition for multi-speaker conversations using x-vectors: Recently, deep neural networks that map utterances to fixed-dimensional embeddings have emerged as the state-of-the-art in speaker recognition. Our prior work introduced x-vectors, an embedding that is very effective for both speaker recognition and diarization. This paper combines our previous work and applies it to the problem of speaker recognition on multi-speaker conversations. We measure performance on Speakers in the Wild and report what we believe are the best published error rates on this dataset. Moreover, we find that diarization substantially reduces error rate when there are multiple speakers, while maintaining excellent performance on single-speaker recordings. Finally, we introduce an easily implemented method to remove the domain-sensitive threshold typically used in the clustering stage of a diarization system. The proposed method is more robust to domain shifts, and achieves similar results to those obtained using a well-tuned threshold.) <|cite_end|>, is a satisfactory framework for extracting audio-only speaker embedding.
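To make the audio branch concrete, the following is a minimal, illustrative TDNN-style embedding extractor in PyTorch. It only sketches the general x-vector/E-TDNN idea (1D convolutions over time followed by statistics pooling and a bottleneck layer); the 40-dimensional input features, layer widths, kernel sizes, and embedding dimension are assumptions for illustration and do not reproduce the exact E-TDNN configuration referred to here.

```python
import torch
import torch.nn as nn

class SimpleTDNN(nn.Module):
    """Illustrative TDNN-style speaker embedding extractor (not the exact E-TDNN).

    Input:  (batch, feat_dim, frames) acoustic features, e.g. 40-dim filterbanks.
    Output: (batch, emb_dim) fixed-length speaker embedding.
    """
    def __init__(self, feat_dim=40, emb_dim=256):
        super().__init__()
        # 1D convolutions over time play the role of time-delay layers.
        self.frame_layers = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(), nn.BatchNorm1d(1500),
        )
        # Statistics pooling (mean + std over time) yields a fixed-length vector.
        self.segment_layer = nn.Linear(2 * 1500, emb_dim)

    def forward(self, x):
        h = self.frame_layers(x)                                # (batch, 1500, frames')
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        return self.segment_layer(stats)                        # speaker embedding

# Example: two utterances of 40-dim features, 300 frames each (~3 s at 100 frames/s).
emb = SimpleTDNN()(torch.randn(2, 40, 300))
print(emb.shape)  # torch.Size([2, 256])
```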
For visual modality, conventional visual speech feature representation include appearance-based and shape-based features <|cite_start|> (Reference: Audio-visual biometrics: Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area) <|cite_end|>, which utilize lip geometry, parametric, or statistical models <|cite_start|> (Reference: Biometric identification system by lip shape: Biometrics systems based on lip shape recognition are of great interest, but have received little attention in the scientific literature. This is perhaps due to the belief that they have little discriminative power. However, a careful study shows that the difference between lip outlines is greater than that between shapes at different lip images of the same person. So, biometric identification by lip outline is possible. In this paper the lip outline is obtained from a color face picture: the color image is transformed to the gray scale using the transformation of Chang et al. (1994) and binarized with the Ridler and Calvar threshold. Considering the lip centroid as the origin of coordinates, each pixel lip envelope is parameterized with polar (ordered from -/spl pi/ to +/spl pi/) and Cartesian coordinates (ordered as heights and widths). To asses identity, a multilabeled multiparameter hidden Markov model is used with the polar coordinates and a multilayer neural network is applied to Cartesian coordinates. With a database of 50 people an average classification hit ratio of 96.9% and equal error ratio (EER) of 0.015 are obtained.) <|cite_end|>. Traditional lip biometrics systems usually employed delicate manual features with a shallow statistics back-end model, e.g., Gaussian mixture model (GMM) and hidden Markov model (HMM) <|cite_start|> (Reference: Speaker identification by lipreading: This paper describes a new approach for speaker identification based on lipreading. Visual features are extracted from image sequences of the talking face and consist of shape parameters which describe the lip boundary and intensity parameters which describe the grey-level distribution of the mouth area. Intensity information is based on principal component analysis using eigenspaces which deform with the shape model. The extracted parameters account for both, speech dependent and speaker dependent information. We built spatio-temporal speaker models based on these features, using HMMs with mixtures of Gaussians. 
Promising results were obtained for text dependent and text independent speaker identification tests performed on a small video database.) <|cite_end|>. These approaches attain acceptable performance on small datasets, but the accuracy is still far from adequate for practical applications. Deep learning has outperformed traditional machine learning methods in most tasks. However, there is no deep-learning based AV lip biometrics, mainly because of the dataset constraint. With extensive data, we can establish the deep-learning based video-only system similar to those for lipreading. The usual lipreading framework uses a convolutional neural network (CNN) for front-end visual feature extraction and a recurrent neural network (RNN) for back-end model training. The deep learning method usually employs raw features (raw lip images) instead of the above manual features. Fully 2D convolution, fully 3D convolution <|cite_start|> (Reference: LipNet: End-to-End Sentence-level Lipreading: Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016).) <|cite_end|> and a mixture of 2D and 3D convolution <|cite_start|> (Reference: Deep word embeddings for visual speech recognition: In this paper we present a deep learning architecture for extracting word embeddings for visual speech recognition. The embeddings summarize the information of the mouth region that is relevant to the problem of word recognition, while suppressing other types of variability such as speaker, pose and illumination. The system is comprised of a spatiotemporal convolutional layer, a Residual Network and bidirectional LSTMs and is trained on the Lipreading in-the-wild database. We first show that the proposed architecture goes beyond state-of-the-art on closed-set word identification, by attaining 11.92% error rate on a vocabulary of 500 words. We then examine the capacity of the embeddings in modelling words unseen during training. We deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings and perform low-shot learning experiments on words unseen during training. The experiments demonstrate that word-level visual speech recognition is feasible even in cases where the target words are not included in the training set.) 
<|cite_end|> <|cite_start|> (Reference: Lipreading with DenseNet and resBi-LSTM: ) <|cite_end|> were compared in <|cite_start|> (Reference: Lip Reading in the Wild: ) <|cite_end|>, and it was found that the latter can extract more discriminative deep features than 2D or 3D structures alone. Long short-term memory (LSTM) <|cite_start|> (Reference: End-To-End Visual Speech Recognition With LSTMs: Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long-Short Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification and also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the base line is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.) <|cite_end|> <|cite_start|> (Reference: Zero-shot keyword spotting for visual speech recognition in-the-wild: Visual keyword spotting (KWS) is the problem of estimating whether a text query occurs in a given recording using only video information. This paper focuses on visual KWS for words unseen during training, a real-world, practical setting which so far has received no attention by the community. To this end, we devise an end-to-end architecture comprising (a) a state-of-the-art visual feature extractor based on spatiotemporal Residual Networks, (b) a grapheme-to-phoneme model based on sequence-to-sequence neural networks, and (c) a stack of recurrent neural networks which learn how to correlate visual features with the keyword representation. Different to prior works on KWS, which try to learn word representations merely from sequences of graphemes (i.e. letters), we propose the use of a grapheme-to-phoneme encoder-decoder model which learns how to map words to their pronunciation. We demonstrate that our system obtains very promising visual-only KWS results on the challenging LRS2 database, for keywords unseen during training. We also show that our system outperforms a baseline which addresses KWS via automatic speech recognition (ASR), while it drastically improves over other recently proposed ASR-free KWS methods.) <|cite_end|> , gated recurrent unit (GRU) <|cite_start|> (Reference: End-to-end Audiovisual Speech Recognition: Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). 
To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.) <|cite_end|> and temporal convolutional neural network (TCN) <|cite_start|> (Reference: Towards Practical Lipreading with Distilled and Efficient Models: Lipreading has witnessed a lot of progress due to the resurgence of neural networks. Recent works have placed emphasis on aspects such as improving performance by finding the optimal architecture or improving generalization. However, there is still a significant gap between the current methodologies and the requirements for an effective deployment of lipreading in practical scenarios. In this work, we propose a series of innovations that significantly bridge that gap: first, we raise the state-of-the-art performance by a wide margin on LRW and LRW-1000 to 88.5% and 46.6%, respectively using self-distillation. Secondly, we propose a series of architectural changes, including a novel Depthwise Separable Temporal Convolutional Network (DS-TCN) head, that slashes the computational cost to a fraction of the (already quite efficient) original model. Thirdly, we show that knowledge distillation is a very effective tool for recovering performance of the lightweight models. This results in a range of models with different accuracy-efficiency trade-offs. However, our most promising lightweight models are on par with the current state-of-the-art while showing a reduction of 8.2x and 3.9x in terms of computational cost and number of parameters, respectively, which we hope will enable the deployment of lipreading models in practical applications.) <|cite_end|> <|cite_start|> (Reference: Lipreading using Temporal Convolutional Networks: Lip-reading has attracted a lot of research attention lately thanks to advances in deep learning. The current state-of-the-art model for recognition of isolated words in-the-wild consists of a residual network and Bidirectional Gated Recurrent Unit (BGRU) layers. In this work, we address the limitations of this model and we propose changes which further improve its performance. Firstly, the BGRU layers are replaced with Temporal Convolutional Networks (TCN). Secondly, we greatly simplify the training procedure, which allows us to train the model in one single stage. Thirdly, we show that the current state-of-the-art methodology produces models that do not generalize well to variations on the sequence length, and we addresses this issue by proposing a variable-length augmentation. We present results on the largest publicly-available datasets for isolated word recognition in English and Mandarin, LRW and LRW1000, respectively. Our proposed model results in an absolute improvement of 1.2% and 3.2%, respectively, in these datasets which is the new state-of-the-art performance.) 
<|cite_end|> models were designed to capture the temporal dynamics of the sequence. The TCN combines ideas from RNN and CNN architectures for sequence modeling, and it surpasses state-of-the-art lipreading systems in both training speed and recognition performance.
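To illustrate the kind of visual front-end discussed above (a spatio-temporal 3D stem, a per-frame 2D trunk, and a temporal-convolution head), the following is a minimal PyTorch sketch. The channel counts, kernel sizes, 29-frame clip length, and 88x88 mouth-crop resolution are assumptions chosen for illustration; this is not the exact architecture of any of the cited systems or of the DeepLip video module.

```python
import torch
import torch.nn as nn

class LipFrontend(nn.Module):
    """Illustrative lipreading-style front-end: 3D stem + 2D trunk + temporal conv.

    Input:  (batch, 1, frames, H, W) grayscale mouth-ROI crops.
    Output: (batch, emb_dim) clip-level embedding.
    """
    def __init__(self, emb_dim=256):
        super().__init__()
        # Spatio-temporal stem: the "3D" part of the mixed 2D+3D design.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Per-frame 2D feature extractor (a small stand-in for a ResNet trunk).
        self.trunk = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Dilated 1D convolutions model temporal dynamics (TCN-like head).
        self.temporal = nn.Sequential(
            nn.Conv1d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        )
        self.fc = nn.Linear(256, emb_dim)

    def forward(self, x):
        b = x.size(0)
        h = self.stem(x)                                   # (b, 32, T, H', W')
        t = h.size(2)
        h = h.transpose(1, 2).reshape(b * t, 32, h.size(3), h.size(4))
        h = self.trunk(h).view(b, t, 128).transpose(1, 2)  # (b, 128, T)
        h = self.temporal(h).mean(dim=2)                   # average over time
        return self.fc(h)

emb = LipFrontend()(torch.randn(2, 1, 29, 88, 88))  # two 29-frame mouth clips
print(emb.shape)  # torch.Size([2, 256])
```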
In this paper, our contribution mainly focuses on the following aspects. Firstly, deep learning-based AV lip biometrics is limited due to the lack of adequate AV speaker data. We establish an audio-visual speaker database using public datasets and a deep-learning based baseline called DeepLip, which aggregates two well-performed systems in lipreading (visual-only) and speaker recognition (audio-only). Secondly, we show the effectiveness of applying deep learning to extract deep lip features, doing away laborious feature engineering. Thirdly, the fusion of speech and visual speech preliminarily proves that correlated and complementary information between visual speech and audible speech can be well utilized in person authentication. For multimodal fusion, <|cite_start|> (Reference: Audio-visual biometrics: Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area) <|cite_end|> <|cite_start|> (Reference: An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation: Speech enhancement and speech separation are two related tasks, whose purpose is to extract either one or more target speech signals, respectively, from a mixture of sounds generated by several sources. Traditionally, these tasks have been tackled using signal processing and machine learning techniques applied to the available acoustic signals. Since the visual aspect of speech is essentially unaffected by the acoustic environment, visual information from the target speakers, such as lip movements and facial expressions, has also been used for speech enhancement and speech separation systems. In order to efficiently fuse acoustic and visual information, researchers have exploited the flexibility of data-driven approaches, specifically deep learning, achieving strong performance. The ceaseless proposal of a large number of techniques to extract features and fuse multimodal information has highlighted the need for an overview that comprehensively describes and discusses audio-visual speech enhancement and separation based on deep learning. In this paper, we provide a systematic survey of this research topic, focusing on the main elements that characterise the systems in the literature: acoustic features; visual features; deep learning methods; fusion techniques; training targets and objective functions. 
In addition, we review deep-learning-based methods for speech reconstruction from silent videos and audio-visual sound source separation for non-speech signals, since these methods can be more or less directly applied to audio-visual speech enhancement and separation. Finally, we survey commonly employed audio-visual speech datasets, given their central role in the development of data-driven approaches, and evaluation methods, because they are generally used to compare different systems and determine their performance.) <|cite_end|> <|cite_start|> (Reference: Multimodal Intelligence: Representation Learning, Information Fusion, and Applications: Deep learning methods have revolutionized speech recognition, image recognition, and natural language processing since 2010. Each of these tasks involves a single modality in their input signals. However, many applications in the artificial intelligence field involve multiple modalities. Therefore, it is of broad interest to study the more difficult and complex problem of modeling and learning across multiple modalities. In this paper, we provide a technical review of available models and learning methods for multimodal intelligence. The main focus of this review is the combination of vision and natural language modalities, which has become an important topic in both the computer vision and natural language processing research communities. This review provides a comprehensive analysis of recent works on multimodal deep learning from three perspectives: learning multimodal representations, fusing multimodal signals at various levels, and multimodal applications. Regarding multimodal representation learning, we review the key concepts of embedding, which unify multimodal signals into a single vector space and thereby enable cross-modality signal processing. We also review the properties of many types of embeddings that are constructed and learned for general downstream tasks. Regarding multimodal fusion, this review focuses on special architectures for the integration of representations of unimodal signals for a particular task. Regarding applications, selected areas of a broad interest in the current literature are covered, including image-to-text caption generation, text-to-image generation, and visual question answering. We believe that this review will facilitate future studies in the emerging field of multimodal intelligence for related communities.) <|cite_end|> have discussed common approaches to realize the fusion of different modalities. The speaker embedding in AV lip biometrics consists of audio-only and video-only speaker embedding, which is an enhancement compared to the original single modality. <|paper_end|> | [
"<|reference_start|> Audio-visual biometrics: Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area <|reference_end|>",
"<|reference_start|> Improved Speaker Independent Lip Reading Using Speaker Adaptive Training and Deep Neural Networks: Recent improvements in tracking and feature extraction mean that speaker-dependent lip-reading of continuous speech using a medium size vocabulary (around 1000 words) is realistic. However, the recognition of previously unseen speakers has been found to be a very challenging task, because of the large variation in lip-shapes across speakers and the lack of large, tracked databases of visual features, which are very expensive to produce. By adapting a technique that is established in speech recognition but has not previously been used in lip-reading, we show that error-rates for speaker-independent lip-reading can be very significantly reduced. Furthermore, we show that error-rates can be even further reduced by the additional use of Deep Neural Networks (DNN). We also find that there is no need to map phonemes to visemes for context-dependent visual speech transcription. <|reference_end|>",
"<|reference_start|> Speaker identification by lipreading: This paper describes a new approach for speaker identification based on lipreading. Visual features are extracted from image sequences of the talking face and consist of shape parameters which describe the lip boundary and intensity parameters which describe the grey-level distribution of the mouth area. Intensity information is based on principal component analysis using eigenspaces which deform with the shape model. The extracted parameters account for both, speech dependent and speaker dependent information. We built spatio-temporal speaker models based on these features, using HMMs with mixtures of Gaussians. Promising results were obtained for text dependent and text independent speaker identification tests performed on a small video database. <|reference_end|>",
"<|reference_start|> Lipreading with DenseNet and resBi-LSTM: <|reference_end|>"
] | [
0,
14,
21,
24
] | {"<|multi_cite_1_1|>": "ss-1557058", "<|multi_cite_1_2|>": "ss-950895", "<|multi_cite_1_3|>": "arxiv-162541", "<|cite_2|>": "ss-1557058", "<|cite_3|>": "arxiv-109411", "<|cite_4|>": "arxiv-285865", "<|cite_5|>": "arxiv-154470", "<|cite_6|>": "ss-2146306", "<|multi_cite_7_1|>": "ss-1473140", "<|multi_cite_7_2|>": "ss-1473139", "<|multi_cite_7_3|>": "ss-1749163", "<|multi_cite_7_4|>": "ss-1457142", "<|cite_8|>": "ss-2146307", "<|multi_cite_9_1|>": "arxiv-131197", "<|multi_cite_9_2|>": "ss-847711", "<|cite_10|>": "ss-703162", "<|cite_11|>": "ss-895607", "<|cite_12|>": "arxiv-265520", "<|cite_13|>": "ss-1519604", "<|cite_14|>": "ss-1557058", "<|cite_15|>": "ss-2146308", "<|cite_16|>": "ss-2146309", "<|cite_17|>": "arxiv-109411", "<|multi_cite_18_1|>": "arxiv-138647", "<|multi_cite_18_2|>": "ss-830077", "<|cite_19|>": "ss-1516617", "<|multi_cite_20_1|>": "arxiv-114790", "<|multi_cite_20_2|>": "arxiv-166806", "<|cite_21|>": "arxiv-148775", "<|multi_cite_22_1|>": "arxiv-278123", "<|multi_cite_22_2|>": "arxiv-244661", "<|multi_cite_23_1|>": "ss-1557058", "<|multi_cite_23_2|>": "arxiv-285865", "<|multi_cite_23_3|>": "arxiv-233395"} |
2001.07585 | <|paper_start|> Title: Proactive Certificate Validation for VANETs
Abstract: Proactive Certificate Validation for VANETs: Security and privacy in Vehicular Ad-hoc Networks (VANETs) mandate the use of short-lived credentials (pseudonyms) and cryptographic key pairs. This implies significant computational overhead for vehicles, which often need to validate numerous such pseudonyms within a short period. To alleviate such a bottleneck, which could even place vehicle safety at risk, we propose a proactive pseudonym validation approach based on Bloom Filters (BFs). We show that our scheme could free computational resources for other (safety- and time-critical) operations, at reasonable communication overhead and without compromising security and privacy.
Introduction
\ac{V2V} communication improves road safety and traffic efficiency with safety beacons, broadcast at a high rate to provide cooperative awareness. Short-term credentials, i.e., pseudonyms obtained from the \ac{VPKI} through protocols as in, e.g., <|cite_start|> (Reference: 2011 IEEE Vehicular Networking Conference, IEEE VNC 2011, Amsterdam, The Netherlands, November 14-16, 2011: ) <|cite_end|> <|cite_start|> (Reference: 2011 IEEE Vehicular Networking Conference, IEEE VNC 2011, Amsterdam, The Netherlands, November 14-16, 2011: ) <|cite_end|>, are used for message (beacon) authentication and integrity while protecting user privacy. Pseudonyms with overlapping or non-overlapping lifetimes can be preloaded for a long period (e.g., 1 year) or requested on demand (e.g., on a daily basis). In a multi-domain \ac{VC} system, pseudonyms in a domain are generally issued by the \ac{PCA} dedicated to that domain, and a vehicle that wishes to enter another (foreign) domain should request pseudonyms from the corresponding \ac{PCA}. For ease of explanation, we assume in the rest of the paper that domains are separated geographically.
Pseudonyms are changed over time for message unlinkablity. Due to mobility of vehicles, the neighborhood of a vehicle can be volatile, thus, having new pseudonyms received practically continuously. The challenge is that all such digitally signed new pseudonyms must be validated in order to verify messages. Certificate omission <|cite_start|> (Reference: On the Performance of Secure Vehicular Communication Systems: Vehicular communication (VC) systems are being developed primarily to enhance transportation safety and efficiency. Vehicle-to-vehicle communication, in particular, frequent cooperative awareness messages or safety beacons, has been considered over the past years as a main approach. Meanwhile, the need to provide security and to safeguard users' privacy is well understood, and security architectures for VC systems have been proposed. Although technical approaches to secure VC have several commonalities and a consensus has formed, there are critical questions that have remained largely unanswered: Are the proposed security and privacy schemes practical? Can the secured VC systems support the VC-enabled applications as effectively as unsecured VC would? How should security be designed so that its integration into a VC system has a limited effect on the system's performance? In this paper, we provide answers to these questions, investigating the joint effect of a set of system parameters and components. We consider the state-of-the-art approach in secure VC, and we evaluate analytically and through simulations the interdependencies among components and system characteristics. Overall, we identify key design choices for the deployment of efficient, effective, and secure VC systems.) <|cite_end|>, and optimistic or probabilistic message validations <|cite_start|> (Reference: Scaling VANET security through cooperative message verification: VANET security introduces significant processing overhead for resource-constrained On-Board Units (OBUs). Here, we propose a novel scheme that allows secure Vehicular Communication (VC) systems to scale well beyond network densities for which existing optimization approaches could be workable, without compromising security (and privacy).) <|cite_end|> <|cite_start|> (Reference: Wireless channel-based message authentication: Inter-vehicle communication has attracted a lot of attention in the past. A major concern is the security and especially the integrity and authenticity of messages. Current standards and proposals in literature leverage asymmetric cryptographic mechanisms to achieve this, which is costly both in terms of consumed computational power, bandwidth, and introduced delay. We present a novel idea to use physical characteristics of the wireless channel to verify subsequent messages after initial trust in a first message has been established cryptographically. In this paper, we sketch the concept and provide a first evaluation on its potential for saving named resources.) <|cite_end|> <|cite_start|> (Reference: PBA: Prediction-Based Authentication for Vehicle-to-Vehicle Communications: In vehicular networks, broadcast communications are critically important, as many safety-related applications rely on single-hop beacon messages broadcast to neighbor vehicles. However, it becomes a challenging problem to design a broadcast authentication scheme for secure vehicle-to-vehicle communications. 
Especially when a large number of beacons arrive in a short time, vehicles are vulnerable to computation-based Denial of Service (DoS) attacks that excessive signature verification exhausts their computational resources. In this paper, we propose an efficient broadcast authentication scheme called Prediction-Based Authentication (PBA) to not only defend against computation-based DoS attacks, but also resist packet losses caused by high mobility of vehicles. In contrast to most existing authentication schemes, our PBA is an efficient and lightweight scheme since it is primarily built on symmetric cryptography. To further reduce the verification delay for some emergency applications, PBA is designed to exploit the sender vehicle's ability to predict future beacons in advance. In addition, to prevent memory-based DoS attacks, PBA only stores shortened re-keyed Message Authentication Codes (MACs) of signatures without decreasing security. We analyze the security of our scheme and simulate PBA under varying vehicular network scenarios. The results demonstrate that PBA fast verifies almost 99 percent messages with low storage cost not only in high-density traffic environments but also in lossy wireless environments.) <|cite_end|> have been proposed, but they do not reduce pseudonym validation overhead. In some situations, a vehicle could receive a very large number of new pseudonyms within a short period (e.g., around a mix-zone <|cite_start|> (Reference: {Mix-zones for Location Privacy in Vehicular Networks: Vehicular Networks (VNs) seek to provide, among other applications, safer driving conditions. To do so, vehicles need to periodically broadcast safety messages providing preciseposition information ...) <|cite_end|>, where all vehicles would change their pseudonyms).
We propose a \acf{BF}-based pseudonym validation scheme. Instead of verifying the \ac{PCA} signature for each and every pseudonym, the pseudonyms are validated through a \ac{BF} published by the \ac{PCA}, which includes all pseudonyms valid within a protocol-selectable period. Once the \ac{BF} is verified and stored, a vehicle can efficiently validate the pseudonyms based on cheap hash computations with a reasonably low false positive rate.
We require that all pseudonyms still be signed by the \ac{PCA} and that messages be signed under the pseudonyms. This ensures that a fallback approach (i.e., \ac{PCA} signature verification on each and every pseudonym) can be invoked when suspicious behavior is detected. We show that our scheme could reduce computational overhead. Although an attacker could launch a brute-force attack targeting the false positive rate of the \ac{BF} (attempting to inject messages signed under fictitious pseudonyms), we show that such an attack is expensive and would cause only minimal harm to the system.
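To make the validation flow concrete, the following is a minimal Python sketch of the idea, assuming a simple Bloom filter whose bit positions are derived from SHA-256 digests; the class name, parameters, and pseudonym encoding are illustrative assumptions and not the scheme's actual implementation.
\begin{verbatim}
import hashlib

class PseudonymBloomFilter:
    def __init__(self, num_bits=1 << 20, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, pseudonym: bytes):
        # Derive k bit positions from SHA-256 digests, one per hash index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(4, "big") + pseudonym).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, pseudonym: bytes):
        # PCA side: insert every pseudonym valid within the current period.
        for pos in self._positions(pseudonym):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, pseudonym: bytes) -> bool:
        # Vehicle side: cheap hash lookups instead of a signature verification.
        return all((self.bits[pos // 8] >> (pos % 8)) & 1
                   for pos in self._positions(pseudonym))

# The PCA builds, signs, and publishes the filter once per validity period.
pca_filter = PseudonymBloomFilter()
pca_filter.add(b"pseudonym-certificate-0001")

# A vehicle verifies the PCA signature on the filter once, then validates each
# received pseudonym by a membership test; a (rare) positive answer can be a
# false positive, so suspicious behavior still triggers the signature fallback.
print(pca_filter.probably_contains(b"pseudonym-certificate-0001"))  # True
print(pca_filter.probably_contains(b"fictitious-pseudonym"))        # almost surely False
\end{verbatim}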
In the rest of the paper, we describe the adversary model (Sec.~\ref{sec:model}), present our pseudonym validation scheme inspired by <|cite_start|> (Reference: Multi-user broadcast authentication in wireless sensor networks: Broadcast authentication is a critical security service in wireless sensor networks (WSNs), as it allows the mobile users of WSNs to broadcast messages to multiple sensor nodes in a secure way. Although symmetric-key- based solutions such as muTESLA and multilevel muTESLA have been proposed, they all suffer from severe energy- depletion attacks resulted from the nature of delayed message authentication. This paper presents several efficient public-key-based schemes to achieve immediate broadcast authentication and thus avoid the security vulnerability intrinsic to muTESLA-like schemes. Our schemes are built upon the unique integration of several cryptographic techniques, including the Bloom filter, the partial message recovery signature scheme and the Merkle hash tree. We prove the effectiveness and efficiency of the proposed schemes by a comprehensive quantitative analysis of their energy consumption in both computation and communication.) <|cite_end|> (Sec.~\ref{sec:scheme}), provide a security and privacy analysis (Sec.~\ref{sec:analysis}), and a preliminary evaluation of our scheme (Sec.~\ref{sec:evaluation}) before some concluding remarks (Sec.~\ref{sec:conclusion}). <|paper_end|> | [
"<|reference_start|> 2011 IEEE Vehicular Networking Conference, IEEE VNC 2011, Amsterdam, The Netherlands, November 14-16, 2011: <|reference_end|>",
"<|reference_start|> On the Performance of Secure Vehicular Communication Systems: Vehicular communication (VC) systems are being developed primarily to enhance transportation safety and efficiency. Vehicle-to-vehicle communication, in particular, frequent cooperative awareness messages or safety beacons, has been considered over the past years as a main approach. Meanwhile, the need to provide security and to safeguard users' privacy is well understood, and security architectures for VC systems have been proposed. Although technical approaches to secure VC have several commonalities and a consensus has formed, there are critical questions that have remained largely unanswered: Are the proposed security and privacy schemes practical? Can the secured VC systems support the VC-enabled applications as effectively as unsecured VC would? How should security be designed so that its integration into a VC system has a limited effect on the system's performance? In this paper, we provide answers to these questions, investigating the joint effect of a set of system parameters and components. We consider the state-of-the-art approach in secure VC, and we evaluate analytically and through simulations the interdependencies among components and system characteristics. Overall, we identify key design choices for the deployment of efficient, effective, and secure VC systems. <|reference_end|>",
"<|reference_start|> Wireless channel-based message authentication: Inter-vehicle communication has attracted a lot of attention in the past. A major concern is the security and especially the integrity and authenticity of messages. Current standards and proposals in literature leverage asymmetric cryptographic mechanisms to achieve this, which is costly both in terms of consumed computational power, bandwidth, and introduced delay. We present a novel idea to use physical characteristics of the wireless channel to verify subsequent messages after initial trust in a first message has been established cryptographically. In this paper, we sketch the concept and provide a first evaluation on its potential for saving named resources. <|reference_end|>",
"<|reference_start|> Multi-user broadcast authentication in wireless sensor networks: Broadcast authentication is a critical security service in wireless sensor networks (WSNs), as it allows the mobile users of WSNs to broadcast messages to multiple sensor nodes in a secure way. Although symmetric-key- based solutions such as muTESLA and multilevel muTESLA have been proposed, they all suffer from severe energy- depletion attacks resulted from the nature of delayed message authentication. This paper presents several efficient public-key-based schemes to achieve immediate broadcast authentication and thus avoid the security vulnerability intrinsic to muTESLA-like schemes. Our schemes are built upon the unique integration of several cryptographic techniques, including the Bloom filter, the partial message recovery signature scheme and the Merkle hash tree. We prove the effectiveness and efficiency of the proposed schemes by a comprehensive quantitative analysis of their energy consumption in both computation and communication. <|reference_end|>"
] | [
1,
2,
4,
7
] | {"<|multi_cite_1_2|>": "ss-2381480", "<|multi_cite_1_3|>": "ss-2381480", "<|cite_2|>": "ss-976302", "<|multi_cite_3_1|>": "ss-2385809", "<|multi_cite_3_2|>": "ss-2385810", "<|multi_cite_3_3|>": "ss-1212466", "<|cite_4|>": "ss-1059249", "<|cite_5|>": "ss-2385811"} |
1805.03257 | <|paper_start|> Title: Multimodal Hierarchical Reinforcement Learning Policy for Task-Oriented Visual Dialog
Abstract: Multimodal Hierarchical Reinforcement Learning Policy for Task-Oriented Visual Dialog: Creating an intelligent conversational system that understands vision and language is one of the ultimate goals in Artificial Intelligence (AI)~\cite{winograd1972understanding}. Extensive research has focused on vision-to-language generation, however, limited research has touched on combining these two modalities in a goal-driven dialog context. We propose a multimodal hierarchical reinforcement learning framework that dynamically integrates vision and language for task-oriented visual dialog. The framework jointly learns the multimodal dialog state representation and the hierarchical dialog policy to improve both dialog task success and efficiency. We also propose a new technique, state adaptation, to integrate context awareness in the dialog state representation. We evaluate the proposed framework and the state adaptation technique in an image guessing game and achieve promising results.
Introduction
The interplay between vision and language has created a range of interesting applications, including image captioning <|cite_start|> (Reference: Deep Visual-Semantic Alignments for Generating Image Descriptions: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.) <|cite_end|>, visual question generation (VQG) <|cite_start|> (Reference: Generating Natural Questions About an Image: There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images. These tasks have focused on literal descriptions of the image. To move beyond the literal, we choose to explore how questions about an image are often directed at commonsense inference and the abstract events evoked by objects in the image. In this paper, we introduce the novel task of Visual Question Generation (VQG), where the system is tasked with asking a natural and engaging question when shown an image. We provide three datasets which cover a variety of images from object-centric to event-centric, with considerably more abstract training data than provided to state-of-the-art captioning systems thus far. We train and test several generative and retrieval models to tackle the task of VQG. Evaluation results show that while such models ask reasonable questions for a variety of images, there is still a wide gap with human performance which motivates further work on connecting images with commonsense knowledge and pragmatics. Our proposed task offers a new challenge to the community which we hope furthers interest in exploring deeper connections between vision & language.) <|cite_end|>, visual question answering (VQA) <|cite_start|> (Reference: VQA: Visual Question Answering: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. 
We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).) <|cite_end|>, and reference expressions <|cite_start|> (Reference: Natural Language Object Retrieval: In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.) <|cite_end|>. Visual dialog <|cite_start|> (Reference: Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning: We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team.) <|cite_end|> extends the VQA problem to multi-turn visual-grounded conversations without specific goals. In this paper, we study the task-oriented visual dialog setting that requires the agent to learn the multimodal representation and
dialog policy for decision making. We argue that a task-oriented visual intelligent conversational system should not only acquire vision and language understanding but also make appropriate decisions efficiently in a situated environment. Specifically, we designed a 20-image guessing game using the Visual Dialog dataset <|cite_start|> (Reference: Visual Dialog: We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org) <|cite_end|>. This game is the visual analog of the popular 20 Questions game. The agent aims to learn a dialog policy that can guess the correct image through question answering using the minimum number of turns.
Previous work on visual dialogs <|cite_start|> (Reference: Visual Dialog: We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org) <|cite_end|> <|cite_start|> (Reference: Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning: We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team.) <|cite_end|> <|cite_start|> (Reference: Evaluating Visual Conversational Agents via Cooperative Human-AI Games: As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. 
It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.) <|cite_end|> focused mainly on vision-to-language understanding and generation instead of dialog policy learning. They let an agent ask a fixed number of questions to rank the images or let humans make guesses at the end of the conversations. Such a setting, however, is not realistic for real-world task-oriented applications, where not only completing the task successfully but also completing it efficiently matters. In addition, the agent should also be informed of wrong guesses, so that it becomes more aware of the vision context. Solving such a real-world setting is challenging: the system needs to handle the large, dynamically updated multimodal state-action space and also leverage the signals in the feedback loop coming from different sub-tasks.
We propose a \emph{multimodal hierarchical reinforcement learning} framework that allows learning visual dialog state tracking and dialog policy jointly to complete visual dialog tasks efficiently. The framework we propose takes inspiration from feudal reinforcement learning (FRL) <|cite_start|> (Reference: Feudal Reinforcement Learning: One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their submanagers who, in turn, learn how to satisfy them. Submanagers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command.
We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.) <|cite_end|>, where levels of hierarchy within an agent communicate via explicit goals in a top-down fashion. In our case, it decomposes the decision into two steps: a first step where a master policy selects between verbal task (information query) and vision task (image retrieval), and a second step where a primitive action (question or image) is chosen from the selected task. Hierarchical RL that relies on space abstraction, such as FRL, is useful to address the challenge of large discrete action space and has been shown to be effective in dialog systems, especially for large domain dialog management <|cite_start|> (Reference: Feudal Reinforcement Learning for Dialogue Management in Large Domains: Reinforcement learning (RL) is a promising approach to solve dialogue policy optimisation. Traditional RL algorithms, however, fail to scale to large domains due to the curse of dimensionality. We propose a novel Dialogue Management architecture, based on Feudal RL, which decomposes the decision into two steps; a first step where a master policy selects a subset of primitive actions, and a second step where a primitive action is chosen from the selected subset. The structural information included in the domain ontology is used to abstract the dialogue state space, taking the decisions at each step using different parts of the abstracted state. This, combined with an information sharing mechanism between slots, increases the scalability to large domains. We show that an implementation of this approach, based on Deep-Q Networks, significantly outperforms previous state of the art in several dialogue domains and environments, without the need of any additional reward signal.) <|cite_end|>. Besides, we propose a new technique called \emph{state adaptation} in order to make the multimodal dialog state more aware of the constantly changing visual context. We demonstrate the efficacy of this technique through ablation analysis.
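The following runnable Python sketch illustrates only the control flow of this two-step decision; the pseudo-random scoring functions stand in for the learned value networks, and all names, candidate pools, and the toy rollout are illustrative assumptions rather than the actual model.
\begin{verbatim}
import random

VERBAL_TASK, VISION_TASK = "ask_question", "guess_image"

def q_master(state, task):
    # Stand-in for the top-level value network (e.g., a DQN) over the two tasks.
    return random.Random(hash((tuple(state), task))).random()

def q_primitive(state, candidate):
    # Stand-in for the lower-level scorer over candidate questions or images.
    return random.Random(hash((tuple(state), candidate))).random()

def select_action(state, questions, images, epsilon=0.1):
    # Step 1: the master policy picks the verbal task or the vision task.
    if random.random() < epsilon:
        task = random.choice([VERBAL_TASK, VISION_TASK])
    else:
        task = max([VERBAL_TASK, VISION_TASK], key=lambda t: q_master(state, t))
    # Step 2: a primitive action is chosen from the selected task only.
    pool = questions if task == VERBAL_TASK else images
    return task, max(pool, key=lambda c: q_primitive(state, c))

# Toy rollout of the guessing game; the dialog state is simply the turn history.
questions = ["Is there a person?", "Is it outdoors?", "Any animals?"]
images = ["img_%d" % i for i in range(20)]
state, target = [], "img_7"
for turn in range(20):
    task, action = select_action(state, questions, images)
    state = state + [(task, action)]
    if task == VISION_TASK:
        print("turn", turn, "guess:", action, "correct" if action == target else "wrong")
        if action == target:
            break
    else:
        print("turn", turn, "ask:", action)
\end{verbatim}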
Related Work
\subsection{Visual Dialog}
Visual dialog requires the agent to hold a multi-turn conversation about visual content. Several visual dialog tasks have been developed, including image grounded conversation generation <|cite_start|> (Reference: Image-Grounded Conversations: Multimodal Context for Natural Question and Response Generation: The popularity of image sharing on social media and the engagement it creates between users reflects the important role that visual context plays in everyday conversations. We present a novel task, Image-Grounded Conversations (IGC), in which natural-sounding conversations are generated about a shared image. To benchmark progress, we introduce a new multiple-reference dataset of crowd-sourced, event-centric conversations on images. IGC falls on the continuum between chit-chat and goal-directed conversation models, where visual grounding constrains the topic of conversation to event-driven utterances. Experiments with models trained on social media data show that the combination of visual and textual context enhances the quality of generated conversational turns. In human evaluation, the gap between human performance and that of both neural and retrieval architectures suggests that multi-modal IGC presents an interesting challenge for dialogue research.) <|cite_end|>. Guess What?! <|cite_start|> (Reference: GuessWhat?! Visual object discovery through multi-modal dialogue: We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.) <|cite_end|> involves locating visual objects using dialogs. VisDial <|cite_start|> (Reference: Visual Dialog: We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. 
We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org) <|cite_end|> situates an answer-bot (A-Bot) to answer questions from a question-bot (Q-Bot) about an image. <|cite_start|> (Reference: Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning: We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team.) <|cite_end|> applied reinforcement learning (RL) to the VisDial task to learn the policies for the Q/A-Bots to collaboratively rank the correct image among a set of candidates. However, their Q-Bot can only ask questions and cannot make guesses. <|cite_start|> (Reference: Evaluating Visual Conversational Agents via Cooperative Human-AI Games: As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. 
Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.) <|cite_end|> further evaluated the pre-trained A-Bot in a similar setting to answer human-generated questions. Since humans are tasked with asking questions, the policy learning of the Q-Bot is not investigated. Finally, <|cite_start|> (Reference: Using Reinforcement Learning to Model Incrementality in a Fast-Paced Dialogue Game: We apply Reinforcement Learning (RL) to the problem of incremental dialogue policy learning in the context of a fast-paced dialogue game. We compare the policy learned by RL with a high-performance baseline policy which has been shown to perform very efficiently (nearly as well as humans) in this dialogue game. The RL policy outperforms the baseline policy in offline simulations (based on real user data). We provide a detailed comparison of the RL policy and the baseline policy, including information about how much effort and time it took to develop each one of them. We also highlight the cases where the RL policy performs better, and show that understanding the RL policy can provide valuable insights which can inform the creation of an even better rule-based policy.) <|cite_end|> proposed an incremental dialogue policy learning method for image guessing. However, their dialog state only used language information and did not include visual information. We build upon prior work and propose a framework that learns an optimal dialog policy for the Q-Bot to perform both question selection and image guessing by exploiting multimodal information.
\subsection{Reinforcement Learning}
RL is a popular approach to learn an optimal dialog policy for task-oriented dialog systems <|cite_start|> (Reference: Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System: Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance.) <|cite_end|> <|cite_start|> (Reference: Partially observable Markov decision processes for spoken dialog systems: ) <|cite_end|> <|cite_start|> (Reference: Reinforcement learning of argumentation dialogue policies in negotiation: We build dialogue system policies for negotiation, and in particular for argumentation. These dialogue policies are designed for negotiation against users of different cultural norms (individualists, collectivists, and altruists). In order to learn these policies we build simulated users (SUs), i.e. models that simulate the behavior of real users, and use Reinforcement Learning (RL). The SUs are trained on a spoken dialogue corpus in a negotiation domain, and then tweaked towards a particular cultural norm using hand-crafted rules. We evaluate the learned policies in a simulation setting. Our results are consistent with our SUs, in other words, the policies learn what they are designed to learn, which shows that RL is a promising technique for learning policies in domains, such as argumentation, that are more complex than standard slot-filling applications. Index Terms: spoken dialogue systems, reinforcement learning, simulated users, argumentation, negotiation, culture.) <|cite_end|> <|cite_start|> (Reference: Pomdp-based let's go system for spoken dialog challenge: This paper describes a POMDP-based Let's Go system which incorporates belief tracking and dialog policy optimization into the dialog manager of the reference system for the Spoken Dialog Challenge (SDC). Since all components except for the dialog manager were kept the same, component-wise comparison can be performed to investigate the effect of belief tracking and dialog policy optimization on the overall system performance. In addition, since unsupervised methods have been adopted to learn all required models to reduce human labor and development time, the effectiveness of the unsupervised approaches compared to conventional supervised approaches can be investigated. The result system participated in the 2011 SDC and showed comparable performance with the base system which has been enhanced from the reference system for the 2010 SDC. This shows the capability of the proposed method to rapidly produce an effective system with minimal human labor and experts' knowledge.) <|cite_end|> <|cite_start|> (Reference: Learning Conversational Systems that Interleave Task and Non-Task Content: Task-oriented dialog systems have been applied in various tasks, such as automated personal assistants, customer service providers and tutors. These systems work well when users have clear and explicit intentions that are well-aligned to the systems' capabilities. 
However, they fail if users intentions are not explicit. To address this shortcoming, we propose a framework to interleave non-task content (i.e. everyday social conversation) into task conversations. When the task content fails, the system can still keep the user engaged with the non-task content. We trained a policy using reinforcement learning algorithms to promote long-turn conversation coherence and consistency, so that the system can have smooth transitions between task and non-task content. To test the effectiveness of the proposed framework, we developed a movie promotion dialog system. Experiments with human users indicate that a system that interleaves social and task content achieves a better task success rate and is also rated as more engaging compared to a pure task-oriented system.) <|cite_end|>. The deep Q-Network (DQN) introduced by <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> achieved human-level performance in Atari games based on deep neural networks. Deep RL was then used to jointly learn the dialog state tracking and policy optimization in an end-to-end manner <|cite_start|> (Reference: Towards End-to-End Learning for Dialog State Tracking and Management using Deep Reinforcement Learning: This paper presents an end-to-end framework for task-oriented dialog systems using a variant of Deep Recurrent Q-Networks (DRQN). The model is able to interface with a relational database and jointly learn policies for both language understanding and dialog strategy. Moreover, we propose a hybrid algorithm that combines the strength of reinforcement learning and supervised learning to achieve faster learning speed. We evaluated the proposed model on a 20 Question Game conversational game simulator. Results show that the proposed method outperforms the modular-based baseline and learns a distributed representation of the latent dialog state.) <|cite_end|>. In our framework, we use a DQN to learn the higher level policy for question selection or image guessing. <|cite_start|> (Reference: Deep Reinforcement Learning with Double Q-learning: The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.) <|cite_end|> proposed a double DQN to overcome the overestimation problem in the Q-Learning and <|cite_start|> (Reference: Prioritized Experience Replay: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. 
In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.) <|cite_end|> suggested prioritized experience replay to improve the data sampling efficiency for training DQN. We apply both techniques in our implementation. One limitation of DQNs is that they cannot handle unbounded action space, which is often the case for natural language interaction. <|cite_start|> (Reference: Deep Reinforcement Learning with a Natural Language Action Space: This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.) <|cite_end|> proposed Deep Reinforcement Relevance Network (DRRN) that can handle inherently large discrete natural language action space. Specifically, the DRRN takes both the state and natural language actions as inputs and computes a Q-value for each state action pair. Thus, we use a DRRN as our question selection policy to approximate the value function for any question candidate.
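To make the mechanism concrete, the following is a small numpy sketch, not the model used in this work, of a DRRN-style scorer: the state and each candidate natural-language action receive separate embeddings, and an interaction function (here an inner product) produces one Q-value per state-action pair. The bag-of-words features and random weights stand in for learned representations and trained parameters.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "is there a person any animals outdoors it red dog".split())}

def bag_of_words(text):
    v = np.zeros(len(vocab))
    for w in text.lower().rstrip("?").split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

embed_dim = 16
W_state = rng.normal(scale=0.1, size=(len(vocab), embed_dim))   # state branch
W_action = rng.normal(scale=0.1, size=(len(vocab), embed_dim))  # action branch

def drrn_q(state_text, action_text):
    # Separate embeddings for the state and the natural-language action,
    # combined by an interaction function (inner product) into Q(s, a).
    h_s = np.tanh(bag_of_words(state_text) @ W_state)
    h_a = np.tanh(bag_of_words(action_text) @ W_action)
    return float(h_s @ h_a)

state = "there is a dog outdoors"
candidates = ["Is there a person?", "Any animals?", "Is it red?"]
scores = {q: drrn_q(state, q) for q in candidates}
print(max(scores, key=scores.get), scores)
\end{verbatim}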
Our work is also related to hierarchical reinforcement learning (HRL) which often decomposes the problem into several sub-problems and achieves better learning convergence rate and generalization compared to flat RL <|cite_start|> (Reference: Between {{MDPs}} and semi-{{MDPs}}: {{A}} framework for temporal abstraction in reinforcement learning: Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options —closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: (1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, (2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and (3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state) <|cite_end|> <|cite_start|> (Reference: Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition: This paper presents the MAXQ approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges wih probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. 
The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.) <|cite_end|>. HRL has been applied to dialog management <|cite_start|> (Reference: Hierarchical Reinforcement Learning of Dialogue Policies in a development environment for dialogue systems: REALL-DUDE: A pitcher's mound on, which a pitcher in the sport of Baseball stands while pitching the ball, is collapsible, and is made up of several wedge-shaped segments which are connected to a central hub in an easily releasable manner. The segments and the central hub are quite rigid yet are of a size and weight which makes them reasonably portable. The surface of the pitcher's mound can be such as to absorb impact when struck by a ball and to provide good footing for the pitcher.) <|cite_end|> <|cite_start|> (Reference: Evaluation of a hierarchical reinforcement learning spoken dialogue system: ) <|cite_end|> <|cite_start|> (Reference: Sub-domain Modelling for Dialogue Management with Hierarchical Reinforcement Learning: Human conversation is inherently complex, often spanning many different topics/domains. This makes policy learning for dialogue systems very challenging. Standard flat reinforcement learning methods do not provide an efficient framework for modelling such dialogues. In this paper, we focus on the under-explored problem of multi-domain dialogue management. First, we propose a new method for hierarchical reinforcement learning using the option framework. Next, we show that the proposed architecture learns faster and arrives at a better policy than the existing flat ones do. Moreover, we show how pretrained policies can be adapted to more complex systems with an additional set of new actions. In doing that, we show that our approach has the potential to facilitate policy optimisation for more sophisticated multi-domain dialogue systems.) <|cite_end|> which decomposes the dialog policy with respect to system goals or domains. When the system enters a sub-task, the selected dialog policy will be used and continue to operate until the sub-problem is solved; however, the termination condition for a sub-problem has to be predefined. Different from prior work, our proposed architecture uses a hierarchical dialog policy to combine two RL architectures within a control flow, i.e., DQN and DRRN, in order to jointly learn the multimodal dialog state representation and the dialog policy. Note that our HRL framework resembles the FRL hierarchy <|cite_start|> (Reference: Feudal Reinforcement Learning: One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their submanagers who, in turn, learn how to satisfy them. Submanagers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command.
We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.) <|cite_end|> that exploits space abstraction, state sharing and sequential execution.
\begin{figure*}[ht]
\centering
\includegraphics[width=16cm, height=9cm]{Architecture.PNG}
\caption{The information flow of the multimodal hierarchical reinforcement learning framework}
\end{figure*}
"<|reference_start|> VQA: Visual Question Answering: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa). <|reference_end|>",
"<|reference_start|> Prioritized Experience Replay: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games. <|reference_end|>",
"<|reference_start|> Hierarchical Reinforcement Learning of Dialogue Policies in a development environment for dialogue systems: REALL-DUDE: A pitcher's mound on, which a pitcher in the sport of Baseball stands while pitching the ball, is collapsible, and is made up of several wedge-shaped segments which are connected to a central hub in an easily releasable manner. The segments and the central hub are quite rigid yet are of a size and weight which makes them reasonably portable. The surface of the pitcher's mound can be such as to absorb impact when struck by a ball and to provide good footing for the pitcher. <|reference_end|>",
"<|reference_start|> Evaluation of a hierarchical reinforcement learning spoken dialogue system: <|reference_end|>"
] | [
2,
25,
29,
30
] | {"<|cite_1|>": "arxiv-69800", "<|cite_2|>": "arxiv-94247", "<|cite_3|>": "arxiv-77148", "<|cite_4|>": "arxiv-87106", "<|cite_5|>": "arxiv-119496", "<|cite_6|>": "arxiv-111063", "<|multi_cite_7_1|>": "arxiv-111063", "<|multi_cite_7_2|>": "arxiv-119496", "<|multi_cite_7_3|>": "arxiv-132142", "<|cite_8|>": "ss-1541317", "<|cite_9|>": "arxiv-150953", "<|cite_10|>": "arxiv-115350", "<|cite_11|>": "arxiv-111018", "<|cite_12|>": "arxiv-111063", "<|cite_19|>": "arxiv-119496", "<|cite_20|>": "arxiv-132142", "<|cite_13|>": "ss-1721036", "<|multi_cite_14_1|>": "arxiv-21972", "<|multi_cite_14_2|>": "ss-1003595", "<|multi_cite_14_3|>": "ss-776724", "<|multi_cite_14_4|>": "ss-971087", "<|multi_cite_14_5|>": "arxiv-117841", "<|cite_21|>": "ss-749221", "<|cite_15|>": "arxiv-99669", "<|cite_22|>": "arxiv-84365", "<|cite_23|>": "arxiv-87502", "<|cite_24|>": "arxiv-87210", "<|multi_cite_16_1|>": "ss-852913", "<|multi_cite_16_2|>": "arxiv-676327", "<|multi_cite_17_1|>": "ss-1033312", "<|multi_cite_17_2|>": "ss-1445062", "<|multi_cite_17_3|>": "arxiv-127174", "<|cite_18|>": "ss-1541317"} |
2408.03438 | <|paper_start|> Title: Enhanced Reverberation as Supervision for Unsupervised Speech Separation
Abstract: Enhanced Reverberation as Supervision for Unsupervised Speech Separation: Reverberation as supervision (RAS) is a framework that allows for training monaural speech separation models from multi-channel mixtures in an unsupervised manner. In RAS, models are trained so that sources predicted from a mixture at an input channel can be mapped to reconstruct a mixture at a target channel. However, stable unsupervised training has so far only been achieved in over-determined source-channel conditions, leaving the key determined case unsolved. This work proposes enhanced RAS (ERAS) for solving this problem. Through qualitative analysis, we found that stable training can be achieved by leveraging the loss term to alleviate the frequency-permutation problem. Separation performance is also boosted by adding a novel loss term where separated signals mapped back to their own input mixture are used as pseudo-targets for the signals separated from other channels and mapped to the same channel. Experimental results demonstrate high stability and performance of ERAS.
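As a purely illustrative aid, the following numpy sketch mimics the cross-channel reconstruction idea described above in the time domain: each source estimated from the input channel is mapped by a short per-source filter (here obtained in closed form by least squares) to the target channel, and the summed result is compared with the observed target-channel mixture. The filter length, the least-squares estimation, and the toy signals are assumptions made for this sketch and do not reproduce the actual RAS/ERAS loss formulation.
\begin{verbatim}
import numpy as np

def delayed_copies(x, taps):
    # Columns are x delayed by 0..taps-1 samples, so frames @ h filters x with h.
    frames = np.zeros((len(x), taps))
    for k in range(taps):
        frames[k:, k] = x[: len(x) - k]
    return frames

def cross_channel_reconstruction_loss(est_sources_ch1, mix_ch2, taps=16):
    # Jointly fit one short filter per estimated source so that the filtered
    # sources sum to the observed target-channel mixture, then score the fit.
    A = np.concatenate([delayed_copies(s, taps) for s in est_sources_ch1], axis=1)
    h, *_ = np.linalg.lstsq(A, mix_ch2, rcond=None)
    return float(np.mean((mix_ch2 - A @ h) ** 2))

# Toy two-source, two-channel example with random short "room" filters.
rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(4000), rng.standard_normal(4000)
h = {k: 0.3 * rng.standard_normal(8) for k in ("s1c1", "s2c1", "s1c2", "s2c2")}
mix_ch2 = np.convolve(s1, h["s1c2"])[:4000] + np.convolve(s2, h["s2c2"])[:4000]

separated = [np.convolve(s1, h["s1c1"])[:4000], np.convolve(s2, h["s2c1"])[:4000]]
unseparated = [0.5 * (separated[0] + separated[1])] * 2
print("loss, separated estimates:  ", cross_channel_reconstruction_loss(separated, mix_ch2))
print("loss, unseparated estimates:", cross_channel_reconstruction_loss(unseparated, mix_ch2))
\end{verbatim}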
Introduction
\label{sec:introduction}
Speech separation has been intensively investigated for listening applications or as a front-end for applications such as automatic speech recognition <|cite_start|> (Reference: End-to-End Multi-Speaker Speech Recognition: Current advances in deep learning have resulted in a convergence of methods across a wide range of tasks, opening the door for tighter integration of modules that were previously developed and optimized in isolation. Recent ground-breaking works have produced end-to-end deep network methods for both speech separation and end-to-end automatic speech recognition (ASR). Speech separation methods such as deep clustering address the challenging cocktail-party problem of distinguishing multiple simultaneous speech signals. This is an enabling technology for real-world human machine interaction (HMI). However, speech separation requires ASR to interpret the speech for any HMI task. Likewise, ASR requires speech separation to work in an unconstrained environment. Although these two components can be trained in isolation and connected after the fact, this paradigm is likely to be sub-optimal, since it relies on artificially mixed data. In this paper, we develop the first fully end-to-end, jointly trained deep learning system for separation and recognition of overlapping speech signals. The joint training framework synergistically adapts the separation and recognition to each other. As an additional benefit, it enables training on more realistic data that contains only mixed signals and their transcriptions, and thus is suited to large scale training on existing transcribed data.) <|cite_end|> <|cite_start|> (Reference: Far-field automatic speech recognition: The machine recognition of speech spoken at a distance from the microphones, known as far-field automatic speech recognition (ASR), has received a significant increase in attention in science and industry, which caused or was caused by an equally significant improvement in recognition accuracy. Meanwhile, it has entered the consumer market with digital home assistants with a spoken language interface being its most prominent application. Speech recorded at a distance is affected by various acoustic distortions, and consequently, quite different processing pipelines have emerged compared with ASR for close-talk speech. A signal enhancement front end for dereverberation, source separation, and acoustic beamforming is employed to clean up the speech, and the back-end ASR engine is robustified by multicondition training and adaptation. We will also describe the so-called end-to-end approach to ASR, which is a new promising architecture that has recently been extended to the far-field scenario. This tutorial article gives an account of the algorithms used to enable accurate speech recognition from a distance, and it will be seen that, although deep learning has a significant share in the technological breakthroughs, a clever combination with traditional signal processing can lead to surprisingly effective solutions.) <|cite_end|> or speaker diarization <|cite_start|> (Reference: Tackling real noisy reverberant meetings with all-neural source separation, counting, and diarization system: Automatic meeting analysis is an essential fundamental technology required to let, e.g. smart devices follow and respond to our conversations. 
To achieve an optimal automatic meeting analysis, we previously proposed an all-neural approach that jointly solves source separation, speaker diarization and source counting problems in an optimal way (in a sense that all the 3 tasks can be jointly optimized through error back-propagation). It was shown that the method could well handle simulated clean (noiseless and anechoic) dialog-like data, and achieved very good performance in comparison with several conventional methods. However, it was not clear whether such all-neural approach would be successfully generalized to more complicated real meeting data containing more spontaneously-speaking speakers, severe noise and reverberation, and how it performs in comparison with the state-of-the-art systems in such scenarios. In this paper, we first consider practical issues required for improving the robustness of the all-neural approach, and then experimentally show that, even in real meeting scenarios, the all-neural approach can perform effective speech enhancement, and simultaneously outperform state-of-the-art systems.) <|cite_end|> <|cite_start|> (Reference: Integration of speech separation, diarization, and recognition for multi-speaker meetings: System description, comparison, and analysis: Multi-speaker speech recognition of unsegmented recordings has diverse applications such as meeting transcription and automatic subtitle generation. With technical advances in systems dealing with speech separation, speaker diarization, and automatic speech recognition (ASR) in the last decade, it has become possible to build pipelines that achieve reasonable error rates on this task. In this paper, we propose an end-to-end modular system for the LibriCSS meeting data, which combines independently trained separation, diarization, and recognition components, in that order. We study the effect of different state-of-the-art methods at each stage of the pipeline, and report results using task-specific metrics like SDR and DER, as well as downstream WER. Experiments indicate that the problem of overlapping speech for diarization and ASR can be effectively mitigated with the presence of a well-trained separation module. Our best system achieves a speaker-attributed WER of 12.7%, which is close to that of a non-overlapping ASR.) <|cite_end|>.
Pioneered by deep clustering <|cite_start|> (Reference: Deep clustering: Discriminative embeddings for segmentation and separation: We address the problem of acoustic source separation in a deep learning framework we call "deep clustering." Rather than directly estimating signals or masking functions, we train a deep network to produce spectrogram embeddings that are discriminative for partition labels given in training data. Previous deep network approaches provide great advantages in terms of learning power and speed, but previously it has been unclear how to use them to separate signals in a class-independent way. In contrast, spectral clustering approaches are flexible with respect to the classes and number of items to be segmented, but it has been unclear how to leverage the learning power and speed of deep networks. To obtain the best of both worlds, we use an objective function that to train embeddings that yield a low-rank approximation to an ideal pairwise affinity matrix, in a class-independent way. This avoids the high cost of spectral factorization and instead produces compact clusters that are amenable to simple clustering methods. The segmentations are therefore implicitly encoded in the embeddings, and can be "decoded" by clustering. Preliminary experiments show that the proposed method can separate speech: when trained on spectrogram features containing mixtures of two speakers, and tested on mixtures of a held-out set of speakers, it can infer masking functions that improve signal quality by around 6dB. We show that the model can generalize to three-speaker mixtures despite training only on two-speaker mixtures. The framework can be used without class labels, and therefore has the potential to be trained on a diverse set of sound types, and to generalize to novel sources. We hope that future work will lead to segmentation of arbitrary sounds, with extensions to microphone array methods as well as image segmentation and other domains.) <|cite_end|> and permutation invariant training <|cite_start|> (Reference: Deep clustering: Discriminative embeddings for segmentation and separation: We address the problem of acoustic source separation in a deep learning framework we call "deep clustering." Rather than directly estimating signals or masking functions, we train a deep network to produce spectrogram embeddings that are discriminative for partition labels given in training data. Previous deep network approaches provide great advantages in terms of learning power and speed, but previously it has been unclear how to use them to separate signals in a class-independent way. In contrast, spectral clustering approaches are flexible with respect to the classes and number of items to be segmented, but it has been unclear how to leverage the learning power and speed of deep networks. To obtain the best of both worlds, we use an objective function that to train embeddings that yield a low-rank approximation to an ideal pairwise affinity matrix, in a class-independent way. This avoids the high cost of spectral factorization and instead produces compact clusters that are amenable to simple clustering methods. The segmentations are therefore implicitly encoded in the embeddings, and can be "decoded" by clustering. Preliminary experiments show that the proposed method can separate speech: when trained on spectrogram features containing mixtures of two speakers, and tested on mixtures of a held-out set of speakers, it can infer masking functions that improve signal quality by around 6dB. 
We show that the model can generalize to three-speaker mixtures despite training only on two-speaker mixtures. The framework can be used without class labels, and therefore has the potential to be trained on a diverse set of sound types, and to generalize to novel sources. We hope that future work will lead to segmentation of arbitrary sounds, with extensions to microphone array methods as well as image segmentation and other domains.) <|cite_end|> <|cite_start|> (Reference: Permutation Invariant Training of Deep Models for Speaker-Independent Multi-talker Speech Separation: We propose a novel deep learning model, which supports permutation invariant training (PIT), for speaker independent multi-talker speech separation, commonly known as the cocktail-party problem. Different from most of the prior arts that treat speech separation as a multi-class regression problem and the deep clustering technique that considers it a segmentation (or clustering) problem, our model optimizes for the separation regression error, ignoring the order of mixing sources. This strategy cleverly solves the long-lasting label permutation problem that has prevented progress on deep learning based techniques for speech separation. Experiments on the equal-energy mixing setup of a Danish corpus confirms the effectiveness of PIT. We believe improvements built upon PIT can eventually solve the cocktail-party problem and enable real-world adoption of, e.g., automatic meeting transcription and multi-party human-computer interaction, where overlapping speech is common.) <|cite_end|>, neural network (NN)-based approaches have become a major technique to achieve high-fidelity separation <|cite_start|> (Reference: Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation: Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. 
Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications.) <|cite_end|> <|cite_start|> (Reference: {TF-GridNet: Integrating full-and sub-band modeling for speech separation: We propose TF-GridNet for speech separation. The model is a novel deep neural network (DNN) integrating full- and sub-band modeling in the time-frequency (T-F) domain. It stacks several blocks, each consisting of an intra-frame full-band module, a sub-band temporal module, and a cross-frame self-attention module. It is trained to perform complex spectral mapping, where the real and imaginary (RI) components of input signals are stacked as features to predict target RI components. We first evaluate it on monaural anechoic speaker separation. Without using data augmentation and dynamic mixing, it obtains a state-of-the-art 23.5 dB improvement in scale-invariant signal-to-distortion ratio (SI-SDR) on WSJ0-2mix, a standard dataset for two-speaker separation. To show its robustness to noise and reverberation, we evaluate it on monaural reverberant speaker separation using the SMS-WSJ dataset and on noisy-reverberant speaker separation using WHAMR!, and obtain state-of-the-art performance on both datasets. We then extend TF-GridNet to multi-microphone conditions through multi-microphone complex spectral mapping, and integrate it into a two-DNN system with a beamformer in between (named as MISO-BF-MISO in earlier studies), where the beamformer proposed in this article is a novel multi-frame Wiener filter computed based on the outputs of the first DNN. State-of-the-art performance is obtained on the multi-channel tasks of SMS-WSJ and WHAMR!. Besides speaker separation, we apply the proposed algorithms to speech dereverberation and noisy-reverberant speech enhancement. State-of-the-art performance is obtained on a dereverberation dataset and on the dataset of the recent L3DAS22 multi-channel speech enhancement challenge.) <|cite_end|>.
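As a concrete illustration of the permutation invariant training idea mentioned above, the following is a minimal NumPy sketch; it is not the exact loss of any system cited here, the function and variable names are ours, and it assumes time-domain estimates and references of equal length with a plain MSE as the pairwise score.

```python
import itertools
import numpy as np

def pit_mse_loss(estimates, references):
    """Utterance-level permutation invariant training (PIT) loss sketch:
    evaluate the error under every speaker permutation and keep the best,
    so the network is not penalized for emitting sources in a different order.
    estimates, references: arrays of shape (num_sources, num_samples)."""
    num_sources = estimates.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(num_sources)):
        mse = np.mean((estimates[list(perm)] - references) ** 2)
        best = min(best, mse)
    return best

# Toy check: swapping the output order leaves the loss unchanged.
refs = np.stack([np.sin(np.linspace(0, 10, 8000)), np.cos(np.linspace(0, 10, 8000))])
print(pit_mse_loss(refs[::-1].copy(), refs))  # ~0.0 despite the swap
```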
Most NN-based methods are trained in a supervised manner and rely on synthetic data as it is hard to collect pairs of mixtures and their individual sources in real environments.
NNs are however known to be vulnerable to domain mismatch, and separation models trained on synthetic data often perform poorly in real environments <|cite_start|> (Reference: Closing the Gap Between Time-Domain Multi-Channel Speech Enhancement on Real and Simulation Conditions: The deep learning based time-domain models, e.g. Conv-TasNet, have shown great potential in both single-channel and multi-channel speech enhancement. However, many experiments on the time-domain speech enhancement model are done in simulated conditions, and it is not well studied whether the good performance can generalize to real-world scenarios. In this paper, we aim to provide an insightful investigation of applying multi-channel Conv-TasNet based speech enhancement to both simulation and real data. Our preliminary experiments show a large performance gap between the two conditions in terms of the ASR performance. Several approaches are applied to close this gap, including the integration of multi-channel Conv-TasNet into the beamforming model with various strategies, and the joint training of speech enhancement and speech recognition models. Our experiments on the CHiME-4 corpus show that our proposed approaches can greatly reduce the speech recognition performance discrepancy between simulation and real data, while preserving the strong speech enhancement capability in the frontend.) <|cite_end|>.
Unsupervised speech separation techniques that can leverage recorded unlabeled mixtures can be the key to success in real-world applications <|cite_start|> (Reference: Unsupervised training of a deep clustering model for multichannel blind source separation: We propose a training scheme to train neural network-based source separation algorithms from scratch when parallel clean data is unavailable. In particular, we demonstrate that an unsupervised spatial clustering algorithm is sufficient to guide the training of a deep clustering system. We argue that previous work on deep clustering requires strong supervision and elaborate on why this is a limitation. We demonstrate that (a) the single-channel deep clustering system trained according to the proposed scheme alone is able to achieve a similar performance as the multi-channel teacher in terms of word error rates and (b) initializing the spatial clustering approach with the deep clustering result yields a relative word error rate reduction of 26 % over the unsupervised teacher.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Training for Deep Speech Source Separation with Kullback-Leibler Divergence Based Probabilistic Loss Function: In this paper, we propose a multi-channel speech source separation with a deep neural network (DNN) which is trained under the condition that no clean signal is available. As an alternative to a clean signal, the proposed method adopts an estimated speech signal by an unsupervised speech source separation with a statistical model. As a statistical model of microphone input signal, we adopts a time-varying spatial covariance matrix (SCM) model which includes reverberation and background noise submodels so as to achieve robustness against reverberation and background noise. The DNN infers intermediate variables which are needed for constructing the time-varying SCM. Speech source separation is performed in a probabilistic manner so as to avoid overfitting to separation error. Since there are multiple intermediate variables, a loss function which evaluates a single intermediate variable is not applicable. Instead, the proposed method adopts a loss function which evaluates the output probabilistic signal directly based on Kullback-Leibler Divergence (KLD). Gradient of the loss function can be back-propagated into the DNN through all the intermediate variables. Experimental results under reverberant conditions show that the proposed method can train the DNN efficiently even when the number of training utterances is small, i.e., 1K.) <|cite_end|> <|cite_start|> (Reference: Remix-cycle-consistent Learning on Adversarially Learned Separator for Accurate and Stable Unsupervised Speech Separation: A new learning algorithm for speech separation networks is designed to explicitly reduce residual noise and artifacts in the separated signal in an unsupervised manner. Generative adversarial networks are known to be effective in constructing separation networks when the ground truth for the observed signal is inaccessible. Still, weak objectives aimed at distribution-to-distribution mapping make the learning unstable and limit their performance. This study introduces the remix-cycle-consistency loss as a more appropriate objective function and uses it to fine-tune adversarially learned source separation models. 
The remix-cycle-consistency loss is defined as the difference between the mixed speech observed at microphones and the pseudo-mixed speech obtained by alternating the process of separating the mixed sound and remixing its outputs with another combination. The minimization of this loss leads to an explicit reduction in the distortions in the output of the separation network. Experimental comparisons with multichannel speech separation demonstrated that the proposed method achieved high separation accuracy and learning stability comparable to supervised learning.) <|cite_end|> <|cite_start|> (Reference: Unsupervised training of sequential neural beamformer using coarsely-separated and non-separated signals: We present an unsupervised training method of the sequential neural beamformer (Seq-BF) using coarsely-separated and non-separated supervisory signals. The signal coarsely separated by blind source separation (BSS) has been used for training neural separators in an unsupervised manner. However, the performance is limited due to distortions in the supervision. In contrast, remix-cycle-consistent learning (RCCL) enables a separator to be trained on distortion-free observed mixtures by making the remixed mixtures obtained by repeatedly separating and remixing the two different mixtures closer to the original mixtures. Still, training with RCCL from scratch often falls into a trivial solution, i.e., not separating signals. The present study provides a novel unsupervised learning algorithm for the Seq-BF with two stacked neural separators, in which the separators are pre-trained using the BSS outputs and then fine-tuned with RCCL. Such configuration compensates for the shortcomings of both approaches: the guiding mechanism in Seq-BF accelerates separation to exceed BSS performance, thereby stabilizing RCCL. Experimental comparisons demonstrated that the proposed unsupervised learning achieved performance comparable to supervised learning (0.4 point difference in word error rate).) <|cite_end|> <|cite_start|> (Reference: Training Data Generation with DOA-based Selecting and Remixing for Unsupervised Training of Deep Separation Models: ) <|cite_end|> <|cite_start|> (Reference: Spatial Loss for Unsupervised Multi-channel Source Separation: We propose a spatial loss for unsupervised multi-channel source separation. The proposed loss exploits the duality of direction of arrival (DOA) and beamforming: the steering and beamforming vectors should be aligned for the target source, but orthogonal for interfering ones. The spatial loss encour-ages consistency between the mixing and demixing systems from a classic DOA estimator and a neural separator, respec-tively. With the proposed loss, we train the neural separators based on minimum variance distortionless response (MVDR) beamforming and independent vector analysis (IVA). We also investigate the effectiveness of combining our spatial loss and a signal loss , which uses the outputs of blind source separation as the references. We evaluate our proposed method on synthetic and recorded (LibriCSS) mixtures. We find that the spatial loss is most effective to train IVA-based separators. For the neural MVDR beamformer, it performs best when combined with a signal loss. On synthetic mixtures, the proposed unsupervised loss leads to the same performance as a supervised loss in terms of word error rate. On LibriCSS, we obtain close to state-of-the-art performance without any labeled training data.) 
<|cite_end|> <|cite_start|> (Reference: Neural full-rank spatial covariance analysis for blind source separation: This paper describes aneural blind source separation (BSS) method based on amortized variational inference (AVI) of a non-linear generative model of mixture signals. A classical statistical approach to BSS is to fit a linear generative model that consists of spatial and source models representing the inter-channel covariances and power spectral densities of sources, respectively. Although the variational autoencoder (VAE) has successfully been used as a non-linear source model with latent features, it should be pretrained from a sufficient amount of isolated signals. Our method, in contrast, enables the VAE-based source model to be trained only from mixture signals. Specifically, we introduce a neural mixture-to-feature inference model that directly infers the latent features from the observed mixture and integrate it with a neural feature-to-mixture generative model consisting of a full-rank spatial model and a VAE-based source model. All the models are optimized jointly such that the likelihood for the training mixtures is maximized in the framework of AVI. Once the inference model is optimized, it can be used for estimating the latent features of sources included in unseen mixture signals. The experimental results show that the proposed method outperformed the state-of-the-art BSS methods based on linear generative models and was comparable to a method based on supervised learning of the VAE-based sourcemodel.) <|cite_end|> <|cite_start|> (Reference: Neural Fast Full-Rank Spatial Covariance Analysis for Blind Source Separation: This paper describes an efficient unsupervised learning method for a neural source separation model that utilizes a probabilistic generative model of observed multichannel mixtures proposed for blind source separation (BSS). For this purpose, amortized variational inference (AVI) has been used for directly solving the inverse problem of BSS with full-rank spatial covariance analysis (FCA). Although this unsupervised technique called neural FCA is in principle free from the domain mismatch problem, it is computationally demanding due to the full rankness of the spatial model in exchange for robustness against relatively short reverberations. To reduce the model complexity without sacrificing performance, we propose neural FastFCA based on the jointly-diagonalizable yet full-rank spatial model. Our neural separation model introduced for AVI alternately performs neural network blocks and single steps of an efficient iterative algorithm called iterative source steering. This alternating architecture enables the separation model to quickly separate the mixture spectrogram by leveraging both the deep neural network and the multichannel optimization algorithm. The training objective with AVI is derived to maximize the marginalized likelihood of the observed mixtures. The experiment using mixture signals of two to four sound sources shows that neural FastFCA outperforms conventional BSS methods and reduces the computational time to about 2% of that for the neural FCA.) <|cite_end|>.
We focus here on developing such a training technique for monaural separation.
Recently, MixIT <|cite_start|> (Reference: Unsupervised Sound Separation Using Mixture Invariant Training: In recent years, rapid progress has been made on the problem of single-channel sound separation using supervised training of deep neural networks. In such supervised approaches, a model is trained to predict the component sources from synthetic mixtures created by adding up isolated ground-truth sources. Reliance on this synthetic training data is problematic because good performance depends upon the degree of match between the training data and real-world audio, especially in terms of the acoustic conditions and distribution of sources. The acoustic properties can be challenging to accurately simulate, and the distribution of sound types may be hard to replicate. In this paper, we propose a completely unsupervised method, mixture invariant training (MixIT), that requires only single-channel acoustic mixtures. In MixIT, training examples are constructed by mixing together existing mixtures, and the model separates them into a variable number of latent sources, such that the separated sources can be remixed to approximate the original mixtures. We show that MixIT can achieve competitive performance compared to supervised methods on speech separation. Using MixIT in a semi-supervised learning setting enables unsupervised domain adaptation and learning from large amounts of real world data without ground-truth source waveforms. In particular, we significantly improve reverberant speech separation performance by incorporating reverberant mixtures, train a speech enhancement system from noisy mixtures, and improve universal sound separation by incorporating a large amount of in-the-wild data.) <|cite_end|> and remixing-based methods <|cite_start|> (Reference: RemixIT: Continual self-training of speech enhancement models via bootstrapped remixing: We present RemixIT, a simple yet effective self-supervised method for training speech enhancement without the need of a single isolated in-domain speech nor a noise waveform. Our approach overcomes limitations of previous methods which make them dependent on clean in-domain target signals and thus, sensitive to any domain mismatch between train and test samples. RemixIT is based on a continuous self-training scheme in which a pre-trained teacher model on out-of-domain data infers estimated pseudo-target signals for in-domain mixtures. Then, by permuting the estimated clean and noise signals and remixing them together, we generate a new set of bootstrapped mixtures and corresponding pseudo-targets which are used to train the student network. Vice-versa, the teacher periodically refines its estimates using the updated parameters of the latest student models. Experimental results on multiple speech enhancement datasets and tasks not only show the superiority of our method over prior approaches but also showcase that RemixIT can be combined with any separation model as well as be applied towards any semi-supervised and unsupervised domain adaptation task. Our analysis, paired with empirical evidence, sheds light on the inside functioning of our self-training scheme wherein the student model keeps obtaining better performance while observing severely degraded pseudo-targets.) <|cite_end|> <|cite_start|> (Reference: Self-Remixing: Unsupervised Speech Separation via Separation and Remixing: We present Self-Remixing, a novel self-supervised speech separation method, which refines a pre-trained separation model in an unsupervised manner. 
The proposed method consists of a shuffler module and a solver module, and they grow together through separation and remixing processes. Specifically, the shuffler first separates observed mixtures and makes pseudo-mixtures by shuffling and remixing the separated signals. The solver then separates the pseudo-mixtures and remixes the separated signals back to the observed mixtures. The solver is trained using the observed mixtures as supervision, while the shuffler's weights are updated by taking the moving average with the solver's, generating the pseudo-mixtures with fewer distortions. Our experiments demonstrate that Self-Remixing gives better performance over existing remixing-based self-supervised methods with the same or less training costs under unsupervised setup. Self-Remixing also outperforms baselines in semi-supervised domain adaptation, showing effectiveness in multiple setups.) <|cite_end|> <|cite_start|> (Reference: Remixing-based Unsupervised Source Separation from Scratch: We propose an unsupervised approach for training separation models from scratch using RemixIT and Self-Remixing, which are recently proposed self-supervised learning methods for refining pre-trained models. They first separate mixtures with a teacher model and create pseudo-mixtures by shuffling and remixing the separated signals. A student model is then trained to separate the pseudo-mixtures using either the teacher's outputs or the initial mixtures as supervision. To refine the teacher's outputs, the teacher's weights are updated with the student's weights. While these methods originally assumed that the teacher is pre-trained, we show that they are capable of training models from scratch. We also introduce a simple remixing method to stabilize training. Experimental results demonstrate that the proposed approach outperforms mixture invariant training, which is currently the only available approach for training a monaural separation model from scratch.) <|cite_end|> have shown great success for unsupervised separation.
They artificially create mixtures-of-mixtures or remix pseudo-mixtures to achieve unsupervised learning, but such artificial mixtures cause another kind of domain mismatch against the normal mixtures seen at inference.
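For reference, a rough NumPy sketch of the mixture invariant training (MixIT) objective described above is given below. It assumes the model has already produced `est_sources` from the mixture-of-mixtures `mix1 + mix2` and simply searches over binary assignments of the outputs to the two original mixtures; actual implementations differ in the number of outputs and the reconstruction loss used, and the names here are illustrative.

```python
import itertools
import numpy as np

def mixit_loss(est_sources, mix1, mix2):
    """Mixture invariant training (MixIT) sketch: the model separates the
    mixture-of-mixtures (mix1 + mix2); each estimated source is assigned to
    exactly one of the two original mixtures, and the assignment with the
    smallest summed reconstruction error defines the loss.
    est_sources: (num_outputs, num_samples); mix1, mix2: (num_samples,)."""
    num_outputs = est_sources.shape[0]
    best = np.inf
    for assign in itertools.product([0, 1], repeat=num_outputs):
        mask = np.array(assign)
        recon1 = est_sources[mask == 0].sum(axis=0)   # sources assigned to mix1
        recon2 = est_sources[mask == 1].sum(axis=0)   # sources assigned to mix2
        err = np.mean((recon1 - mix1) ** 2) + np.mean((recon2 - mix2) ** 2)
        best = min(best, err)
    return best
```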
Another direction is exploiting \textit{multi-channel} mixtures to train a monaural separation model.
Prior work has utilized spatial cues as pseudo-targets <|cite_start|> (Reference: Unsupervised Deep Clustering for Source Separation: Direct Learning from Mixtures using Spatial Information: We present a monophonic source separation system that is trained by only observing mixtures with no ground truth separation information. We use a deep clustering approach which trains on multi-channel mixtures and learns to project spectrogram bins to source clusters that correlate with various spatial features. We show that using such a training process we can obtain separation performance that is as good as making use of ground truth separation information. Once trained, this system is capable of performing sound separation on monophonic inputs, despite having learned how to do so using multi-channel recordings.) <|cite_end|> <|cite_start|> (Reference: Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures: Separating an audio scene into isolated sources is a fundamental problem in computer audition, analogous to image segmentation in visual scene analysis. Source separation systems based on deep learning are currently the most successful approaches for solving the underdetermined separation problem, where there are more sources than channels. Traditionally, such systems are trained on sound mixtures where the ground truth decomposition is already known. Since most real-world recordings do not have such a decomposition available, this limits the range of mixtures one can train on, and the range of mixtures the learned models may successfully separate. In this work, we use a simple blind spatial source separation algorithm to generate estimated decompositions of stereo mixtures. These estimates, together with a weighting scheme in the time-frequency domain, based on confidence in the separation quality, are used to train a deep learning model that can be used for single-channel separation, where no source direction information is available. This demonstrates how a simple cue such as the direction of origin of source can be used to bootstrap a model for source separation that can be used in situations where that cue is not available.) <|cite_end|>, avoiding the domain mismatch issue altogether.
However, performance is bounded by the pseudo-targets' quality.
Reverberation as supervision (RAS) was proposed for effectively leveraging multi-channel mixtures to train monaural separation models <|cite_start|> (Reference: Reverberation as Supervision for Speech Separation: This paper proposes reverberation as supervision (RAS), a novel unsupervised loss function for single-channel reverberant speech separation. Prior methods for unsupervised separation required the synthesis of mixtures of mixtures or assumed the existence of a teacher model, making them difficult to consider as potential methods explaining the emergence of separation abilities in an animal's auditory system. We assume the availability of two-channel mixtures at training time, and train a neural network to separate the sources given one of the channels as input such that the other channel may be predicted from the separated sources. As the relationship between the room impulse responses (RIRs) of each channel depends on the locations of the sources, which are unknown to the network, the network cannot rely on learning that relationship. Instead, our proposed loss function fits each of the separated sources to the mixture in the target channel via Wiener filtering, and compares the resulting mixture to the ground-truth one. We show that minimizing the scale-invariant signal-to-distortion ratio (SI-SDR) of the predicted right-channel mixture with respect to the ground truth implicitly guides the network towards separating the left-channel sources. On a semi-supervised reverberant speech separation task based on the WHAMR! dataset, using training data where just 5% (resp., 10%) of the mixtures are labeled with associated isolated sources, we achieve 70% (resp., 78%) of the SI-SDR improvement obtained when training with supervision on the full training set, while a model trained only on the labeled data obtains 43% (resp., 45%).) <|cite_end|>.
In RAS, separated signals from an input channel are mapped to another target channel by relative room impulse response (RIR) estimation (e.g., Wiener filtering), and the model is trained to reconstruct the mixture at the target channel.
The idea is that recovering the mixture by mapping from well-separated signals is much easier than from the mixture itself, so the model will learn to separate the sources to reconstruct the mixture well.
Since both input and target are mixtures, RAS ideally overcomes both domain-mismatch and pseudo-target-quality problems.
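To make the mechanism of the preceding sentences concrete, below is a simplified time-domain sketch of an RAS-style loss. A short FIR filter per separated source, estimated jointly by least squares, stands in for the relative-RIR / Wiener filtering step, and the summed filtered sources are scored against the target-channel mixture with negative SI-SDR. The systems discussed here actually operate on STFT sub-bands (e.g., via forward convolutive prediction), all signals are assumed to share the same length, and the function names are ours, so this is only an illustration of the idea.

```python
import numpy as np

def _conv_matrix(sig, filt_len):
    """Columns are delayed copies of sig, so that A @ h == np.convolve(sig, h)[:len(sig)]."""
    n = len(sig)
    A = np.zeros((n, filt_len))
    for tau in range(filt_len):
        A[tau:, tau] = sig[: n - tau]
    return A

def ras_style_loss(est_sources, target_mix, filt_len=32):
    """Simplified RAS loss: jointly fit one short FIR filter per separated source
    (a stand-in for relative-RIR estimation), sum the filtered sources, and return
    the negative SI-SDR of that sum with respect to the target-channel mixture."""
    A = np.hstack([_conv_matrix(s, filt_len) for s in est_sources])
    h, *_ = np.linalg.lstsq(A, target_mix, rcond=None)
    recon = A @ h
    # scale-invariant SDR of the reconstruction against the observed mixture
    alpha = np.dot(recon, target_mix) / (np.dot(target_mix, target_mix) + 1e-8)
    proj = alpha * target_mix
    si_sdr = 10.0 * np.log10(np.sum(proj ** 2) / (np.sum((recon - proj) ** 2) + 1e-8))
    return -si_sdr
```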
In the original RAS <|cite_start|> (Reference: Reverberation as Supervision for Speech Separation: This paper proposes reverberation as supervision (RAS), a novel unsupervised loss function for single-channel reverberant speech separation. Prior methods for unsupervised separation required the synthesis of mixtures of mixtures or assumed the existence of a teacher model, making them difficult to consider as potential methods explaining the emergence of separation abilities in an animal's auditory system. We assume the availability of two-channel mixtures at training time, and train a neural network to separate the sources given one of the channels as input such that the other channel may be predicted from the separated sources. As the relationship between the room impulse responses (RIRs) of each channel depends on the locations of the sources, which are unknown to the network, the network cannot rely on learning that relationship. Instead, our proposed loss function fits each of the separated sources to the mixture in the target channel via Wiener filtering, and compares the resulting mixture to the ground-truth one. We show that minimizing the scale-invariant signal-to-distortion ratio (SI-SDR) of the predicted right-channel mixture with respect to the ground truth implicitly guides the network towards separating the left-channel sources. On a semi-supervised reverberant speech separation task based on the WHAMR! dataset, using training data where just 5% (resp., 10%) of the mixtures are labeled with associated isolated sources, we achieve 70% (resp., 78%) of the SI-SDR improvement obtained when training with supervision on the full training set, while a model trained only on the labeled data obtains 43% (resp., 45%).) <|cite_end|>, however, unsupervised learning of a two-speaker separation model using two-speaker mixtures failed, which implies that there are undesirable solutions where the model outputs signals that are not well separated but from which it is easy to recover the mixture.
Unsupervised neural speech separation leveraging over-determined mixtures (UNSSOR) has shown that we can avoid such undesirable solutions when we have more channels than sources <|cite_start|> (Reference: UNSSOR: Unsupervised Neural Speech Separation by Leveraging Over-determined Training Mixtures: In reverberant conditions with multiple concurrent speakers, each microphone acquires a mixture signal of multiple speakers at a different location. In over-determined conditions where the microphones out-number speakers, we can narrow down the solutions to speaker images and realize unsupervised speech separation by leveraging each mixture signal as a constraint (i.e., the estimated speaker images at a microphone should add up to the mixture). Equipped with this insight, we propose UNSSOR, an algorithm for $\textbf{u}$nsupervised $\textbf{n}$eural $\textbf{s}$peech $\textbf{s}$eparation by leveraging $\textbf{o}$ver-determined training mixtu$\textbf{r}$es. At each training step, we feed an input mixture to a deep neural network (DNN) to produce an intermediate estimate for each speaker, linearly filter the estimates, and optimize a loss so that, at each microphone, the filtered estimates of all the speakers can add up to the mixture to satisfy the above constraint. We show that this loss can promote unsupervised separation of speakers. The linear filters are computed in each sub-band based on the mixture and DNN estimates through the forward convolutive prediction (FCP) algorithm. To address the frequency permutation problem incurred by using sub-band FCP, a loss term based on minimizing intra-source magnitude scattering is proposed. Although UNSSOR requires over-determined training mixtures, we can train DNNs to achieve under-determined separation (e.g., unsupervised monaural speech separation). Evaluation results on two-speaker separation in reverberant conditions show the effectiveness and potential of UNSSOR.) <|cite_end|>.
Intuitively, more constraints are imposed on the model outputs by using more microphones, because the model has to estimate signals from which the mixtures at all the microphones can be reconstructed.
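The toy snippet below illustrates this intuition only: every additional microphone contributes one more mixture that the source estimates must be able to reconstruct. For simplicity it fits a single scalar gain per source at each microphone, whereas UNSSOR fits per-source sub-band filters; the function name is ours.

```python
import numpy as np

def multi_mic_reconstruction_loss(est_sources, mixtures):
    """Each microphone's mixture acts as one constraint: the separated sources,
    after per-source gains fitted by least squares, must add up to that mixture.
    More microphones therefore impose more constraints on the estimates."""
    S = np.stack(est_sources, axis=1)            # (num_samples, num_sources)
    loss = 0.0
    for y in mixtures:                           # one time-domain mixture per microphone
        g, *_ = np.linalg.lstsq(S, y, rcond=None)
        loss += np.mean((S @ g - y) ** 2)
    return loss
```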
Still, fully-unsupervised RAS training in the determined condition remains unsolved.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\linewidth]{figures/ras_overview.pdf}}
\vspace{-1.0mm}
\caption{
Overview of ERAS training.
Separated signals at the left (L) or right (R) channel are mapped to the opposite channel by relative RIR estimation, and the model is trained to reconstruct mixtures as the sum of the mapped sources (RAS loss).
ERAS improves training stability by strongly penalizing undesirable solutions with the ISMS loss and boosts performance by introducing an inter-channel consistency (ICC) loss that encourages sources mapped to the same channel to be closer.
}
\label{fig:overview}
\vspace{-4mm}
\end{figure}
In this paper, we tackle unsupervised RAS training in the determined setup, particularly training a monaural two-speaker separation model using two-channel mixtures.
We have discovered that sources which are not well separated but are frequency-permuted result in an undesirable solution with respect to the RAS loss, but such a solution can be avoided by leveraging the loss to alleviate the frequency-permutation problem <|cite_start|> (Reference: UNSSOR: Unsupervised Neural Speech Separation by Leveraging Over-determined Training Mixtures: In reverberant conditions with multiple concurrent speakers, each microphone acquires a mixture signal of multiple speakers at a different location. In over-determined conditions where the microphones out-number speakers, we can narrow down the solutions to speaker images and realize unsupervised speech separation by leveraging each mixture signal as a constraint (i.e., the estimated speaker images at a microphone should add up to the mixture). Equipped with this insight, we propose UNSSOR, an algorithm for $\textbf{u}$nsupervised $\textbf{n}$eural $\textbf{s}$peech $\textbf{s}$eparation by leveraging $\textbf{o}$ver-determined training mixtu$\textbf{r}$es. At each training step, we feed an input mixture to a deep neural network (DNN) to produce an intermediate estimate for each speaker, linearly filter the estimates, and optimize a loss so that, at each microphone, the filtered estimates of all the speakers can add up to the mixture to satisfy the above constraint. We show that this loss can promote unsupervised separation of speakers. The linear filters are computed in each sub-band based on the mixture and DNN estimates through the forward convolutive prediction (FCP) algorithm. To address the frequency permutation problem incurred by using sub-band FCP, a loss term based on minimizing intra-source magnitude scattering is proposed. Although UNSSOR requires over-determined training mixtures, we can train DNNs to achieve under-determined separation (e.g., unsupervised monaural speech separation). Evaluation results on two-speaker separation in reverberant conditions show the effectiveness and potential of UNSSOR.) <|cite_end|>.
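The sketch below shows one plausible way to quantify such frequency permutation: it measures how widely each output's log-magnitudes scatter across frequency within a frame, in the spirit of an intra-source magnitude scattering penalty. This is an illustrative assumption on our part, not necessarily the exact loss defined in the cited UNSSOR paper, and it assumes signals at least one frame long.

```python
import numpy as np

def magnitude_scattering_penalty(est_sources, frame_len=512, hop=256, eps=1e-8):
    """Illustrative intra-source magnitude scattering penalty: for each separated
    source and each STFT frame, compute the variance of the log-magnitudes across
    frequency and average it. Outputs whose frequency bands come from different
    speakers tend to scatter more than coherent, well-separated sources."""
    window = np.hanning(frame_len)
    penalties = []
    for s in est_sources:
        frames = [s[i:i + frame_len] * window
                  for i in range(0, len(s) - frame_len + 1, hop)]
        mags = np.abs(np.fft.rfft(np.stack(frames), axis=-1))   # (num_frames, num_freqs)
        penalties.append(np.var(np.log(mags + eps), axis=-1).mean())
    return float(np.mean(penalties))
```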
To boost performance, we introduce an inter-channel consistency loss in which separated signals mapped back to their input mixture are used as pseudo-targets for those separated from other channels and mapped to the same channel.
Our observation is that the former signals have higher quality and thus improve the quality of the latter.
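A loose sketch of that inter-channel consistency idea follows: the signals separated from a channel and mapped back to that same channel are treated as fixed pseudo-targets for the signals that came from the other channel and were mapped to the same place. The source pairing and the simple MSE used here are illustrative assumptions; the exact formulation in this paper may differ.

```python
import numpy as np

def inter_channel_consistency_loss(same_channel_est, cross_channel_est):
    """Illustrative inter-channel consistency (ICC) loss.
    same_channel_est: sources separated from channel c and mapped back to c,
    used as pseudo-targets (treated as constants, i.e. no gradient through them).
    cross_channel_est: sources separated from the other channel and mapped to c.
    Both are assumed to be (num_sources, num_samples) arrays in matching order."""
    pseudo_targets = np.asarray(same_channel_est)
    estimates = np.asarray(cross_channel_est)
    return float(np.mean((estimates - pseudo_targets) ** 2))
```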
By further introducing an effective two-stage training strategy, the proposed method, called enhanced RAS (ERAS), achieves both stable training and high separation performance.
\vspace{-.1cm} <|paper_end|> | [
"<|reference_start|> Far-field automatic speech recognition: The machine recognition of speech spoken at a distance from the microphones, known as far-field automatic speech recognition (ASR), has received a significant increase in attention in science and industry, which caused or was caused by an equally significant improvement in recognition accuracy. Meanwhile, it has entered the consumer market with digital home assistants with a spoken language interface being its most prominent application. Speech recorded at a distance is affected by various acoustic distortions, and consequently, quite different processing pipelines have emerged compared with ASR for close-talk speech. A signal enhancement front end for dereverberation, source separation, and acoustic beamforming is employed to clean up the speech, and the back-end ASR engine is robustified by multicondition training and adaptation. We will also describe the so-called end-to-end approach to ASR, which is a new promising architecture that has recently been extended to the far-field scenario. This tutorial article gives an account of the algorithms used to enable accurate speech recognition from a distance, and it will be seen that, although deep learning has a significant share in the technological breakthroughs, a clever combination with traditional signal processing can lead to surprisingly effective solutions. <|reference_end|>",
"<|reference_start|> Training Data Generation with DOA-based Selecting and Remixing for Unsupervised Training of Deep Separation Models: <|reference_end|>",
"<|reference_start|> Neural full-rank spatial covariance analysis for blind source separation: This paper describes aneural blind source separation (BSS) method based on amortized variational inference (AVI) of a non-linear generative model of mixture signals. A classical statistical approach to BSS is to fit a linear generative model that consists of spatial and source models representing the inter-channel covariances and power spectral densities of sources, respectively. Although the variational autoencoder (VAE) has successfully been used as a non-linear source model with latent features, it should be pretrained from a sufficient amount of isolated signals. Our method, in contrast, enables the VAE-based source model to be trained only from mixture signals. Specifically, we introduce a neural mixture-to-feature inference model that directly infers the latent features from the observed mixture and integrate it with a neural feature-to-mixture generative model consisting of a full-rank spatial model and a VAE-based source model. All the models are optimized jointly such that the likelihood for the training mixtures is maximized in the framework of AVI. Once the inference model is optimized, it can be used for estimating the latent features of sources included in unseen mixture signals. The experimental results show that the proposed method outperformed the state-of-the-art BSS methods based on linear generative models and was comparable to a method based on supervised learning of the VAE-based sourcemodel. <|reference_end|>",
"<|reference_start|> Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures: Separating an audio scene into isolated sources is a fundamental problem in computer audition, analogous to image segmentation in visual scene analysis. Source separation systems based on deep learning are currently the most successful approaches for solving the underdetermined separation problem, where there are more sources than channels. Traditionally, such systems are trained on sound mixtures where the ground truth decomposition is already known. Since most real-world recordings do not have such a decomposition available, this limits the range of mixtures one can train on, and the range of mixtures the learned models may successfully separate. In this work, we use a simple blind spatial source separation algorithm to generate estimated decompositions of stereo mixtures. These estimates, together with a weighting scheme in the time-frequency domain, based on confidence in the separation quality, are used to train a deep learning model that can be used for single-channel separation, where no source direction information is available. This demonstrates how a simple cue such as the direction of origin of source can be used to bootstrap a model for source separation that can be used in situations where that cue is not available. <|reference_end|>"
] | [
1,
14,
16,
23
] | {"<|multi_cite_1_1|>": "ss-1516760", "<|multi_cite_1_2|>": "ss-1530287", "<|multi_cite_2_1|>": "arxiv-252715", "<|multi_cite_2_2|>": "arxiv-301349", "<|cite_3|>": "arxiv-82662", "<|multi_cite_4_1|>": "arxiv-82662", "<|multi_cite_4_2|>": "arxiv-101316", "<|multi_cite_5_1|>": "arxiv-173344", "<|multi_cite_5_2|>": "ss-1846583", "<|cite_6|>": "arxiv-377122", "<|multi_cite_7_1|>": "arxiv-197843", "<|multi_cite_7_2|>": "arxiv-233497", "<|multi_cite_7_3|>": "arxiv-408540", "<|multi_cite_7_4|>": "ss-2445119", "<|multi_cite_7_5|>": "ss-2445120", "<|multi_cite_7_6|>": "ss-1473229", "<|multi_cite_7_7|>": "ss-1518214", "<|multi_cite_7_8|>": "arxiv-516607", "<|cite_8|>": "arxiv-273791", "<|multi_cite_9_1|>": "arxiv-399917", "<|multi_cite_9_2|>": "arxiv-463154", "<|multi_cite_9_3|>": "arxiv-535865", "<|multi_cite_10_1|>": "arxiv-178964", "<|multi_cite_10_2|>": "arxiv-179164", "<|cite_11|>": "arxiv-462333", "<|cite_12|>": "arxiv-462333", "<|cite_13|>": "arxiv-511296", "<|cite_14|>": "arxiv-511296"} |
1101.0562 | <|paper_start|> Title: Buffer Sizing for 802.11 Based Networks
Abstract: Buffer Sizing for 802.11 Based Networks: We consider the sizing of network buffers in 802.11 based networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed size buffers in 802.11 networks inevitably leads to either undesirable channel under-utilization or unnecessarily high delays. We present two novel dynamic buffer sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a production WLAN and a lab testbed.
Introduction
\label{sec_intr}
In communication networks, buffers are used to accommodate short-term packet bursts so as
to mitigate packet drops and to maintain high link efficiency. Packets are queued if too
many packets arrive in a sufficiently short interval of time during which a network
device lacks the capacity to process all of them immediately.
For wired routers, the sizing of buffers is an active research topic
( <|cite_start|> (Reference: High Performance TCP in ANSNET: This report concentrates on specific requirements and goals of the research networks supported by ANSNET, but applies to any TCP dominated high speed WAN and in particular those striving to support high speed end-to-end flows. Measurements have been made under conditions intended to better understand performance barriers imposed by network equipment queueing capacities and queue drop strategies.The IBM RS/6000 based routers currently supporting ANSNET performed very well in these tests. Measurements have been made with the current software and performance enhanced software. Single TCP flows are able to achieve 40 Mb/s and competing multiple TCP flows achieve over 41 Mb/s link utilization on 44.7 Mb/s DS3 links with delays comparable to US cross continent ANSNET delays. Congestion collapse is demonstrated with intentionally reduced queueing capacity and using window sizes much larger than optimal.A variation of Floyd and Jacobson's Random Early Detection (RED) algorithm [1] is tested. Performance improved with the use of RED for tests involving multiple flows. With RED and queueing capacity at or above the delay bandwidth product, congestion collapse is avoided, allowing the maximum window size to safely be set arbitrarily high.Queueing capacity greater than or equal to the delay bandwidth product and RED are recommended. RED provides performance improvement in all but the single flow case, but cannot substitute for adequate queueing capacity, particularly if high speed flows are to be supported.) <|cite_end|> <|cite_start|> (Reference: Sizing Router Buffers: All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = overlineRTT x C, where overlineRTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms x 10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = (overlineRTT x C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (overlineRTT x C) √n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.) <|cite_end|> <|cite_start|> (Reference: Adaptive Tuning of Drop-Tail Buffers for Reducing Queueing Delays: Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. 
Such buffers can lead to large queuing delays. We propose a simple algorithm, active drop-tail (ADT), which regulates the queue size, based on prevailing traffic conditions, to a minimum size that still allows for a desired (high) level of utilization. Packet level ns2 simulations are provided to show that adaptive drop-tail achieves significantly smaller queues than current approaches at the expense of 1-2% of the link utilization.) <|cite_end|> <|cite_start|> (Reference: A Critique of Recently Proposed Buffer-Sizing Strategies: Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can lead to large queueing delays. Recently, several papers have suggested that it may, under general circumstances, be possible to achieve high utilisation with small network buffers. In this paper we review these recommendations. A number of issues are reported that question the utility of these recommendations.) <|cite_end|> <|cite_start|> (Reference: Open Issues in Router Buffer Sizing: Recent research results suggest that the buffers of router interfaces can be made very small, much less than the link's and width-delay product, without causing a utilization loss, as long as the link carries many TCP flows. In this letter we raise some concerns about the previous recommendation. We show that the use of such small buffers can lead to excessively high loss rates (up to 5%-15%in our simulations) in congested access links that carry many flows. Even if the link is fully utilized, small buffers lead to lower throughput for most large TCP flows, and significant variability in the per-flow throughput and transfer latency. We also discuss some important issues in router buffer sizing that are often ignored) <|cite_end|>). The classical rule of thumb for
sizing wired buffers is to set buffer sizes to be the product of the \emph{bandwidth} and the average \emph{delay} of the flows utilizing this link, namely
the {\em Bandwidth-Delay Product} (BDP) rule <|cite_start|> (Reference: High Performance TCP in ANSNET: This report concentrates on specific requirements and goals of the research networks supported by ANSNET, but applies to any TCP dominated high speed WAN and in particular those striving to support high speed end-to-end flows. Measurements have been made under conditions intended to better understand performance barriers imposed by network equipment queueing capacities and queue drop strategies.The IBM RS/6000 based routers currently supporting ANSNET performed very well in these tests. Measurements have been made with the current software and performance enhanced software. Single TCP flows are able to achieve 40 Mb/s and competing multiple TCP flows achieve over 41 Mb/s link utilization on 44.7 Mb/s DS3 links with delays comparable to US cross continent ANSNET delays. Congestion collapse is demonstrated with intentionally reduced queueing capacity and using window sizes much larger than optimal.A variation of Floyd and Jacobson's Random Early Detection (RED) algorithm [1] is tested. Performance improved with the use of RED for tests involving multiple flows. With RED and queueing capacity at or above the delay bandwidth product, congestion collapse is avoided, allowing the maximum window size to safely be set arbitrarily high.Queueing capacity greater than or equal to the delay bandwidth product and RED are recommended. RED provides performance improvement in all but the single flow case, but cannot substitute for adequate queueing capacity, particularly if high speed flows are to be supported.) <|cite_end|>. See Section
\ref{sec_related_work} for discussion of other related work.
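As a quick numeric illustration of the BDP rule just stated (the link rate and round-trip time below are made-up example values, not figures taken from this paper):

```python
def bdp_buffer_bytes(link_rate_bps, avg_rtt_s):
    """Bandwidth-Delay Product rule of thumb: buffer = link rate x average round-trip delay."""
    return link_rate_bps * avg_rtt_s / 8.0  # bits -> bytes

# e.g. a 54 Mb/s link with a 200 ms average round-trip time
buf = bdp_buffer_bytes(54e6, 0.200)
print(buf, buf / 1500)  # 1,350,000 bytes, i.e. 900 full-size 1500-byte packets
```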
Surprisingly, however, the sizing of buffers in wireless networks (especially those based
on 802.11/802.11e) appears to have received very little attention within the networking
community. Exceptions include the recent work in <|cite_start|> (Reference: On Buffer Sizing for Voice in 802.11 WLANs: The use of 802.11 to transport delay sensitive traffic is becoming increasingly common. This raises the question of the tradeoff between buffering delay and loss in 802.11 networks. We find that there exists a sharp transition from the low-loss, low-delay regime to high-loss, high-delay operation. Given modest buffering at the access point, this transition determines the voice capacity of a WLAN and its location is largely insensitive to the buffer size used.) <|cite_end|>
relating to buffer sizing for voice traffic in 802.11e <|cite_start|> (Reference: Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: ) <|cite_end|> WLANs, work in <|cite_start|> (Reference: Understanding TCP fairness over Wireless LAN: As the use of mobile devices increase with lowering cost, miniaturization and proliferation of internet and coming of internet access technologies and their commercialization we are going to see and increasing amount of internet traffic coming and going through wireless networks. Therefore the study of the effects of peculiarities of wireless network on fairness of bandwidth allocation is gaining momentum and has some important contributions towards imp rovement of such services on mobile devices In this report we tackle the issue of bandwidth sharing in a Wireless LAN situation with many upstream and downstream flows of TCP data traffic. We analyze the present implementation of medium access standard 802.11, and identify causes and effects of unfairness in its working while dealing with TCP flow. Then we discuss some proposed solutions to imp rove upon or rectify or alleviate from the situation and see their effectiveness. Lastly we view the entire results through numerous simulations and test results.) <|cite_end|> which considers the impact of buffer sizing on TCP
upload/download fairness, and work in <|cite_start|> (Reference: Impact of 802.11e EDCA on mixed TCP-based applications: There has been an explosive growth in the use of wireless LANs (WLANs) to support network applications ranging from web-browsing and file-sharing to voice calls. It is difficult to optimally configure WLAN components, such as access points (APs), to meet the quality-of-service requirements of the different applications, as well as ensuring flow-level fairness. Recent work has shown that the widely-deployed IEEE 802.11 MAC Distributed Coordination Function (DCF) is biased against downstream flows. The new IEEE 802.11e standard introduces QoS mechanisms, such as Enhanced Distributed Channel Access (EDCA), that allow this unfairness to be addressed. So far, only limited work has been done to evaluate the impact of these MAC protocols on TCP-based applications. In this paper, through ns-2 simulations, we evaluate the impact of EDCA on TCP application traffic consisting of both long and short-lived TCP flows. We find that the performance of TCP applications is very dependent upon the settings of the EDCA parameters and buffer lengths at the AP. We also show that the performance of the admission control strategy employed depends on the buffer lengths at the AP and the traffic intensity.) <|cite_end|> which is related to
Related Work
\label{sec_related_work}
The classical approach to sizing Internet router buffers is the BDP rule proposed in <|cite_start|> (Reference: High Performance TCP in ANSNET: This report concentrates on specific requirements and goals of the research networks supported by ANSNET, but applies to any TCP dominated high speed WAN and in particular those striving to support high speed end-to-end flows. Measurements have been made under conditions intended to better understand performance barriers imposed by network equipment queueing capacities and queue drop strategies.The IBM RS/6000 based routers currently supporting ANSNET performed very well in these tests. Measurements have been made with the current software and performance enhanced software. Single TCP flows are able to achieve 40 Mb/s and competing multiple TCP flows achieve over 41 Mb/s link utilization on 44.7 Mb/s DS3 links with delays comparable to US cross continent ANSNET delays. Congestion collapse is demonstrated with intentionally reduced queueing capacity and using window sizes much larger than optimal.A variation of Floyd and Jacobson's Random Early Detection (RED) algorithm [1] is tested. Performance improved with the use of RED for tests involving multiple flows. With RED and queueing capacity at or above the delay bandwidth product, congestion collapse is avoided, allowing the maximum window size to safely be set arbitrarily high.Queueing capacity greater than or equal to the delay bandwidth product and RED are recommended. RED provides performance improvement in all but the single flow case, but cannot substitute for adequate queueing capacity, particularly if high speed flows are to be supported.) <|cite_end|>. Recently, in <|cite_start|> (Reference: Sizing Router Buffers: All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = overlineRTT x C, where overlineRTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms x 10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = (overlineRTT x C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (overlineRTT x C) √n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.) <|cite_end|> it is argued
that the BDP rule may be overly conservative on links shared by a large number of flows.
In this case it is unlikely that TCP congestion window sizes (cwnd) evolve synchronously
and due to statistical multiplexing of cwnd backoff, the combined buffer requirement can
be considerably less than the BDP. The analysis in <|cite_start|> (Reference: Sizing Router Buffers: All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = overlineRTT x C, where overlineRTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms x 10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = (overlineRTT x C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (overlineRTT x C) √n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.) <|cite_end|>
suggests that it may be sufficient to size buffers as $BDP/\sqrt{n}$. This work is
extended in <|cite_start|> (Reference: Buffer sizes for large multiplexers: TCP queueing theory and instability analysis: In large multiplexers with many TCP flows, the aggregate traffic flow behaves predictably; this is a basis for the fluid model of Misra, Gong and Towsley V. Misra et al., (2000) and for a growing literature on fluid models of congestion control. In this paper we argue that different fluid models arise from different buffer-sizing regimes. We consider the large buffer regime (buffer size is bandwidth-delay product), an intermediate regime (divide the large buffer size by the square root of the number of flows), and the small buffer regime (buffer size does not depend on number of flows). Our arguments use various techniques from queueing theory. We study the behaviour of these fluid models (on a single bottleneck Kink, for a collection of identical long-lived flows). For what parameter regimes is the fluid model stable, and when it is unstable what is the size of oscillations and the impact on goodput? Our analysis uses an extension of the Poincare-Linstedt method to delay-differential equations. We find that large buffers with drop-tail have much the same performance as intermediate buffers with either drop-tail or AQM; that large buffers with RED are better at least for window sizes less than 20 packets; and that small buffers with either drop-tail or AQM are best over a wide range of window sizes, though the buffer size must be chosen carefully. This suggests that buffer sizes should be much much smaller than is currently recommended.) <|cite_end|>, <|cite_start|> (Reference: Routers with Very Small Buffers: Internet routers require buffers to hold pack- ets during times of congestion. The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule-of-thumb says we need a bandwidth-delay product of buffering at each router so as not to lose link utilization. This can be prohibitively large. In a recent paper, Appenzeller et al. challenged this rule-of-thumb and showed that for a backbone network, the buffer size can be divided by √ N without sacrificing throughput, where N is the number of flows sharing the bottleneck. In this paper, we explore how buffers in the backbone can be significantly reduced even more, to as little as a few dozen packets, if we are willing to sacrifice a small amount of link capacity. We argue that if the TCP sources are not overly bursty, then fewer than twenty packet buffers are sufficient for high throughput. Specifically, we argue that O(log W ) buffers are sufficient, where W is the window size of each flow. We support our claim with analysis and a variety of simulations. The change we need to make to TCP is minimal—each sender just needs to pace packet injections from its window. Moreover, there is some evidence that such small buffers are sufficient even if we don't modify the TCP sources so long as the access network is much slower than the backbone, which is true today and likely to remain true in the future. We conclude that buffers can be made small enough for all-optical routers with small integrated optical buffers.) <|cite_end|> to consider the performance of TCP congestion control with many
connections under the assumption of small, medium and large buffer sizes. Several authors
have pointed out that the value $n$ can be difficult to determine for realistic traffic
patterns, which not only include a mix of connection sizes and RTTs, but can also be
strongly time-varying <|cite_start|> (Reference: Open Issues in Router Buffer Sizing: Recent research results suggest that the buffers of router interfaces can be made very small, much less than the link's and width-delay product, without causing a utilization loss, as long as the link carries many TCP flows. In this letter we raise some concerns about the previous recommendation. We show that the use of such small buffers can lead to excessively high loss rates (up to 5%-15%in our simulations) in congested access links that carry many flows. Even if the link is fully utilized, small buffers lead to lower throughput for most large TCP flows, and significant variability in the per-flow throughput and transfer latency. We also discuss some important issues in router buffer sizing that are often ignored) <|cite_end|>, <|cite_start|> (Reference: A Critique of Recently Proposed Buffer-Sizing Strategies: Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can lead to large queueing delays. Recently, several papers have suggested that it may, under general circumstances, be possible to achieve high utilisation with small network buffers. In this paper we review these recommendations. A number of issues are reported that question the utility of these recommendations.) <|cite_end|>. In <|cite_start|> (Reference: A Critique of Recently Proposed Buffer-Sizing Strategies: Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can lead to large queueing delays. Recently, several papers have suggested that it may, under general circumstances, be possible to achieve high utilisation with small network buffers. In this paper we review these recommendations. A number of issues are reported that question the utility of these recommendations.) <|cite_end|>, it is observed from measurements on a production link that
traffic patterns vary significantly over time, and may contain a complex mix of flow
connection lengths and RTTs. It is demonstrated in <|cite_start|> (Reference: Open Issues in Router Buffer Sizing: Recent research results suggest that the buffers of router interfaces can be made very small, much less than the link's and width-delay product, without causing a utilization loss, as long as the link carries many TCP flows. In this letter we raise some concerns about the previous recommendation. We show that the use of such small buffers can lead to excessively high loss rates (up to 5%-15%in our simulations) in congested access links that carry many flows. Even if the link is fully utilized, small buffers lead to lower throughput for most large TCP flows, and significant variability in the per-flow throughput and transfer latency. We also discuss some important issues in router buffer sizing that are often ignored) <|cite_end|> <|cite_start|> (Reference: A Critique of Recently Proposed Buffer-Sizing Strategies: Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can lead to large queueing delays. Recently, several papers have suggested that it may, under general circumstances, be possible to achieve high utilisation with small network buffers. In this paper we review these recommendations. A number of issues are reported that question the utility of these recommendations.) <|cite_end|> that the use of very small buffers
can lead to an excessive loss rate. Motivated by these observations, in <|cite_start|> (Reference: Adaptive Tuning of Drop-Tail Buffers for Reducing Queueing Delays: Internet router buffers are used to accommodate packets that arrive in bursts and to maintain high utilization of the egress link. Such buffers can lead to large queuing delays. We propose a simple algorithm, active drop-tail (ADT), which regulates the queue size, based on prevailing traffic conditions, to a minimum size that still allows for a desired (high) level of utilization. Packet level ns2 simulations are provided to show that adaptive drop-tail achieves significantly smaller queues than current approaches at the expense of 1-2% of the link utilization.) <|cite_end|> <|cite_start|> (Reference: Sizing Internet Router Buffers, Active Queue Management, and the Lur'e Problem: In this paper we consider the design of control strategies for implementation in a recently proposed active queue management (AQM) scheme, active drop-tail (ADT) (Stanojevic et al., 2006), The basic idea underlying ADT is to adjust the queue length of a drop-tail buffer to regulate the utilization of a link carrying Internet traffic in order to reduce queuing delays in the network. A basic problem in the design of ADT is to design appropriate strategies to regulate the target utilization using the buffer queue as a control input. This problem is challenging due to the stochastic and time-varying nature of communication networks. Our contribution in this paper is to relate the design of control strategies for this AQM to the classical Lur'e problem. Our formulation naturally accounts for the time-variations and randomness inherent in communication networks, and enables us to design AQMs with guaranteed convergence (under mild and realistic assumptions). Packet level simulations are given to demonstrate the efficacy of our design methodology) <|cite_end|> a measurement-based adaptive buffer size
tuning method is proposed. However, this approach is not applicable to WLANs since it
requires a priori knowledge of the link capacity or line rate, which in WLANs is
time-varying and load dependent. <|cite_start|> (Reference: ABS: Adaptive Buffer Sizing for Heterogeneous Networks: ) <|cite_end|> introduces another adaptive buffer sizing algorithm based on control theory for Internet core routers. <|cite_start|> (Reference: Impact of File Arrivals and Departures on Buffer Sizing in Core Routers: Traditionally, it had been assumed that the efficiency requirements of TCP dictate that the buffer size at the router must be of the order of the bandwidth-delay (C × RTT) product. Recently, this assumption was questioned in a number of papers, and the rule was shown to be conservative for certain traffic models. In particular, by appealing to statistical multiplexing, it was shown that on a router with N long-lived connections, buffers of size O([(C × RTT)/(√N)]) or even O(1) are sufficient. In this paper, we reexamine the buffer-size requirements of core routers when flows arrive and depart. Our conclusion is as follows: If the core-to-access-speed ratio is large, then O(1) buffers are sufficient at the core routers; otherwise, larger buffer sizes do improve the flow-level performance of the users. From a modeling point of view, our analysis offers two new insights. First, it may not be appropriate to derive buffer-sizing rules by studying a network with a fixed number of users. In fact, depending upon the core-to-access-speed ratio, the buffer size itself may affect the number of flows in the system, so these two parameters (buffer size and number of flows in the system) should not be treated as independent quantities. Second, in the regime where the core-to-access-speed ratio is large, we note that the O(1) buffer sizes are sufficient for good performance and that no loss of utilization results, as previously believed.) <|cite_end|> consider the role of the output/input capacity ratio at a network link in determining the required buffer size. <|cite_start|> (Reference: Experimental Study of Router Buffer Sizing: During the past four years, several papers have proposed rules for sizing buffers in Internet core routers. Appenzeller et al. suggest that a link needs a buffer of size O(C/√N), where C is the capacity of the link, and N is the number of flows sharing the link. If correct, buffers could be reduced by 99% in a typical backbone router today without loss in throughput. Enachecsu et al., and Raina et al. suggest that buffers can be reduced even further to 20-50 packets if we are willing to sacrifice a fraction of link capacities, and if there is a large ratio between the speed of core and access links. If correct, this is a five orders of magnitude reduction in buffer sizes. Each proposal is based on theoretical analysis and validated using simulations. Given the potential benefits (and the risk of getting it wrong!) it is worth asking if these results hold in real operational networks. In this paper, we report buffer-sizing experiments performed on real networks - either laboratory networks with commercial routers as well as customized switching and monitoring equipment (UW Madison, Sprint ATL, and University of Toronto), or operational backbone networks (Level 3 Communications backbone network, Internet2, and Stanford). The good news: Subject to the limited scenarios we can create, the buffer sizing results appear to hold. While we are confident that the O(C/√N) will hold quite generally for backbone routers, the 20-50 packet rule should be applied with extra caution to ensure that network components satisfy the underlying assumptions.) 
<|cite_end|> experimentally investigates the analytic results reported in <|cite_start|> (Reference: Sizing Router Buffers: All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = overlineRTT x C, where overlineRTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms x 10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = (overlineRTT x C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (overlineRTT x C) √n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.) <|cite_end|>, <|cite_start|> (Reference: Buffer sizes for large multiplexers: TCP queueing theory and instability analysis: In large multiplexers with many TCP flows, the aggregate traffic flow behaves predictably; this is a basis for the fluid model of Misra, Gong and Towsley V. Misra et al., (2000) and for a growing literature on fluid models of congestion control. In this paper we argue that different fluid models arise from different buffer-sizing regimes. We consider the large buffer regime (buffer size is bandwidth-delay product), an intermediate regime (divide the large buffer size by the square root of the number of flows), and the small buffer regime (buffer size does not depend on number of flows). Our arguments use various techniques from queueing theory. We study the behaviour of these fluid models (on a single bottleneck Kink, for a collection of identical long-lived flows). For what parameter regimes is the fluid model stable, and when it is unstable what is the size of oscillations and the impact on goodput? Our analysis uses an extension of the Poincare-Linstedt method to delay-differential equations. We find that large buffers with drop-tail have much the same performance as intermediate buffers with either drop-tail or AQM; that large buffers with RED are better at least for window sizes less than 20 packets; and that small buffers with either drop-tail or AQM are best over a wide range of window sizes, though the buffer size must be chosen carefully. This suggests that buffer sizes should be much much smaller than is currently recommended.) <|cite_end|>, <|cite_start|> (Reference: Routers with Very Small Buffers: Internet routers require buffers to hold pack- ets during times of congestion. 
The buffers need to be fast, and so ideally they should be small enough to use fast memory technologies such as SRAM or all-optical buffering. Unfortunately, a widely used rule-of-thumb says we need a bandwidth-delay product of buffering at each router so as not to lose link utilization. This can be prohibitively large. In a recent paper, Appenzeller et al. challenged this rule-of-thumb and showed that for a backbone network, the buffer size can be divided by √ N without sacrificing throughput, where N is the number of flows sharing the bottleneck. In this paper, we explore how buffers in the backbone can be significantly reduced even more, to as little as a few dozen packets, if we are willing to sacrifice a small amount of link capacity. We argue that if the TCP sources are not overly bursty, then fewer than twenty packet buffers are sufficient for high throughput. Specifically, we argue that O(log W ) buffers are sufficient, where W is the window size of each flow. We support our claim with analysis and a variety of simulations. The change we need to make to TCP is minimal—each sender just needs to pace packet injections from its window. Moreover, there is some evidence that such small buffers are sufficient even if we don't modify the TCP sources so long as the access network is much slower than the backbone, which is true today and likely to remain true in the future. We conclude that buffers can be made small enough for all-optical routers with small integrated optical buffers.) <|cite_end|>. <|cite_start|> (Reference: Achieving 100\% Throughput in TCP/AQM Under Aggressive Packet Marking With Small Buffer: We consider a TCP/AQM system with large link capacity (NC) shared by many flows. The traditional rule-of-thumb suggests that the buffer size be chosen in proportion to the number of flows (N) for full link utilization, while recent research outcomes show that O(radic(N)) buffer sizing is sufficient for high utilization and O(1) buffer sizing makes the system stable at the cost of reduced link utilization. In this paper, we consider a system where the active queue management (AQM) is scaled as O(Nalpha) with a buffer of size O(Nbeta) (0 < alpha < beta < 0.5). By capturing randomness both in packet arrivals and in packet markings, we develop a doubly-stochastic model for a TCP/AQM system with many flows. We prove that, under such a scale, the system always performs well in the sense that the link utilization goes to 100% and the loss ratio decreases to zero as the system size JV increases. Our results assert that the system enjoys benefit of largeness with no tradeoff between full link utilization, zero packet loss, and small buffer size, at least asymptotically. This is in stark contrast to existing results showing that there always exists a tradeoff between full link utilization and the required buffer size. Extensive ns-2 simulation results under various configurations also confirm our theoretical findings. Our study illustrates that blind application of fluid modeling may result in strange results and exemplifies the importance of choosing a right modeling approach for different scaling regimes.) <|cite_end|> considers sizing buffers managed with active queue management techniques.
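To make the two sizing rules discussed above concrete, the short Python sketch below (our own illustration, not taken from any of the cited works) evaluates the classical BDP rule and the $BDP/\sqrt{n}$ rule for an example backbone link; the capacity, round-trip time and flow count are assumed values chosen purely for illustration.
\begin{verbatim}
# Illustrative comparison of the BDP rule and the BDP/sqrt(n) rule for buffer sizing.
# The link capacity, RTT and flow count below are assumed example values, not
# measurements taken from any of the cited papers.
import math

def bdp_bits(capacity_bps: float, rtt_s: float) -> float:
    """Classical rule: buffer = capacity x mean round-trip time."""
    return capacity_bps * rtt_s

def bdp_over_sqrt_n_bits(capacity_bps: float, rtt_s: float, n_flows: int) -> float:
    """Reduced rule for links multiplexing many desynchronized TCP flows."""
    return bdp_bits(capacity_bps, rtt_s) / math.sqrt(n_flows)

if __name__ == "__main__":
    capacity = 10e9      # 10 Gb/s backbone link (assumed)
    rtt = 0.25           # 250 ms average round-trip time (assumed)
    flows = 50_000       # number of long-lived TCP flows sharing the link (assumed)
    print(f"BDP rule:         {bdp_bits(capacity, rtt) / 1e9:.2f} Gbit")
    print(f"BDP/sqrt(n) rule: {bdp_over_sqrt_n_bits(capacity, rtt, flows) / 1e6:.1f} Mbit")
\end{verbatim}
For these example numbers the two rules differ by more than two orders of magnitude (roughly 2.5 Gbit versus about 11 Mbit), which is exactly the gap at the centre of the debate summarised above.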
The foregoing work is in the context of wired links, and to our knowledge the question of
buffer sizing for 802.11 wireless links has received almost no attention in the
literature. Exceptions include <|cite_start|> (Reference: On Buffer Sizing for Voice in 802.11 WLANs: The use of 802.11 to transport delay sensitive traffic is becoming increasingly common. This raises the question of the tradeoff between buffering delay and loss in 802.11 networks. We find that there exists a sharp transition from the low-loss, low-delay regime to high-loss, high-delay operation. Given modest buffering at the access point, this transition determines the voice capacity of a WLAN and its location is largely insensitive to the buffer size used.) <|cite_end|> <|cite_start|> (Reference: Understanding TCP fairness over Wireless LAN: As the use of mobile devices increase with lowering cost, miniaturization and proliferation of internet and coming of internet access technologies and their commercialization we are going to see and increasing amount of internet traffic coming and going through wireless networks. Therefore the study of the effects of peculiarities of wireless network on fairness of bandwidth allocation is gaining momentum and has some important contributions towards imp rovement of such services on mobile devices In this report we tackle the issue of bandwidth sharing in a Wireless LAN situation with many upstream and downstream flows of TCP data traffic. We analyze the present implementation of medium access standard 802.11, and identify causes and effects of unfairness in its working while dealing with TCP flow. Then we discuss some proposed solutions to imp rove upon or rectify or alleviate from the situation and see their effectiveness. Lastly we view the entire results through numerous simulations and test results.) <|cite_end|> <|cite_start|> (Reference: Impact of 802.11e EDCA on mixed TCP-based applications: There has been an explosive growth in the use of wireless LANs (WLANs) to support network applications ranging from web-browsing and file-sharing to voice calls. It is difficult to optimally configure WLAN components, such as access points (APs), to meet the quality-of-service requirements of the different applications, as well as ensuring flow-level fairness. Recent work has shown that the widely-deployed IEEE 802.11 MAC Distributed Coordination Function (DCF) is biased against downstream flows. The new IEEE 802.11e standard introduces QoS mechanisms, such as Enhanced Distributed Channel Access (EDCA), that allow this unfairness to be addressed. So far, only limited work has been done to evaluate the impact of these MAC protocols on TCP-based applications. In this paper, through ns-2 simulations, we evaluate the impact of EDCA on TCP application traffic consisting of both long and short-lived TCP flows. We find that the performance of TCP applications is very dependent upon the settings of the EDCA parameters and buffer lengths at the AP. We also show that the performance of the admission control strategy employed depends on the buffer lengths at the AP and the traffic intensity.) <|cite_end|>. Sizing of buffers for voice
traffic in WLANs is investigated in <|cite_start|> (Reference: On Buffer Sizing for Voice in 802.11 WLANs: The use of 802.11 to transport delay sensitive traffic is becoming increasingly common. This raises the question of the tradeoff between buffering delay and loss in 802.11 networks. We find that there exists a sharp transition from the low-loss, low-delay regime to high-loss, high-delay operation. Given modest buffering at the access point, this transition determines the voice capacity of a WLAN and its location is largely insensitive to the buffer size used.) <|cite_end|>. The impact of fixed
buffer sizes on TCP flows is studied in <|cite_start|> (Reference: Understanding TCP fairness over Wireless LAN: As the use of mobile devices increase with lowering cost, miniaturization and proliferation of internet and coming of internet access technologies and their commercialization we are going to see and increasing amount of internet traffic coming and going through wireless networks. Therefore the study of the effects of peculiarities of wireless network on fairness of bandwidth allocation is gaining momentum and has some important contributions towards imp rovement of such services on mobile devices In this report we tackle the issue of bandwidth sharing in a Wireless LAN situation with many upstream and downstream flows of TCP data traffic. We analyze the present implementation of medium access standard 802.11, and identify causes and effects of unfairness in its working while dealing with TCP flow. Then we discuss some proposed solutions to imp rove upon or rectify or alleviate from the situation and see their effectiveness. Lastly we view the entire results through numerous simulations and test results.) <|cite_end|>. In <|cite_start|> (Reference: Impact of 802.11e EDCA on mixed TCP-based applications: There has been an explosive growth in the use of wireless LANs (WLANs) to support network applications ranging from web-browsing and file-sharing to voice calls. It is difficult to optimally configure WLAN components, such as access points (APs), to meet the quality-of-service requirements of the different applications, as well as ensuring flow-level fairness. Recent work has shown that the widely-deployed IEEE 802.11 MAC Distributed Coordination Function (DCF) is biased against downstream flows. The new IEEE 802.11e standard introduces QoS mechanisms, such as Enhanced Distributed Channel Access (EDCA), that allow this unfairness to be addressed. So far, only limited work has been done to evaluate the impact of these MAC protocols on TCP-based applications. In this paper, through ns-2 simulations, we evaluate the impact of EDCA on TCP application traffic consisting of both long and short-lived TCP flows. We find that the performance of TCP applications is very dependent upon the settings of the EDCA parameters and buffer lengths at the AP. We also show that the performance of the admission control strategy employed depends on the buffer lengths at the AP and the traffic intensity.) <|cite_end|>, TCP performance with a variety of AP buffer sizes and 802.11e
parameter settings is investigated. In <|cite_start|> (Reference: Buffer Sizing for TCP Flows in 802.11e WLANs: We consider the task of sizing buffers for TCP flows in 802.11e WLANs. A number of fundamental new issues arise compared to wired networks. These include that the mean service rate is dependent on the level of channel contention and packet inter-service times vary stochastically due to the random nature of CSMA/CA operation. We find that these considerations lead naturally to a requirement for adaptation of buffer sizes in response to changing network conditions.) <|cite_end|> <|cite_start|> (Reference: Adaptive Buffer Sizing for TCP Flows in 802.11e WLANs: We consider the provision of access point buffers in WLANs. We first demonstrate that the default use of static buffers in WLANs leads to either undesirable channel under-utilisation or unnecessary high delays, which motivates the use of dynamic buffer sizing. Although adaptive algorithms have been proposed for wired Internet, a number of fundamental new issues arise in WLANs which necessitates new algorithms to be designed. These new issues include the fact that channel bandwidth is time-varying, the mean service rate is dependent on the level of channel contention, and packet inter-service times vary stochastically due to the random nature of CSMA/CA operation. We propose an adaptive sizing algorithms which is demonstrated to be able to maintain high throughput efficiency whilst achieving low delay.) <|cite_end|>, initial investigations are reported related to the eBDP
algorithm and the ALT algorithm of the A* algorithm. We substantially extend the previous
work in this paper with theoretical analysis, experimental implementations in both a testbed
and a production WLAN, and additional NS simulations. <|paper_end|> | [
"<|reference_start|> Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: <|reference_end|>",
"<|reference_start|> Open Issues in Router Buffer Sizing: Recent research results suggest that the buffers of router interfaces can be made very small, much less than the link's and width-delay product, without causing a utilization loss, as long as the link carries many TCP flows. In this letter we raise some concerns about the previous recommendation. We show that the use of such small buffers can lead to excessively high loss rates (up to 5%-15%in our simulations) in congested access links that carry many flows. Even if the link is fully utilized, small buffers lead to lower throughput for most large TCP flows, and significant variability in the per-flow throughput and transfer latency. We also discuss some important issues in router buffer sizing that are often ignored <|reference_end|>",
"<|reference_start|> Sizing Router Buffers: All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = overlineRTT x C, where overlineRTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms x 10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = (overlineRTT x C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with n flows requires no more than B = (overlineRTT x C) √n, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM. <|reference_end|>",
"<|reference_start|> Buffer Sizing for TCP Flows in 802.11e WLANs: We consider the task of sizing buffers for TCP flows in 802.11e WLANs. A number of fundamental new issues arise compared to wired networks. These include that the mean service rate is dependent on the level of channel contention and packet inter-service times vary stochastically due to the random nature of CSMA/CA operation. We find that these considerations lead naturally to a requirement for adaptation of buffer sizes in response to changing network conditions. <|reference_end|>"
] | [
7,
18,
25,
35
] | {"<|cite_1|>": "ss-1031132", "<|cite_2|>": "ss-1535025", "<|cite_3|>": "ss-1031133", "<|cite_4|>": "ss-1031134", "<|cite_5|>": "ss-1535042", "<|cite_6|>": "ss-1031132", "<|cite_7|>": "ss-1031135", "<|cite_8|>": "ss-842913", "<|cite_9|>": "ss-1031136", "<|cite_10|>": "ss-1031137", "<|cite_11|>": "ss-1031132", "<|cite_12|>": "ss-1535025", "<|cite_13|>": "ss-1535025", "<|cite_14|>": "ss-917568", "<|cite_15|>": "ss-1031138", "<|cite_17|>": "ss-1535042", "<|cite_18|>": "ss-1031134", "<|cite_19|>": "ss-1031134", "<|cite_20|>": "ss-1535042", "<|cite_21|>": "ss-1031134", "<|cite_22|>": "ss-1031133", "<|cite_23|>": "ss-1031139", "<|cite_24|>": "ss-1031140", "<|multi_cite_25_2|>": "ss-1031141", "<|cite_26|>": "ss-1031142", "<|cite_27|>": "ss-1535025", "<|cite_28|>": "ss-917568", "<|cite_29|>": "ss-1031138", "<|cite_31|>": "ss-1031143", "<|cite_32|>": "ss-1031135", "<|cite_33|>": "ss-1031136", "<|cite_34|>": "ss-1031137", "<|cite_35|>": "ss-1031135", "<|cite_36|>": "ss-1031136", "<|cite_37|>": "ss-1031137", "<|cite_38|>": "ss-1031144", "<|cite_39|>": "ss-1031145"} |
2208.09870 | <|paper_start|> Title: Objects Can Move: 3D Change Detection by Geometric Transformation Constistency
Abstract: Objects Can Move: 3D Change Detection by Geometric Transformation Consistency: AR/VR applications and robots need to know when the scene has changed. An example is when objects are moved, added, or removed from the scene. We propose a 3D object discovery method that is based only on scene changes. Our method does not need to encode any assumptions about what is an object, but rather discovers objects by exploiting their coherent motion. Changes are initially detected as differences in the depth maps and segmented as objects if they undergo rigid motions. A graph cut optimization propagates the change labels to geometrically consistent regions. Experiments show that our method achieves state-of-the-art performance on the 3RScan dataset against competitive baselines. The source code of our method can be found at https://github.com/katadam/ObjectsCanMove.
Introduction
The ability to detect and interact with objects is critical to AR/VR applications and to multiple robotics tasks, such as surveillance, robotic manipulation, and maintaining order. All these tasks operate repeatedly in the same setting. Thus, the robot or the AR/VR device stores a reference map and builds a new map upon each revisit. However, in between the revisits, certain objects may have changed. Checking for scene consistency and detecting changes on an object-level can thus lead to 3D object discovery, without the need for labeled data.
Motivated by the above, we explore an object discovery approach based on examining scene consistency on an object-level, without using annotated data. We aim at discovering entities (objects) that have changed when revisiting a place. We show that it is possible to detect 3D objects purely geometrically, without a predefined notion of objects. The underlying idea is that objects, unlike the static background of a scene, can be moved. This is an intuitive definition of ``objectness'' that does not need any annotated data.
Segmenting dynamic objects in temporal observations is a long-standing challenge. There are two ways to apply this idea: (1) segment objects from the background by actively observing their motion, e.g., by reconstructing dynamic objects during SLAM <|cite_start|> (Reference: {RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects: Although surface reconstruction from depth data has made significant advances in the recent years, handling changing environments remains a major challenge. This is unsatisfactory, as humans regularly move objects in their environments. Existing solutions focus on a restricted set of objects (e.g., those detected by semantic classifiers) possibly with template meshes, assume static camera, or mark objects touched by humans as moving. We remove these assumptions by introducing RigidFusion. Our core idea is a novel asynchronous moving‐object detection method, combined with a modified volumetric fusion. This is achieved by a model‐to‐frame TSDF decomposition leveraging free‐space carving of tracked depth values of the current frame with respect to the background model during run‐time. As output, we produce separate volumetric reconstructions for the background and each moving object in the scene, along with its trajectory over time. Our method does not rely on the object priors (e.g., semantic labels or pre‐scanned meshes) and is insensitive to the motion residuals between objects and the camera. In comparison to state‐of‐the‐art methods (e.g., Co‐Fusion, MaskFusion), we handle significantly more challenging reconstruction scenarios involving moving camera and improve moving‐object detection (26% on the miss‐detection ratio), tracking (27% on MOTA), and reconstruction (3% on the reconstruction F1) on the synthetic dataset. Please refer the supplementary and the project website for the video demonstration (geometry.cs.ucl.ac.uk/projects/2021/rigidfusion).) <|cite_end|>, or (2) revisit the same scene after a (longer) period and detect potential objects as changes between two maps <|cite_start|> (Reference: Rescan: Inductive Instance Segmentation for Indoor RGBD Scans: In depth-sensing applications ranging from home robotics to AR/VR, it will be common to acquire 3D scans of interior spaces repeatedly at sparse time intervals (e.g., as part of regular daily use). We propose an algorithm that analyzes these "rescans" to infer a temporal model of a scene with semantic instance information. Our algorithm operates inductively by using the temporal model resulting from past observations to infer an instance segmentation of a new scan, which is then used to update the temporal model. The model contains object instance associations across time and thus can be used to track individual objects, even though there are only sparse observations. During experiments with a new benchmark for the new task, our algorithm outperforms alternate approaches based on state-of-the-art networks for semantic instance segmentation.) <|cite_end|>. We follow the latter approach, i.e., we model the problem as a change detection task.
Detecting potential scene changes based on direct data analytics is a task attracting much attention since affordable 3D scanning technology <|cite_start|> (Reference: Real-time rgb-d camera relocalization: We introduce an efficient camera relocalization approach which can be easily integrated into real-time 3D reconstruction methods, such as KinectFusion. Our approach makes use of compact encoding of whole image frames which enables both online harvesting of keyframes in tracking mode, and fast retrieval of pose proposals when tracking is lost. The encoding scheme is based on randomized ferns and simple binary feature tests. Each fern generates a small block code, and the concatenation of codes yields a compact representation of each camera frame. Based on those representations we introduce an efficient frame dissimilarity measure which is defined via the block-wise hamming distance (BlockHD). We illustrate how BlockHDs between a query frame and a large set of keyframes can be simultaneously evaluated by traversing the nodes of the ferns and counting image co-occurrences in corresponding code tables. In tracking mode, this mechanism allows us to consider every frame/pose pair as a potential keyframe. A new keyframe is added only if it is sufficiently dissimilar from all previously stored keyframes. For tracking recovery, camera poses are retrieved that correspond to the keyframes with smallest BlockHDs. The pose proposals are then used to reinitialize the tracking algorithm. Harvesting of keyframes and pose retrieval are computationally efficient with only small impact on the run-time performance of the 3D reconstruction. Integrating our relocalization method into KinectFusion allows seamless continuation of mapping even when tracking is frequently lost. Additionally, we demonstrate how marker-free augmented reality, in particular, can benefit from this integration by enabling a smoother and continuous AR experience.) <|cite_end|> <|cite_start|> (Reference: RIO: 3D Object Instance Re-Localization in Changing Indoor Environments: In this work, we introduce the task of 3D object instance re-localization (RIO): given one or multiple objects in an RGB-D scan, we want to estimate their corresponding 6DoF poses in another 3D scan of the same environment taken at a later point in time. We consider RIO a particularly important task in 3D vision since it enables a wide range of practical applications, including AI-assistants or robots that are asked to find a specific object in a 3D scene. To address this problem, we first introduce 3RScan, a novel dataset and benchmark, which features 1482 RGB-D scans of 478 environments across multiple time steps. Each scene includes several objects whose positions change over time, together with ground truth annotations of object instances and their respective 6DoF mappings among re-scans. Automatically finding 6DoF object poses leads to a particular challenging feature matching task due to varying partial observations and changes in the surrounding context. To this end, we introduce a new data-driven approach that efficiently finds matching features using a fully-convolutional 3D correspondence network operating on multiple spatial scales. Combined with a 6DoF pose optimization, our method outperforms state-of-the-art baselines on our newly-established benchmark, achieving an accuracy of 30.58%.) 
<|cite_end|> <|cite_start|> (Reference: 3D Semantic Parsing of Large-scale Indoor Spaces: In this paper, we propose a method for semantic parsing the 3D point cloud of an entire building using a hierarchical approach: first, the raw data is parsed into semantically meaningful spaces (e.g. rooms, etc) that are aligned into a canonical reference coordinate system. Second, the spaces are parsed into their structural and building elements (e.g. walls, columns, etc). Performing these with a strong notation of global 3D space is the backbone of our method. The alignment in the first step injects strong 3D priors from the canonical coordinate system into the second step for discovering elements. This allows diverse challenging scenarios as man-made indoor spaces often show recurrent geometric patterns while the appearance features can change drastically. We also argue that identification of structural elements in indoor spaces is essentially a detection problem, rather than segmentation which is commonly used. We evaluated our method on a new dataset of several buildings with a covered area of over 6, 000m2 and over 215 million points, demonstrating robust results readily useful for practical applications.) <|cite_end|> makes such data widely available. However, a straightforward approach to detecting changes between two scans based on voxel occupancy or inconsistency maps <|cite_start|> (Reference: Fast image-based geometric change detection given a 3d model: 3D models of the environment are used in numerous robotic applications and should reflect the current state of the world. In this paper, we address the problem of quickly finding structural changes between the current state of the world and a given 3D model using a small number of images. Our approach finds inconsistencies between pairs of images by re-projecting an image onto another one by passing through the given 3D model. This process leads to ambiguities, which we resolve by combining multiple images such that the 3D location of the change can be estimated. A focus of our approach is that it can be executed fast enough to allow the operation on a mobile system. We implemented our approach in C++ and released it as open source software. We tested it on existing datasets as well as on self-recorded image sequences and 3D models, which we publicly share. Our experiments show that our method quickly finds changes in the geometry of a scene.) <|cite_end|> would often miss changes, e.g., when an object rotates around an axis passing through the object or when it is ``slid along itself''. An alternative approach employs the comparison of visual features and relies on photoconsistency constraints <|cite_start|> (Reference: Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. 
The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.) <|cite_end|>. Yet, this approach does not perform well in our setting since there can be significant illumination changes between the two maps.
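The failure mode described above can be made concrete with a small, self-contained Python example (our own illustration, not part of the proposed pipeline): an object that rotates about an axis passing through itself occupies essentially the same voxels before and after the motion, so a change map based purely on voxel occupancy stays empty. The object shape, voxel size and point count below are assumptions chosen only for illustration.
\begin{verbatim}
# A cylindrical object rotated about its own symmetry axis fills (essentially) the
# same voxels before and after the motion, so occupancy differencing sees no change
# even though the object has clearly moved.
import numpy as np

def occupied_voxels(points, voxel_size=0.05):
    """Quantize 3D points to a set of occupied voxel indices."""
    idx = np.floor(points / voxel_size).astype(int)
    return set(map(tuple, idx))

# Sample points on a vertical cylinder (radius 0.2 m, height 0.5 m).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 20000)
z = rng.uniform(0.0, 0.5, 20000)
cylinder = np.stack([0.2 * np.cos(theta), 0.2 * np.sin(theta), z], axis=1)

# Rotate the object by 90 degrees about its own (z) axis -- a real change of pose.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
rotated = cylinder @ Rz.T

before = occupied_voxels(cylinder)
after = occupied_voxels(rotated)
# The symmetric difference is essentially empty: no change is detected.
print(len(before ^ after), "voxels differ out of", len(before | after))
\end{verbatim}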
\begin{figure*}[b!]
\centering
\includegraphics[width=\textwidth,trim={0cm 0cm 0cm 1cm}]{overall_method_eka.pdf}
\caption{Workflow of the proposed method: given two scans recording changes and the associated camera poses, we discover all objects that have been added, moved, or removed from the scene. Initial geometric changes are detected as differences in depth maps (Step 1). The dominant transformations are then computed (Step 2). The initial set of detections is incomplete and is thus refined using a graph cut-based optimization on a supervoxel representation, propagating change to all regions undergoing the same transformation (Step 3). Discovered objects are presented as the extracted connected components of the refined detections.}
\label{fig:concept}
\centering
\end{figure*}
To tackle the aforementioned shortcomings, we introduce a novel change detection framework, depicted in Figure \ref{fig:concept}, that uses geometric transformation consistency towards object discovery (i.e., change detection on an object-level). 3D objects are thus discovered without the need to encode what an object is.
We consider a scenario where we have two 3D maps of a scene, i.e., a reference scan (recorded at time $t_{0}$) and a rescan (recorded at time $t_{1}$), as well as the associated camera poses. Initial change detections are computed as differences in the depth maps. As shown in Figures~\ref{fig:concept} and~\ref{fig:overall}, the initially detected points mainly delineate the boundaries of the moved objects. To recover all parts, we propagate changes from regions where we can detect them to parts where no changes were seen, but which belong to the same object. Our local robust feature matching between parts of the two scans generates motion hypotheses for the scene parts, induced by the moved objects. These motion hypotheses are then used to measure consistency: scene parts that undergo the same rigid transformation are grouped together.
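For intuition, the following simplified Python sketch (our own illustration, not the implementation released with the paper) shows the kind of computation behind the transformation-consistency test: a rigid motion is estimated from matched 3D points with a least-squares (Kabsch) fit, and correspondences whose motion is explained by it, up to an assumed inlier threshold, are treated as belonging to the same moving part.
\begin{verbatim}
# Simplified sketch of geometric transformation consistency: estimate a rigid
# motion from matched 3D points and keep the points whose motion it explains.
# Thresholds and helper names are assumptions chosen only for illustration.
import numpy as np

def estimate_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (Kabsch): dst ~= R @ src + t, inputs (N, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1.0
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def transformation_consistent(src, dst, R, t, inlier_thresh=0.05):
    """Boolean mask of correspondences explained by the rigid motion (R, t)."""
    residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
    return residuals < inlier_thresh

# Usage sketch: `matches_ref` / `matches_rescan` would come from local feature
# matching between the reference scan and the rescan (assumed to exist upstream).
# R, t = estimate_rigid_transform(matches_ref, matches_rescan)
# mask = transformation_consistent(matches_ref, matches_rescan, R, t)
\end{verbatim}
In practice such a fit would be wrapped in a robust estimator (e.g., RANSAC) so that one dominant transformation can be extracted per moved object despite outlier matches.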
\PAR{Contributions.} We introduce a novel 3D change detection framework via geometric transformation consistency. As change detection is performed on an object-level, this novel framework serves as an object discovery method in 3D scenes, without needing any strong priors or definition of what objects are. We showcase that even though we target rigid objects/changes, our method can also handle non-rigid changes, as shown in Figure \ref{curtain}. The proposed method achieves state-of-the-art performance on the 3RScan dataset <|cite_start|> (Reference: RIO: 3D Object Instance Re-Localization in Changing Indoor Environments: In this work, we introduce the task of 3D object instance re-localization (RIO): given one or multiple objects in an RGB-D scan, we want to estimate their corresponding 6DoF poses in another 3D scan of the same environment taken at a later point in time. We consider RIO a particularly important task in 3D vision since it enables a wide range of practical applications, including AI-assistants or robots that are asked to find a specific object in a 3D scene. To address this problem, we first introduce 3RScan, a novel dataset and benchmark, which features 1482 RGB-D scans of 478 environments across multiple time steps. Each scene includes several objects whose positions change over time, together with ground truth annotations of object instances and their respective 6DoF mappings among re-scans. Automatically finding 6DoF object poses leads to a particular challenging feature matching task due to varying partial observations and changes in the surrounding context. To this end, we introduce a new data-driven approach that efficiently finds matching features using a fully-convolutional 3D correspondence network operating on multiple spatial scales. Combined with a 6DoF pose optimization, our method outperforms state-of-the-art baselines on our newly-established benchmark, achieving an accuracy of 30.58%.) <|cite_end|>, against competitive baselines.
We evaluate our framework on the 3RScan dataset <|cite_start|> (Reference: RIO: 3D Object Instance Re-Localization in Changing Indoor Environments: In this work, we introduce the task of 3D object instance re-localization (RIO): given one or multiple objects in an RGB-D scan, we want to estimate their corresponding 6DoF poses in another 3D scan of the same environment taken at a later point in time. We consider RIO a particularly important task in 3D vision since it enables a wide range of practical applications, including AI-assistants or robots that are asked to find a specific object in a 3D scene. To address this problem, we first introduce 3RScan, a novel dataset and benchmark, which features 1482 RGB-D scans of 478 environments across multiple time steps. Each scene includes several objects whose positions change over time, together with ground truth annotations of object instances and their respective 6DoF mappings among re-scans. Automatically finding 6DoF object poses leads to a particular challenging feature matching task due to varying partial observations and changes in the surrounding context. To this end, we introduce a new data-driven approach that efficiently finds matching features using a fully-convolutional 3D correspondence network operating on multiple spatial scales. Combined with a 6DoF pose optimization, our method outperforms state-of-the-art baselines on our newly-established benchmark, achieving an accuracy of 30.58%.) <|cite_end|>, initially designed for benchmarking object instance relocalization. Our evaluation shows the potential of the dataset to assess 3D change detection. We provide code to generate the ground truth annotations.
Related Work
\label{related}
{\PAR{Change Detection.}} 3D change detection is directly related to our method, since our workflow is formulated as a change detection problem.
Change detection has been traditionally treated mostly by geometric approaches <|cite_start|> (Reference: City-scale change detection in cadastral 3d models using images: In this paper, we propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. We designed our approach to account for all the challenges involved in a large scale application of change detection, such as, inaccuracies in the input geometry, errors in the geo-location data of the images, as well as, the limited amount of information due to sparse imagery. We evaluated our approach on an area of 6 square kilometers inside a city, using 3420 images downloaded from Google Street View. These images besides being publicly available, are also a good example of panoramic images captured with a driving vehicle, and hence demonstrating all the possible challenges resulting from such an acquisition. We also quantitatively compared the performance of our approach with respect to a ground truth, as well as to prior work. This evaluation shows that our approach outperforms the current state of the art.) <|cite_end|> <|cite_start|> (Reference: Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.) <|cite_end|> <|cite_start|> (Reference: Change detection in 3d models based on camera images: 3D models of the environment are used in numerous robotic applications and should reflect the current state of the world. In this paper, we address the problem of quickly finding structural changes between the current state of the world and a given 3D model using a small number of images. Our approach finds inconsistencies between pairs of images by reprojecting an image onto the other by passing through the 3D model. Ambiguities about possible inconsistencies resulting from this process are resolved by combining multiple images such that the 3D location of the change can be estimated. A focus of our approach is that it can be executed fast enough to allow the operation on a mobile system. We implemented our approach in C++ and tested it on an existing dataset for change detection as well as on self recorded images sequences. Our experiments suggest that our method quickly finds changes in the geometry of a scene.) <|cite_end|> <|cite_start|> (Reference: Fast image-based geometric change detection given a 3d model: 3D models of the environment are used in numerous robotic applications and should reflect the current state of the world. In this paper, we address the problem of quickly finding structural changes between the current state of the world and a given 3D model using a small number of images. 
Our approach finds inconsistencies between pairs of images by re-projecting an image onto another one by passing through the given 3D model. This process leads to ambiguities, which we resolve by combining multiple images such that the 3D location of the change can be estimated. A focus of our approach is that it can be executed fast enough to allow the operation on a mobile system. We implemented our approach in C++ and released it as open source software. We tested it on existing datasets as well as on self-recorded image sequences and 3D models, which we publicly share. Our experiments show that our method quickly finds changes in the geometry of a scene.) <|cite_end|> <|cite_start|> (Reference: Change detection in 3D point clouds acquired by a mobile mapping system: Abstract. Thanks to the development of Mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First occupancy of scan rays (empty, occupied, unknown) are defined while considering the accuracy of measurement and registration. Then the occupancy of scan rays are fused using the Weighted Dempster–Shafer theory (WDST). Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point to triangle (PTT) distance method. Changes at point level are detected fully automatically. The proposed approach allows to detect changes at large scales in urban scenes with fine detail and more importantly, distinguish real changes from occlusions.) <|cite_end|> <|cite_start|> (Reference: Image-Based 4-d Reconstruction Using 3-d Change Detection: ) <|cite_end|>. Similar to our initial detection step, <|cite_start|> (Reference: Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.) <|cite_end|> <|cite_start|> (Reference: City-scale change detection in cadastral 3d models using images: In this paper, we propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. 
We designed our approach to account for all the challenges involved in a large scale application of change detection, such as, inaccuracies in the input geometry, errors in the geo-location data of the images, as well as, the limited amount of information due to sparse imagery. We evaluated our approach on an area of 6 square kilometers inside a city, using 3420 images downloaded from Google Street View. These images besides being publicly available, are also a good example of panoramic images captured with a driving vehicle, and hence demonstrating all the possible challenges resulting from such an acquisition. We also quantitatively compared the performance of our approach with respect to a ground truth, as well as to prior work. This evaluation shows that our approach outperforms the current state of the art.) <|cite_end|> <|cite_start|> (Reference: Change detection in 3d models based on camera images: 3D models of the environment are used in numerous robotic applications and should reflect the current state of the world. In this paper, we address the problem of quickly finding structural changes between the current state of the world and a given 3D model using a small number of images. Our approach finds inconsistencies between pairs of images by reprojecting an image onto the other by passing through the 3D model. Ambiguities about possible inconsistencies resulting from this process are resolved by combining multiple images such that the 3D location of the change can be estimated. A focus of our approach is that it can be executed fast enough to allow the operation on a mobile system. We implemented our approach in C++ and tested it on an existing dataset for change detection as well as on self recorded images sequences. Our experiments suggest that our method quickly finds changes in the geometry of a scene.) <|cite_end|> detect changes based on inconsistency maps from RGB or depth projections.
Many change detection algorithms <|cite_start|> (Reference: Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.) <|cite_end|> <|cite_start|> (Reference: Detection of geometric temporal changes in point clouds: Detecting geometric changes between two 3D captures of the same location performed at different moments is a critical operation for all systems requiring a precise segmentation between change and no‐change regions. Such application scenarios include 3D surface reconstruction, environment monitoring, natural events management and forensic science. Unfortunately, typical 3D scanning setups cannot provide any one‐to‐one mapping between measured samples in static regions: in particular, both extrinsic and intrinsic sensor parameters may vary over time while sensor noise and outliers additionally corrupt the data. In this paper, we adopt a multi‐scale approach to robustly tackle these issues. Starting from two point clouds, we first remove outliers using a probabilistic operator. Then, we detect the actual change using the implicit surface defined by the point clouds under a Growing Least Square reconstruction that, compared to the classical proximity measure, offers a more robust change/no‐change characterization near the temporal intersection of the scans and in the areas exhibiting different sampling density and direction. The resulting classification is enhanced with a spatial reasoning step to solve critical geometric configurations that are common in man‐made environments. We validate our approach on a synthetic test case and on a collection of real data sets acquired using commodity hardware. Finally, we show how 3D reconstruction benefits from the resulting precise change/no‐change segmentation.) <|cite_end|> are based on the concept of initial change detection (e.g., though color consistency, comparing depth values, etc.), followed by propagating these detections to identify all regions that have changed. <|cite_start|> (Reference: Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. 
As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.) <|cite_end|> <|cite_start|> (Reference: Detection of geometric temporal changes in point clouds: Detecting geometric changes between two 3D captures of the same location performed at different moments is a critical operation for all systems requiring a precise segmentation between change and no‐change regions. Such application scenarios include 3D surface reconstruction, environment monitoring, natural events management and forensic science. Unfortunately, typical 3D scanning setups cannot provide any one‐to‐one mapping between measured samples in static regions: in particular, both extrinsic and intrinsic sensor parameters may vary over time while sensor noise and outliers additionally corrupt the data. In this paper, we adopt a multi‐scale approach to robustly tackle these issues. Starting from two point clouds, we first remove outliers using a probabilistic operator. Then, we detect the actual change using the implicit surface defined by the point clouds under a Growing Least Square reconstruction that, compared to the classical proximity measure, offers a more robust change/no‐change characterization near the temporal intersection of the scans and in the areas exhibiting different sampling density and direction. The resulting classification is enhanced with a spatial reasoning step to solve critical geometric configurations that are common in man‐made environments. We validate our approach on a synthetic test case and on a collection of real data sets acquired using commodity hardware. Finally, we show how 3D reconstruction benefits from the resulting precise change/no‐change segmentation.) <|cite_end|> propagate change using spatial and photoconsistency constraints. Our approach follows the same outline, but differs in the key step of change propagation, through a novel geometric twist. Thus, our method is illumination invariant and can be applied to complex, open-set environments under varying illumination conditions.
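As a rough illustration of this detect-then-propagate pattern, the sketch below flags pixels whose observed depth disagrees with the depth rendered from a reference model and then groups the flagged pixels into candidate change regions. It is a generic sketch added here for exposition, not the pipeline of any cited work or of this paper; the threshold value and all names are placeholders.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def detect_depth_changes(rendered_depth, observed_depth, threshold=0.05):
    """Flag per-pixel depth inconsistencies, then propagate them into connected regions."""
    valid = (rendered_depth > 0) & (observed_depth > 0)       # ignore pixels with missing depth
    inconsistency = np.abs(rendered_depth - observed_depth)   # per-pixel inconsistency map
    changed = valid & (inconsistency > threshold)             # initial change detections
    regions, num_regions = ndimage.label(changed)             # propagate to connected regions
    return regions, num_regions
\end{verbatim}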
{\PAR{SLAM Methods for Dynamic Object Segmentation.}} When addressing dynamic scenes, tracking dynamic objects can be part of SLAM-based techniques. In <|cite_start|> (Reference: Unsupervised object segmentation through change detection in a long term autonomy scenario: In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.) <|cite_end|>, dynamic parts of the scenes are recovered and a classifier is trained on them to distinguish between static and non-static parts. Semantic SLAM for dynamic environments is presented in <|cite_start|> (Reference: Semantic monocular SLAM for highly dynamic environments: Recent advances in monocular SLAM have enabled real-time capable systems which run robustly under the assumption of a static environment, but fail in presence of dynamic scene changes and motion, since they lack an explicit dynamic outlier handling. We propose a semantic monocular SLAM framework designed to deal with highly dynamic environments, combining feature-based and direct approaches to achieve robustness under challenging conditions. The proposed approach exploits semantic information extracted from the scene within an explicit probabilistic model, which maximizes the probability for both tracking and mapping to rely on those scene parts that do not present a relative motion with respect to the camera. We show more stable pose estimation in dynamic environments and comparable performance to the state of the art on static sequences on the Virtual KITTI and Synthia datasets.) <|cite_end|> <|cite_start|> (Reference: Sof-slam: A semantic visual slam for dynamic environments: Simultaneous Localization and Mapping (SLAM) plays an important role in the computer vision and robotics field. The traditional SLAM framework adopts a strong static world assumption for analysis convenience. How to cope with dynamic environments is of vital importance and attracts more attentions. Existing SLAM systems toward dynamic scenes either solely utilize semantic information, solely utilize geometry information, or naively combine the results from them in a loosely coupled way. In this paper, we present SOF-SLAM: Semantic Optical Flow SLAM, a visual semantic SLAM system toward dynamic environments, which is built on RGB-D mode of ORB-SLAM2. A new dynamic features detection approach called semantic optical flow is proposed, which is a kind of tightly coupled way and can fully take advantage of feature’s dynamic characteristic hidden in semantic and geometry information to remove dynamic features effectively and reasonably. The pixel-wise semantic segmentation results generated by SegNet serve as mask in the proposed semantic optical flow to get a reliable fundamental matrix, which is then used to filter out the truly dynamic features. 
Only the remaining static features are reserved in the tracking and optimization module to achieve accurate camera pose estimation in dynamic environments. Experiments on public TUM RGB-D dataset and in real-world environment are conducted. Compared with ORB-SLAM2, the proposed SOF-SLAM achieves averagely 96.73% improvements in high-dynamic scenarios. It also outperforms the other four state-of-the-art SLAM systems which cope with the dynamic environments.) <|cite_end|>. In <|cite_start|> (Reference: MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects: We present MaskFusion, a real-time, object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems which output a purely geometric map of a static scene. MaskFusion recognizes, segments and assigns semantic class labels to different objects in the scene, while tracking and reconstructing them even when they move independently from the camera. As an RGB-D camera scans a cluttered scene, image-based instance-level semantic segmentation creates semantic object masks that enable real-time object recognition and the creation of an object-level representation for the world map. Unlike previous recognition-based SLAM systems, MaskFusion does not require known models of the objects it can recognize, and can deal with multiple independent motions. MaskFusion takes full advantage of using instance-level semantic segmentation to enable semantic labels to be fused into an object-aware map, unlike recent semantics enabled SLAM systems that perform voxel-level semantic segmentation. We show augmented-reality applications that demonstrate the unique features of the map output by MaskFusion: instance-aware, semantic and dynamic.) <|cite_end|>, the authors first segment objects and track them separately. In a similar vein to our research, <|cite_start|> (Reference: Toward lifelong object segmentation from change detection in dense rgb-d maps: In this paper, we present a system for automatically learning segmentations of objects given changes in dense RGB-D maps over the lifetime of a robot. Using recent advances in RGB-D mapping to construct multiple dense maps, we detect changes between mapped regions from multiple traverses by performing a 3-D difference of the scenes. Our method takes advantage of the free space seen in each map to account for variability in how the maps were created. The resulting changes from the 3-D difference are our discovered objects, which are then used to train multiple segmentation algorithms in the original map. The final objects can then be matched in other maps given their corresponding features and learned segmentation method. If the same object is discovered multiple times in different contexts, the features and segmentation method are refined, incorporating all instances to better learn objects over time. We verify our approach with multiple objects in numerous and varying maps.) <|cite_end|> <|cite_start|> (Reference: Toward object discovery and modeling via 3-d scene comparison: The performance of indoor robots that stay in a single environment can be enhanced by gathering detailed knowledge of objects that frequently occur in that environment. We use an inexpensive sensor providing dense color and depth, and fuse information from multiple sensing modalities to detect changes between two 3-D maps. We adapt a recent SLAM technique to align maps. A probabilistic model of sensor readings lets us reason about movement of surfaces. 
Our method handles arbitrary shapes and motions, and is robust to lack of texture. We demonstrate the ability to find whole objects in complex scenes by regularizing over surface patches.) <|cite_end|> <|cite_start|> (Reference: An object-based semantic world model for long-term change detection and semantic querying: Recent years have seen rising interest in robotic mapping algorithms that operate at the level of objects, rather than two- or three-dimensional occupancy. Such “semantic maps” permit higher-level reasoning than occupancy maps, and are useful for any application that involves dealing with objects, including grasping, change detection, and object search. We describe and experimentally verify such a system aboard a mobile robot equipped with a Microsoft Kinect RGB-D sensor. Our representation is object-based, and makes uniquely weak assumptions about the quality of the perceptual data available; in particular, we perform no explicit object recognition. This allows our system to operate in large, dynamic, and uncon-strained environments, where modeling every object that occurs (or might occur) is impractical. Our dataset, which is publicly available, consists of 67 autonomous runs of our robot over a six-week period in a roughly 1600m2 office environment. We demonstrate two applications built on our system: semantic querying and change detection.) <|cite_end|> aim at discovering objects through change observation on an object-level. However, these works build their methods upon a SLAM-based basis. Our method is complementary to SLAM-based techniques since these methods demand the recording of the object's actual movement in front of the camera. On the other hand, our method needs two 3D models (reference scan and rescan), and the associated camera poses, which are acquired over long time intervals. Thus, objects might have moved, appeared, or disappeared without their movement being explicitly recorded.
{\PAR{3D Object Discovery.}} Our problem can be conceived as a 3D object discovery technique when declaring as an object everything that can be moved, since movement is an inherent property of objects. Concerning unsupervised object discovery, the authors of <|cite_start|> (Reference: Object discovery in 3D scenes via shape analysis: We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its “objectness” - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.) <|cite_end|> focus on identifying parts of the input mesh as candidate objects. They then classify them as an object or clutter. More similarly to our work, <|cite_start|> (Reference: On-the-fly detection of novel objects in indoor environments: Many robotic applications require the detection of new objects in known environments. Common approaches navigate in the environment using pre-defined waypoints and segment the scene at these waypoints. Without knowing where to find new objects, this process can be time-consuming and prone to detecting false positives. To overcome these limitations we propose an approach that combines navigation and attention in order to detect novel objects rapidly. We exploit the octomap, created by the robot while it navigates in the environment, as a pre-attention filter to suggest potential regions of interest. These regions are then visited to obtain a close-up view for better object detection and recognition. We evaluate our approach in a simulated as well as a real environment. The experiments show that our approach outperforms previous approaches in terms of runtime and the number of segmentation actions required to find all novel objects in the environment.) <|cite_end|> <|cite_start|> (Reference: Robust and efficient object change detection by combining global semantic information and local geometric verification: Identifying new, moved or missing objects is an important capability for robot tasks such as surveillance or maintaining order in homes, offices and industrial settings. However, current approaches do not distinguish between novel objects or simple scene readjustments nor do they sufficiently deal with localization error and sensor noise. To overcome these limitations, we combine the strengths of global and local methods for efficient detection of novel objects in 3D reconstructions of indoor environments. Global structure, determined from 3D semantic information, is exploited to establish object candidates. These are then locally verified by comparing isolated geometry to a reference reconstruction provided by the task. We evaluate our approach on a novel dataset containing different types of rooms with 31 scenes and 260 annotated objects. Experiments show that our proposed approach significantly outperforms baseline methods.) 
<|cite_end|> extract as objects all the novel additions to the scene. Indeed, by scene comparison, they discover and label as an object anything that has been added between two scans. In contrast, our proposed method does not restrict itself only to added objects, but rather discovers all the objects that have changed (added, moved or removed). <|paper_end|> | [
"<|reference_start|> Change detection in 3D point clouds acquired by a mobile mapping system: Abstract. Thanks to the development of Mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First occupancy of scan rays (empty, occupied, unknown) are defined while considering the accuracy of measurement and registration. Then the occupancy of scan rays are fused using the Weighted Dempster–Shafer theory (WDST). Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point to triangle (PTT) distance method. Changes at point level are detected fully automatically. The proposed approach allows to detect changes at large scales in urban scenes with fine detail and more importantly, distinguish real changes from occlusions. <|reference_end|>",
"<|reference_start|> Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques. <|reference_end|>",
"<|reference_start|> Image Based Detection of Geometric Changes in Urban Environments: In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques. <|reference_end|>",
"<|reference_start|> Detection of geometric temporal changes in point clouds: Detecting geometric changes between two 3D captures of the same location performed at different moments is a critical operation for all systems requiring a precise segmentation between change and no‐change regions. Such application scenarios include 3D surface reconstruction, environment monitoring, natural events management and forensic science. Unfortunately, typical 3D scanning setups cannot provide any one‐to‐one mapping between measured samples in static regions: in particular, both extrinsic and intrinsic sensor parameters may vary over time while sensor noise and outliers additionally corrupt the data. In this paper, we adopt a multi‐scale approach to robustly tackle these issues. Starting from two point clouds, we first remove outliers using a probabilistic operator. Then, we detect the actual change using the implicit surface defined by the point clouds under a Growing Least Square reconstruction that, compared to the classical proximity measure, offers a more robust change/no‐change characterization near the temporal intersection of the scans and in the areas exhibiting different sampling density and direction. The resulting classification is enhanced with a spatial reasoning step to solve critical geometric configurations that are common in man‐made environments. We validate our approach on a synthetic test case and on a collection of real data sets acquired using commodity hardware. Finally, we show how 3D reconstruction benefits from the resulting precise change/no‐change segmentation. <|reference_end|>"
] | [
13,
15,
20,
21
] | {"<|cite_1|>": "ss-929856", "<|cite_2|>": "arxiv-225422", "<|multi_cite_3_1|>": "ss-972816", "<|multi_cite_3_2|>": "arxiv-219060", "<|multi_cite_3_3|>": "ss-775707", "<|cite_4|>": "ss-904724", "<|cite_5|>": "ss-901844", "<|cite_6|>": "arxiv-219060", "<|cite_7|>": "arxiv-219060", "<|multi_cite_8_1|>": "ss-768561", "<|multi_cite_8_2|>": "ss-901844", "<|multi_cite_8_3|>": "ss-1447817", "<|multi_cite_8_4|>": "ss-904724", "<|multi_cite_8_5|>": "ss-1440694", "<|multi_cite_8_6|>": "ss-901843", "<|multi_cite_9_1|>": "ss-901844", "<|multi_cite_9_2|>": "ss-768561", "<|multi_cite_9_3|>": "ss-1447817", "<|multi_cite_10_1|>": "ss-901844", "<|multi_cite_10_2|>": "ss-1786589", "<|multi_cite_11_1|>": "ss-901844", "<|multi_cite_11_2|>": "ss-1786589", "<|cite_12|>": "ss-2429251", "<|multi_cite_13_1|>": "ss-777665", "<|multi_cite_13_2|>": "ss-1808073", "<|cite_14|>": "arxiv-156215", "<|multi_cite_15_1|>": "ss-774739", "<|multi_cite_15_2|>": "ss-1290050", "<|multi_cite_15_3|>": "ss-1535189", "<|cite_16|>": "ss-996977", "<|multi_cite_17_1|>": "ss-1786590", "<|multi_cite_17_2|>": "ss-774741"} |
1512.00824 | <|paper_start|> Title: Equal-image-size source partitioning: Creating strong Fano's inequalities for multi-terminal discrete memoryless channels
Abstract: Equal-image-size source partitioning: Creating strong Fano's inequalities for multi-terminal discrete memoryless channels: This paper introduces equal-image-size source partitioning, a new tool for analyzing channel and joint source-channel coding in a multi-terminal discrete memoryless channel environment. Equal-image-size source partitioning divides the source (combination of messages and codewords) into a sub-exponential number of subsets. Over each of these subsets, the exponential orders of the minimum image sizes of most messages are roughly equal to the same entropy term. This property gives us the strength of minimum image sizes and the flexibility of entropy terms. Using the method of equal-image-size source partitioning, we prove separate necessary conditions for the existence of average-error and maximum-error codes. These necessary conditions are much stronger than the standard Fano's inequality, and can be weakened to render versions of Fano's inequality that apply to codes with non-vanishing error probabilities. To demonstrate the power of this new tool, we employ the stronger average-error version of Fano's inequality to prove the strong converse for the discrete memoryless wiretap channel with decaying leakage, which heretofore has been an open problem.
Introduction
A minimum $\mu$-image of a set $A \subseteq \mcf{X}^n$ over a discrete memoryless channel (DMC), specified by the conditional distribution $P_{Y|X}$, is the smallest set $B \subset \mcf{Y}^n$ such that $P_{Y|X}^n(B|x^n) \geq \mu$ for all $x^n \in A$.
For any $\epsilon$-maximum error, $n$-length code over $P_{Y|X}$, the decoding subset of $\mcf{Y}^n$ for a particular message value must constitute a $\mu$-image of the subset of $\mcf{X}^n$ corresponding to the message value, for some small $\mu$.
Noting this, the intuitive sphere packing argument for channel capacity naturally extends by interpreting the minimum $\mu$-image as the ``sphere'' of the smallest size mapped to from the codewords of an $\epsilon$-error code (see section~\ref{sec:background_image} for more details).
Expressing capacity results in terms of minimum image sizes has many advantages, such as allowing for expressions of channel capacity as a function of~$\epsilon$.
Furthermore, because image sizes are not functions of the distribution of $X^n$, they are apt for use in joint source-channel problems in which messages may not be uniformly distributed.
Unfortunately there are also significant drawbacks to analysis by minimum image size.
For instance, there is no currently known method by which to calculate the minimum image size of any arbitrary set other than a singleton.
This is perhaps why it is common to instead employ ``spheres'' whose sizes can be expressed in terms of entropies in the sphere packing argument. Entropies allow for simple algebraic manipulations and hence lead to simple representations of the capacity of many basic channels.
These two different types of characterizations are often referred to as image size characterization and entropy size characterization, and the sets of possible image size characterizations and entropy size characterizations are referred to as the achievable exponent and achievable entropy regions, respectively.
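The singleton case just mentioned is, in contrast, straightforward: for a single sequence $x^n$, a minimum $\mu$-image is obtained by collecting output sequences in decreasing order of probability until their total mass reaches $\mu$. The sketch below (an illustration added here, with our own function and variable names, assuming a memoryless channel) does exactly that for a small alphabet.
\begin{verbatim}
import itertools
import numpy as np

def min_mu_image_size_singleton(P, x_seq, mu):
    """Minimum mu-image size of the singleton {x^n} over the DMC with P[x][y] = P(y|x)."""
    Y = range(P.shape[1])
    # probability of every output sequence y^n under the memoryless channel
    probs = [np.prod([P[x, y] for x, y in zip(x_seq, y_seq)])
             for y_seq in itertools.product(Y, repeat=len(x_seq))]
    total, count = 0.0, 0
    for p in sorted(probs, reverse=True):   # greedily take the most likely outputs
        total += p
        count += 1
        if total >= mu:
            break
    return count

# Example: BSC with crossover 0.1, x^n = (0, 0, 0), mu = 0.9
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(min_mu_image_size_singleton(bsc, (0, 0, 0), 0.9))   # -> 4
\end{verbatim}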
In order to take advantage of image size characterizations, we need to express minimum image sizes in terms of entropies. As Csisz{\'a}r and K{\"o}rner note in \cite[p.~339]{CK} though
\begin{quote}
\emph{We shall see in Chapter 16 that the corresponding image size characterizations can be used to prove strong converse
results for source networks and also to solve channel network
problems. In this respect, it is important that the sets of achievable
entropy resp. exponent triples have the same two dimensional
projections see Theorems 15.11 and 15.20. The two sets, however, need not be equal; their relationship is described by Corollary 15.13. }
\end{quote}
The primary motivation of this work is to rectify this incongruity, and in doing so provide new stronger necessary conditions for reliable communications that have both the robustness of image size techniques while maintaining the algebraic flexibility of entropies.
In a three-terminal setting with a single message, it has been well established that the two-dimensional projections of image size characterization and the entropy characterization are equal~\cite[Theorem~15.11]{CK}. Results beyond three terminals are rare and partial. In addition, in multi-terminal settings there typically exist multiple receivers which are only required to decode a subset of the messages. In an earlier paper <|cite_start|> (Reference: Equating the achievable exponent region to the achievable entropy region by partitioning the source: In this paper we investigate the image size characterization problem. We show that any arbitrary source set may be decomposed into sets whose image size characterization is the same as its entropy characterization. We also show that the number of these sets required is small enough that one may consider that from a coding perspective the achievable entropy region and achievable exponent region are equal. This has an impact on many source networks and network problems whose solution heretofore could not have the image size characterization applied to them.) <|cite_end|>, we have shown that every source set may be partitioned into $\mcf{O}(n)$ subsets, within each of which the entropy and image size characterizations are equal. The first significant contribution of the current paper is to extend this partitioning method to simultaneously account for multiple messages and multiple receivers. Over every partitioning subset, the image size characterization and the entropy characterization are equal in that the exponential orders of the minimum image sizes for nearly all messages are equal to the same entropy quantity. Furthermore, the partition results in the distribution of the messages being nearly uniform over every partitioning subset, while the number of partitioning subsets remains polynomial in $n$ $(\mcf{O}(n^5))$.
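Stated informally (the precise statements and conditions are developed in section~\ref{sec:mt}), the partitioning property reads as follows: the source is split into $K = \mcf{O}(n^5)$ subsets $A_1, \ldots, A_K$ such that, over each $A_k$, the messages are nearly uniformly distributed and, for all but a vanishing fraction of message values, the exponential order of the corresponding minimum image size is pinned to a single conditional entropy term, roughly
\[
\frac{1}{n} \log \left( \text{minimum image size over } A_k \right) \approx \frac{1}{n} H(Y^n | X^n \in A_k).
\]
This display is only a heuristic restatement of the description above, not one of the formal results of the paper.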
Our second significant contribution, new necessary conditions for reliable communications over multi-terminal DMCs, then follows. These necessary conditions
(see Theorems~\ref{thm:maxerr_fano} and~\ref{thm:isfd})
are direct consequences of the equal-image-size partitions described above. More specifically, by the blowing up lemma~\cite[Ch.~5]{CK}, the exponential order of the minimum image size is effectively invariant to the value of $\epsilon$. Due to the equality between image size and entropy characterizations by our partitioning approach, the entropy terms in the sphere packing argument for codes with small error probabilities are nearly equivalent to those for codes with larger error probabilities. This suggests that the necessary conditions of reliable communications expressed in terms of these entropies may be made effectively invariant to the decoding error probabilities. Another way to look at these necessary conditions is that they imply all codes may only increase their rates by allowing transmissions which have nearly zero probability of decoding. Errors of this type have previously been considered by Effros et al. in regards to composite channels, where the probability of an error of this type occurring was deemed the outage probability <|cite_start|> (Reference: Generalizing capacity: new definitions and capacity theorems for composite channels: We consider three capacity definitions for composite channels with channel side information at the receiver. A composite channel consists of a collection of different channels with a distribution characterizing the probability that each channel is in operation. The Shannon capacity of a channel is the highest rate asymptotically achievable with arbitrarily small error probability. Under this definition, the transmission strategy used to achieve the capacity must achieve arbitrarily small error probability for all channels in the collection comprising the composite channel. The resulting capacity is dominated by the worst channel in its collection, no matter how unlikely that channel is. We, therefore, broaden the definition of capacity to allow for some outage. The capacity versus outage is the highest rate asymptotically achievable with a given probability of decoder-recognized outage. The expected capacity is the highest average rate asymptotically achievable with a single encoder and multiple decoders, where channel side information determines the channel in use. The expected capacity is a generalization of capacity versus outage since codes designed for capacity versus outage decode at one of two rates (rate zero when the channel is in outage and the target rate otherwise) while codes designed for expected capacity can decode at many rates. Expected capacity equals Shannon capacity for channels governed by a stationary ergodic random process but is typically greater for general channels. The capacity versus outage and expected capacity definitions relax the constraint that all transmitted information must be decoded at the receiver. We derive channel coding theorems for these capacity definitions through information density and provide numerical examples to highlight their connections and differences. We also discuss the implications of these alternative capacity definitions for end-to-end distortion, source-channel coding, and separation.) <|cite_end|>.
From our new necessary conditions we may obtain more traditional, stronger versions of Fano's inequality. The strong inequalities for the average probability of error apply to nearly uniform messages (see Corollary~\ref{cor:uniform}) and information-stable messages (see Corollary~\ref{cor:stable}), while the maximum-error version (see Corollary~\ref{cor:max_fano}) applies universally. We call these results strong Fano's inequalities because they can be written in the form of the standard Fano's inequality, except that the error term is replaced by a term which almost universally vanishes. Much of the complexity of this paper lies in crafting necessary conditions that are easy to apply, and that apply directly to many active research problems. To demonstrate the power of the results, we present as an application example a simple solution to the strong converse problem for the discrete-memoryless wiretap channel (DM-WTC) with vanishing leakage, which heretofore has been an open problem.
We organize the rest of the paper as follows. Background on the methods used and similar approaches will be discussed first in section~\ref{sec:background}. A preview of our main results will be provided in section~\ref{sec:preview} with an example showing application of the strong average-error Fano's inequality to prove the strong converse for the DM-WTC.
The mathematical machinery that we employ to establish equal-image-size source partitioning will be developed in sections~\ref{sec:partition} and~\ref{sec:lemmas}. The proposed equal-image-size source partition will be developed in section~\ref{sec:mt}. The new necessary conditions for reliable communications and strong Fano's inequalities will come in section~\ref{sec:fanos}. Finally we will conclude this paper in section~\ref{sec:conclusion} with a brief list of some basic multi-terminal DMCs to which our results immediately apply.
\subsection{A note on notation}\label{sec:notation}
The notation used in this paper mostly follows that employed in <|cite_start|> (Reference: Information theory: Coding theorems for discrete memoryless Systems: Csiszr and Krner's book is widely regarded as a classic in the field of information theory, providing deep insights and expert treatment of the key theoretical issues. It includes in-depth coverage of the mathematics of reliable information transmission, both in two-terminal and multi-terminal network scenarios. Updated and considerably expanded, this new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics. The presentations of all core subjects are self contained, even the advanced topics, which helps readers to understand the important connections between seemingly different problems. Finally, 320 end-of-chapter problems, together with helpful solving hints, allow readers to develop a full command of the mathematical techniques. It is an ideal resource for graduate students and researchers in electrical and electronic engineering, computer science and applied mathematics.) <|cite_end|>, except for example the mutual information between a pair of random variables $X$ and $Y$ is written in the more common notation of $I(X;Y)$. Moreover, the notation for conditional entropy will be slightly abused
throughout the paper. In particular, when a quantity such as $H(Y^n | X^n \in A)$ is written, it means $H(Y^n | E=1)$, where $E$ is an indicator random variable taking the value $1$ if $X^n \in A$ and $0$ otherwise.
To simplify writing, let $[i:j]$ denote the set of integers starting at $i$ and ending at $j$, inclusively. When we refer to $\mcf{M}$ as an index set, we restrict $\mcf{M}$ to be discrete. A random index is a random variable distributed over an index set. Let $M_1, M_2, \ldots, M_J$ be $J$ random indices jointly distributed over $\mcf{M}_1 \times \mcf{M}_2 \times \cdots \times \mcf{M}_J$. For any $S \subseteq [1:J]$, we write $\mcf{M}_{S}$ and $M_S$ as shorthand forms of $\text{\large $\times$}_{j \in S} \mcf{M}_j$ and $(M_j)_{j \in S}$, respectively.
Consider a pair of discrete random variables $X$ and $Y$ over alphabets $\mcf{X}$ and $\mcf{Y}$, respectively. For any $A \subseteq \mcf{X}^n$ such that $P_{X^n}(A)>0$, whenever there is no ambiguity we use $P_{Y^n|X^n \in A}(y^n)$ to denote $\Pr\{Y^n = y^n | X^n \in A\}$ for brevity.
For any $\eta \in [0,1]$, a set $B \subseteq \mcf{Y}^n$ is called an \emph{$\eta$-image} of $A \subseteq \mcf{X}^n$ over the DMC $P_{Y|X}$ \cite[Ch. 15]{CK} if $P^n_{Y|X}(B| x^n) \geq \eta$ for every $x^n \in A$. On the other hand, $B$ is called an \emph{$\eta$-quasi-image} of $A$ over $P_{Y|X}$ \cite[Problem~15.13]{CK} if $P_{Y^n | X^n \in A}(B) \geq \eta$. The minimum size of $\eta$-images of $A$ over $P_{Y|X}$ will be denoted by $g^n_{Y|X}(A,\eta)$, while the minimum size of $\eta$-quasi-images of $A$ over $P_{Y|X}$ will be denoted by $\bar g^n_{Y|X}(A,\eta)$.
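As a concrete (and deliberately naive) illustration of these definitions, the sketch below computes $g^n_{Y|X}(A,\eta)$ for a toy channel by searching over output subsets of increasing size; the search is exponential in $|\mcf{Y}|^n$, consistent with the earlier remark that no efficient method is known for general sets. The quasi-image quantity $\bar g^n_{Y|X}(A,\eta)$ would be obtained analogously by replacing the per-sequence requirement with a requirement on the average $P_{Y^n|X^n \in A}(B)$. The function names and the brute-force strategy are our own illustrative choices, not constructions from this paper.
\begin{verbatim}
import itertools
import numpy as np

def output_dist(P, x_seq, Yn):
    """P^n_{Y|X}(y^n | x^n) for every y^n in Yn, for a memoryless channel P[x][y]."""
    return {y_seq: np.prod([P[x, y] for x, y in zip(x_seq, y_seq)]) for y_seq in Yn}

def min_eta_image_size(P, A, eta, n):
    """Brute-force g^n_{Y|X}(A, eta): smallest |B| with P^n(B | x^n) >= eta for every x^n in A."""
    Yn = list(itertools.product(range(P.shape[1]), repeat=n))
    dists = [output_dist(P, x_seq, Yn) for x_seq in A]
    for size in range(1, len(Yn) + 1):
        for B in itertools.combinations(Yn, size):
            if all(sum(d[y] for y in B) >= eta for d in dists):
                return size
    return None  # never reached for eta <= 1, since B = Y^n always works

# Example: BSC(0.1), n = 2, A = {(0,0), (1,1)}, eta = 0.8  ->  minimum image size 2
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(min_eta_image_size(bsc, [(0, 0), (1, 1)], 0.8, 2))
\end{verbatim}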
Related Work
\label{sec:background}
\subsection{Fano's inequality}
Fano's inequality is one of the most widely used inequalities in the field of information theory. First appearing in Fano's class notes, the inequality relates the entropy of a message $M$, distributed over an index set $\mcf{M}$, conditioned on a reconstruction $\hat M$, to the probability of error $\epsilon$ of that reconstruction. The exact inequality
\[
\frac{1}{n}H(M|\hat M) \leq \frac{\epsilon}{n} \log \left| \mcf{M} \right| + \frac{1}{n},
\]
can be tight for specific $M$, $\hat M$, and $\epsilon$. It is most commonly used in proving converses of coding theorems, where, combined with the data processing inequality~\cite[Lemma~3.11]{CK}, it yields
\[
\frac{1}{n} H(M) \leq \frac{1}{n} I(M;Y^n) + \frac{\epsilon}{n} \log \left| \mcf{M} \right| + \frac{1}{n}.
\]
We can then say that if $\epsilon \rightarrow 0$ and $\frac{1}{n} \log \left| \mcf{M} \right| = R$ is a finite constant, then $\frac{1}{n} I(M;Y^n)$ asymptotically upper bounds $\frac{1}{n} H(M)$. In channel coding problems, the message $M$ is uniform, and so $\frac{1}{n} I(M;Y^n)$ asymptotically upper bounds the code rate $R$.
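As a quick numerical check of the extra term on the right-hand side above, the helper below (added here purely for exposition, with $\log|\mcf{M}|$ measured in bits and all names our own) evaluates it for a given error probability, message-set size, and blocklength.
\begin{verbatim}
def fano_slack(eps, log2_M, n):
    """Additive term (eps/n) * log|M| + 1/n from Fano's inequality, in bits per channel use."""
    return (eps / n) * log2_M + 1.0 / n

# Example: blocklength n = 1000, rate R = 0.5 bits/use (log2|M| = n * R), eps = 0.01
print(fano_slack(eps=0.01, log2_M=1000 * 0.5, n=1000))   # -> 0.006 bits/use
\end{verbatim}
For a fixed rate $R$ the term equals $\epsilon R + 1/n$, which vanishes only because $\epsilon \rightarrow 0$ is assumed; this is precisely the dependence that makes the standard inequality ill-suited to codes with non-vanishing error probabilities.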
Fano's inequality also works in joint source-channel coding problems, as is used in proving the source-channel separation theorem for the two-terminal DMC~\cite[Pg.~221]{CT}. The most general form of Fano's inequality to date is due to Han and Verd{\'u} <|cite_start|> (Reference: Generalizing the Fano inequality: The Fano inequality gives a lower bound on the mutual information between two random variables that take values on an M-element set, provided at least one of the random variables is equiprobable. The authors show several simple lower bounds on mutual information which do not assume such a restriction. In particular, this ran be accomplished by replacing log M with the infinite-order Renyi entropy in the Fano inequality. Applications to hypothesis testing are exhibited along with bounds on mutual information in terms of the a priori and a posteriori error probabilities. >) <|cite_end|>, who removed the constraint that at least one of the random variables involved in the inequality be discrete.
As Wolfowitz first showed, even with a non-vanishing decoding error probability, the upper bound on the rate of messages that can be transmitted through a two-terminal DMC is asymptotically equal to that with a vanishing error probability <|cite_start|> (Reference: The coding of messages subject to chance errors: Throughout this paper we assume that all "alphabets" involved contain exactly two symbols, say 0 and 1. What this means will be apparent in a moment. This assumption is made only in the interest of simplicity of exposition, and the changes needed when this assumption is not fulfilled will be obvious. Suppose that a person has a vocabulary of S words (or messages), any or all of which he may want to transmit, in any frequency and in any order, over a "noisy channel". For example, S could be the number of words in the dictionary of a language, provided that it is forbidden to coin words not in the dictionary. What a "noisy channel" is will be described in a moment. Here we want to emphasize that we do not assume anything about the frequency with which particular words are transmitted, nor do we assume that the words to be transmitted are selected by any random process (let alone that the distribution function of the random process is known). Let the words be numbered in some fixed manner. Thus transmitting a word is equivalent to transmitting one of the integers 1, 2, S. We shall now explain wtiat is meant by a "noisy channel" of memory m. A sequence of (m W 1) elements, each zero or one, will be called an a-sequence. A function p, defined on the set of all a-sequences, and such that always 0 =< p -< 1, is associated with the channel and called the channel probability function. A sequence of n elements, each of which is zero or one, will be call an x-sequence. To describe the channel, it will be sufficient to describe how it transmits any giyen x-sequence, say xl. Let 1 be the a-sequence of the first (m W 1) elements of xl. The channel "performs" a chance experiment with possible outcomes 1 and 0 and respective probabilities p(a) and (1 p()), and transmits the outcome of this chance experiment. It then performs another chance experiment, independently of the first, with possible outcomes 1 and 0 and respective probabilities p() and (1 p(a2)), where 2 is the a-sequence of the 2nd, 3rd, (m W 2) elements of the sequence x. This is repeated until (n m) independent experiments have been performed. The probability of the outcome one in the i experiment is p(), where is the a-sequence of the ith, (i W 1)St, (i W m) elements of xl. The x-sequence xl is called the transmitted sequence. The chance sequence Y(x) of outcomes of the experiments in consecutive order is called the received sequence. Any sequence of (n m) elements, each zero or one, will be called a y-sequence. Let yl be any y-sequence. If P{Y(x) y} > 0 (the symbol) <|cite_end|>. Wolfowitz introduced the concept of capacity dependent upon error, usually denoted by $C(\epsilon)$. Following the terminology of Csisz{\'a}r and K{\"o}rner~\cite[Pg.~93]{CK}, a converse result showing $C(\epsilon) = \lim_{\epsilon' \rightarrow 0} C(\epsilon')$ for all $\epsilon \in (0,1)$ is called a \emph{strong converse}. Verd{\'u} and Han <|cite_start|> (Reference: A General formula for channel capacity: A formula for the capacity of arbitrary single-user channels without feedback (not necessarily information stable, stationary, etc.) is proved. 
Capacity is shown to equal the supremum, over all input processes, of the input-output inf-information rate defined as the liminf in probability of the normalized information density. The key to this result is a new converse approach based on a simple new lower bound on the error probability of m-ary hypothesis tests among equiprobable hypotheses. A necessary and sufficient condition for the validity of the strong converse is given, as well as general expressions for /spl epsiv/-capacity. >) <|cite_end|> showed that both stronger assertions, namely that this holds for every finite $n$ and that any larger rate forces the error probability to approach unity, hold for all two-terminal DMCs.
Clearly though the bound in Fano's inequality is influenced by the probability of error $\epsilon$. This dependence makes Fano's inequality ill-suited for application to channel codes with non-vanishing error probabilities. This in turn has lead to other different methods of proving strong converses, such as the meta-converse proposed by Polyanskiy et al. <|cite_start|> (Reference: {Channel coding rate in the finite blocklength regime: This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ¿ isclosely approximated by C - ¿(V/n) Q-1(¿) where C is the capacity, V is a characteristic of the channel referred to as channel dispersion , and Q is the complementary Gaussian cumulative distribution function.) <|cite_end|>. The meta-converse leverages the idea that any decoder can be considered as a binary hypothesis test between the correct codeword set and the incorrect codeword set. Bounding the decoding error by the best binary hypothesis test, new bounds, which are relatively tight even for small values of $n$, can be established. In contrast to the original version of Fano's inequality, the stronger versions presented in Corollaries~\ref{cor:max_fano},~\ref{cor:uniform}, and~\ref{cor:stable} directly apply to codes with non-vanishing decoding error probabilities over multi-terminal DMCs.
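For intuition on how sharp such finite-blocklength bounds are, the approximation quoted in the cited abstract (with its garbled symbols read as $C - \sqrt{V/n}\,Q^{-1}(\epsilon)$) can be evaluated directly. The sketch below does so for a binary symmetric channel using the standard expressions for its capacity and dispersion; it is only an illustration of the cited approximation, added here with our own names, and is not part of the present paper.
\begin{verbatim}
from math import log2, sqrt
from statistics import NormalDist

def bsc_normal_approximation(p, n, eps):
    """Approximate maximal rate C - sqrt(V/n) * Qinv(eps) for a BSC(p), in bits per use."""
    h2 = -p * log2(p) - (1 - p) * log2(1 - p)          # binary entropy
    C = 1 - h2                                          # capacity
    V = p * (1 - p) * (log2((1 - p) / p)) ** 2          # channel dispersion
    Qinv = NormalDist().inv_cdf(1 - eps)                # inverse Gaussian tail function
    return C - sqrt(V / n) * Qinv

print(bsc_normal_approximation(p=0.11, n=1000, eps=1e-3))   # roughly 0.41 bits/use
\end{verbatim}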
Fano's inequality is also problematic when used to characterize joint source-channel coding (JSCC) problems. Using Fano's inequality for JSCC problems necessitates either the restriction to vanishing error probabilities, or the restriction to messages (sources) whose probability exponents converge to the sources' entropy rates. Both of these restrictions are limiting, as results by Kostina et al. <|cite_start|> (Reference: Variable-length compression allowing errors: This paper studies the fundamental limits of the minimum average length of lossless and lossy variable-length compression, allowing a nonzero error probability $\epsilon$, for lossless compression. We give non-asymptotic bounds on the minimum average length in terms of Erokhin's rate-distortion function and we use those bounds to obtain a Gaussian approximation on the speed of approach to the limit which is quite accurate for all but small blocklengths: $$(1 - \epsilon) k H(\mathsf S) - \sqrt{\frac{k V(\mathsf S)}{2 \pi} } e^{- \frac {(Q^{-1}(\epsilon))^2} 2 }$$ where $Q^{-1}(\cdot)$ is the functional inverse of the standard Gaussian complementary cdf, and $V(\mathsf S)$ is the source dispersion. A nonzero error probability thus not only reduces the asymptotically achievable rate by a factor of $1 - \epsilon$, but this asymptotic limit is approached from below, i.e. larger source dispersions and shorter blocklengths are beneficial. Variable-length lossy compression under an excess distortion constraint is shown to exhibit similar properties.) <|cite_end|> suggest that allowing non-vanishing error probabilities in conjunction with compression may lead to increased rates. In contrast to the original version of Fano's inequality, the necessary conditions supplied by Theorems~\ref{thm:maxerr_fano} and~\ref{thm:isfd} can be used to upper bound such rate gains in JSCC problems over multi-terminal DMCs.
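The gain described above can be made concrete with the approximation displayed in the cited abstract: for lossless variable-length compression of $k$ source symbols with error probability $\epsilon$, the minimum average length is about $(1-\epsilon) k H(\mathsf S) - \sqrt{k V(\mathsf S)/(2\pi)}\, e^{-(Q^{-1}(\epsilon))^2/2}$. The snippet below evaluates this for a Bernoulli source; the varentropy expression used for $V(\mathsf S)$ and all names are our own additions for illustration only.
\begin{verbatim}
from math import exp, log2, pi, sqrt
from statistics import NormalDist

def approx_min_avg_length(k, eps, q):
    """Gaussian approximation to the minimum average codeword length (in bits) for
    lossless variable-length compression of k Bernoulli(q) symbols with error eps."""
    H = -q * log2(q) - (1 - q) * log2(1 - q)              # source entropy
    V = q * (1 - q) * (log2((1 - q) / q)) ** 2            # source dispersion (varentropy)
    Qinv = NormalDist().inv_cdf(1 - eps)
    return (1 - eps) * k * H - sqrt(k * V / (2 * pi)) * exp(-Qinv ** 2 / 2)

print(approx_min_avg_length(k=1000, eps=0.05, q=0.11))    # noticeably below k * H(S)
\end{verbatim}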
\subsection{Image size characterizations}\label{sec:background_image}
Image size characterizations, originally introduced in G{\'a}cs and K{\"o}rner and Ahlswede et al. <|cite_start|> (Reference: Bounds on conditional probabilities with applications in multi-user communication: We consider a sequence {Zg}i~ 1 of independent, identically distributed random variables where each Z i is a pair (Xi, Y/). For any pair of events {X"~ ~r { Y"~ N} satisfying Pr(Y" e NIX" s d ) > 1 ~ and for any non-negative real c we investigate how small P r ( Y " ~ ) can be in case P r ( X " e d ) is larger than 2 -"c. We give the full answer to a generalized form of this question. These estimates enable us to prove strong converses of the coding theorems for two recently emerged questions in Shannon's information theory, i.e. the source coding problem with side information and the coding problem for the degraded broadcast channel.) <|cite_end|>, are of particular importance for DMCs due to the blowing up lemma~\cite[Ch. 5]{CK}.
Margulis first introduced the blowing up lemma to study hop distance in hyper-connected graphs.
In the context of DMCs, it can be used to show that any $\alpha_n$-image with $\alpha_n$ not decaying too fast is close in size to a $\beta_n$-image with $\beta_n$ not approaching unity too fast (see \cite[Lemma~6.6]{CK} or Lemma~\ref{lem:6.6}).
Ahlswede <|cite_start|> (Reference: Every bad code has a good subcode: A local converse to the coding theorem: ) <|cite_end|> used the blowing up lemma to prove a local strong converse for maximal error codes over a two-terminal DMC, showing that all bad codes have a good subcode of almost the same rate.
Using the same lemma, K{\"o}rner and Martin <|cite_start|> (Reference: Images of a set via two channels and their role in multi-user communication: A technique is presented to determine the region of achievable rates for some source and channel networks. This technique is applied to the solution of a source:network problem that seems to be the simplest illustration of a new typical difficulty in coding for source networks: namely, when the same encoding of a source is required to meet the conflicting demands of 1) supplying side-information to the decoder of another source, and 2) providing direct-information to its own decoder in company with other side-information.) <|cite_end|> developed a general framework for determining the achievable rates of a number of source and channel networks. On the other hand, many of the strong converses for some of the most fundamental multi-terminal DMCs studied in literature were proven using image size characterization techniques. K{\"o}rner and Martin <|cite_start|> (Reference: General Broadcast Channels with Degraded Message Sets: A broadcast channel with one sender and two receivers is considered. Three independent messages are to be transmitted over this channel: one common message which is meant for both receivers, and one private message for each of them. The coding theorem and strong converse for this communication situation is proved for the case when one of the private messages has rate zero.) <|cite_end|> employed such a technique to prove the strong converse of a discrete memoryless broadcast channel with degraded message sets. Dueck used these methods to prove the strong converse of the discrete memoryless multiple access channel with independent messages.
For a detailed overview of image size characterization techniques, see~\cite[Chs.~5, 6, 15, 16]{CK}.
Here we briefly summarize the sphere packing argument in~\cite[Ch. 6]{CK} to motivate the development of the results in this paper. Consider sending a uniform message $M$ from the message set $\mcf{M}$ over a two-terminal DMC specified by $P_{Y|X}$ using
an $(n,\epsilon)$-maximal error channel code $(f^n,\varphi^n)$ with $\epsilon \in (0,1)$.
For the purposes of simple discussion here, assume that the encoder $f^n: \mcf{M} \rightarrow \mcf{X}^n$ and the decoder $\varphi^n : \mcf{Y}^n \rightarrow \mcf{M}$ are both deterministic. Let $A \defn \{f^n(m) : m \in \mcf{M}\}$ denote the set of codewords used by $f^n$.
Pick $\mu>0$ such that $\mu+\epsilon<1$ and let $B \subseteq \mcf{Y}^n$ be a minimum $(\mu+\epsilon)$-image of $A$ over $P_{Y|X}$. That is, $g^n_{Y|X}(A, \mu+\epsilon) = \abs{B}$. Let $\varphi^{-n}(m)$ denote the decoding region for the message $m \in \mcf{M}$. The
maximal error requirement implies that $P^n_{Y|X} \left(\varphi^{-n}(m) \middle| f^n(m) \right) \geq 1 - \epsilon$ for all $m \in \mcf{M}$, while $P^n_{Y|X} \left(B \middle| f^n(m) \right) \geq \mu + \epsilon$ because $B$ is a $(\mu+\epsilon)$-image of $A$ and $f^n(m) \in A$. Hence $P^n_{Y|X} \left(\varphi^{-n}(m) \cap B \middle| f^n(m) \right) \geq P^n_{Y|X} \left(\varphi^{-n}(m) \middle| f^n(m) \right) + P^n_{Y|X} \left(B \middle| f^n(m) \right) - 1 \geq \mu$. In other words, $\varphi^{-n}(m) \cap B$ is a $\mu$-image of the singleton $\{f^n(m)\}$, and hence $\abs{\varphi^{-n}(m) \cap B} \geq g^n_{Y|X}(f^n(m), \mu)$ for every $m \in \mcf{M}$. It is now clear that the subsets $\varphi^{-n}(m) \cap B$ for $m \in \mcf{M}$ serve as the ``spheres'' in the sphere packing argument. More specifically,
\[
g^n_{Y|X}(A, \mu+\epsilon) = \abs{B} = \sum_{m \in \mcf{M}} \abs{\varphi^{-n}(m) \cap B}
\geq \abs{\mcf{M}} \cdot \min_{m \in \mcf{M}} g^n_{Y|X}(f^n(m), \mu)
\]
which implies
\begin{equation} \label{eq:dmc_rate_bound}
\aexp{\mcf{M}} \leq \cexp{g^n_{Y|X}(A, \mu+\epsilon)} - \min_{m \in \mcf{M}} \cexp{g^n_{Y|X}(f^n(m), \mu)}.
\end{equation}
As a result, we have just obtained an upper bound on the rate of the $(n,\epsilon)$-maximal error channel code in terms of minimum image sizes. Moreover, as a consequence of the blowing up lemma (see \cite[Lemma~6.6]{CK} or Lemma~\ref{lem:6.6}), the terms on the right hand side of~\eqref{eq:dmc_rate_bound} remain roughly the same regardless of the value of $\epsilon$ within the range of $(0,1)$. Thus, unlike the standard Fano's inequality, this bound may be used to establish the strong converse of the DMC.
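To see heuristically why~\eqref{eq:dmc_rate_bound} leads to the strong converse in the two-terminal case (the rigorous argument is given in \cite[Ch. 6]{CK}), suppose for simplicity that all codewords in $A$ share the same type $P_X$. Then, up to terms that vanish on the exponential scale,
\[
\frac{1}{n}\log g^n_{Y|X}(A, \mu+\epsilon) \lesssim H(Y) \quad \text{and} \quad \frac{1}{n}\log g^n_{Y|X}(f^n(m), \mu) \gtrsim H(Y|X),
\]
where the entropies are computed with respect to $P_X$ and the channel $P_{Y|X}$. Substituting these estimates into~\eqref{eq:dmc_rate_bound} gives, roughly, $\frac{1}{n}\log\abs{\mcf{M}} \lesssim I(X;Y) \leq C$, and crucially this bound does not degrade as $\epsilon$ approaches one.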
Nevertheless, the usefulness of code rate bounds expressed in terms of minimum image sizes, like~\eqref{eq:dmc_rate_bound}, depends upon the availability of simple image size characterizations. As mentioned before, while such characterizations exist for the two-terminal DMC (see \cite[Ch. 6]{CK}) and the three-terminal DMC with a single message (see \cite[Ch. 15]{CK}), simple image size characterizations for more general channels have been largely missing. This motivates us to develop the proposed tool of equal-image-size source partitioning (see Theorem~\ref{thm:cond}) to solve the more general image size characterization problem and to apply this tool to obtain more general necessary conditions for reliable communication over multi-terminal DMCs (see Section~\ref{sec:fanos}). <|paper_end|> | [
"<|reference_start|> Generalizing capacity: new definitions and capacity theorems for composite channels: We consider three capacity definitions for composite channels with channel side information at the receiver. A composite channel consists of a collection of different channels with a distribution characterizing the probability that each channel is in operation. The Shannon capacity of a channel is the highest rate asymptotically achievable with arbitrarily small error probability. Under this definition, the transmission strategy used to achieve the capacity must achieve arbitrarily small error probability for all channels in the collection comprising the composite channel. The resulting capacity is dominated by the worst channel in its collection, no matter how unlikely that channel is. We, therefore, broaden the definition of capacity to allow for some outage. The capacity versus outage is the highest rate asymptotically achievable with a given probability of decoder-recognized outage. The expected capacity is the highest average rate asymptotically achievable with a single encoder and multiple decoders, where channel side information determines the channel in use. The expected capacity is a generalization of capacity versus outage since codes designed for capacity versus outage decode at one of two rates (rate zero when the channel is in outage and the target rate otherwise) while codes designed for expected capacity can decode at many rates. Expected capacity equals Shannon capacity for channels governed by a stationary ergodic random process but is typically greater for general channels. The capacity versus outage and expected capacity definitions relax the constraint that all transmitted information must be decoded at the receiver. We derive channel coding theorems for these capacity definitions through information density and provide numerical examples to highlight their connections and differences. We also discuss the implications of these alternative capacity definitions for end-to-end distortion, source-channel coding, and separation. <|reference_end|>",
"<|reference_start|> Information theory: Coding theorems for discrete memoryless Systems: Csiszr and Krner's book is widely regarded as a classic in the field of information theory, providing deep insights and expert treatment of the key theoretical issues. It includes in-depth coverage of the mathematics of reliable information transmission, both in two-terminal and multi-terminal network scenarios. Updated and considerably expanded, this new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics. The presentations of all core subjects are self contained, even the advanced topics, which helps readers to understand the important connections between seemingly different problems. Finally, 320 end-of-chapter problems, together with helpful solving hints, allow readers to develop a full command of the mathematical techniques. It is an ideal resource for graduate students and researchers in electrical and electronic engineering, computer science and applied mathematics. <|reference_end|>",
"<|reference_start|> Generalizing the Fano inequality: The Fano inequality gives a lower bound on the mutual information between two random variables that take values on an M-element set, provided at least one of the random variables is equiprobable. The authors show several simple lower bounds on mutual information which do not assume such a restriction. In particular, this ran be accomplished by replacing log M with the infinite-order Renyi entropy in the Fano inequality. Applications to hypothesis testing are exhibited along with bounds on mutual information in terms of the a priori and a posteriori error probabilities. > <|reference_end|>",
"<|reference_start|> General Broadcast Channels with Degraded Message Sets: A broadcast channel with one sender and two receivers is considered. Three independent messages are to be transmitted over this channel: one common message which is meant for both receivers, and one private message for each of them. The coding theorem and strong converse for this communication situation is proved for the case when one of the private messages has rate zero. <|reference_end|>"
] | [
1,
2,
3,
11
] | {"<|cite_1|>": "arxiv-59960", "<|cite_2|>": "ss-1044738", "<|cite_3|>": "ss-888154", "<|cite_5|>": "ss-1831684", "<|cite_6|>": "ss-2114163", "<|cite_7|>": "ss-906333", "<|cite_8|>": "ss-690050", "<|cite_9|>": "arxiv-56409", "<|cite_11|>": "ss-1511162", "<|cite_13|>": "ss-1322104", "<|cite_14|>": "ss-2470819", "<|cite_15|>": "ss-1030747"} |
2106.11642 | <|paper_start|> Title: Repulsive Deep Ensembles are Bayesian
Abstract: Repulsive Deep Ensembles are Bayesian: Deep ensembles have recently gained popularity in the deep learning community for their conceptual simplicity and efficiency. However, maintaining functional diversity between ensemble members that are independently trained with gradient descent is challenging. This can lead to pathologies when adding more ensemble members, such as a saturation of the ensemble performance, which converges to the performance of a single model. Moreover, this does not only affect the quality of its predictions, but even more so the uncertainty estimates of the ensemble, and thus its performance on out-of-distribution data. We hypothesize that this limitation can be overcome by discouraging different ensemble members from collapsing to the same function. To this end, we introduce a kernelized repulsive term in the update rule of the deep ensembles. We show that this simple modification not only enforces and maintains diversity among the members but, even more importantly, transforms the maximum a posteriori inference into proper Bayesian inference. Namely, we show that the training dynamics of our proposed repulsive ensembles follow a Wasserstein gradient flow of the KL divergence with the true posterior. We study repulsive terms in weight and function space and empirically compare their performance to standard ensembles and Bayesian baselines on synthetic and real-world prediction tasks.
Introduction
\label{sec:intro}
There have been many recent advances in the theory of sampling algorithms for approximate Bayesian inference, which have changed our interpretation and understanding of these methods. Particularly worth mentioning is the work of <|cite_start|> (Reference: {The variational formulation of the Fokker--Planck equation: The Fokker--Planck equation, or forward Kolmogorov equation, describes the evolution of the probability density for a stochastic process associated with an Ito stochastic differential equation. It ...) <|cite_end|>, who reinterpret Markov Chain Monte Carlo (MCMC) as a gradient flow of the KL divergence over the Wasserstein space of probability measures.
Following this direction, <|cite_start|> (Reference: Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm: We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein's identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.) <|cite_end|> recently proposed the Stein Variational Gradient Descent~(SVGD) method to perform approximate Wasserstein gradient descent. Conceptually, this method, which belongs to the family of particle-optimization variational inference (POVI), introduces a repulsive force through a kernel acting in the parameter space to evolve a set of samples towards high-density regions of the target distribution without collapsing to a point estimate.
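For concreteness, the following minimal NumPy sketch illustrates the basic SVGD particle update (an illustrative re-implementation rather than code from the cited work; the fixed bandwidth \texttt{h} and the callback \texttt{grad\_log\_p}, which returns the gradient of the log target density at every particle, are assumptions of this sketch):
\begin{verbatim}
import numpy as np

def svgd_step(X, grad_log_p, h=1.0, lr=1e-2):
    """One SVGD update for particles X of shape (n, d).

    grad_log_p(X) must return the gradient of the log target density
    evaluated at every particle, also of shape (n, d).
    """
    diff = X[:, None, :] - X[None, :, :]                     # (n, n, d), entries x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))   # RBF kernel matrix (n, n)
    grad_K = -diff / h ** 2 * K[:, :, None]                   # d k(x_i, x_j) / d x_i
    scores = grad_log_p(X)                                    # (n, d)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K @ scores + grad_K.sum(axis=0)) / X.shape[0]
    return X + lr * phi
\end{verbatim}
The first (driving) term transports the particles towards regions of high target density, while the kernel gradient term is the repulsive force that prevents them from collapsing onto a single mode.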
Another method which has achieved great success recently are ensembles of neural networks (so-called \emph{deep ensembles}), which work well both in terms of predictive performance <|cite_start|> (Reference: Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles: Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.) <|cite_end|> <|cite_start|> (Reference: Hyperparameter Ensembles for Robustness and Uncertainty Quantification: Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.) <|cite_end|> as well as uncertainty estimation <|cite_start|> (Reference: Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift: Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive {\em uncertainty}. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. 
Many probabilistic deep learning methods, including Bayesian-and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.) <|cite_end|>, and have also been proposed as a way to perform approximate inference in Bayesian neural networks <|cite_start|> (Reference: Bayesian Deep Learning and a Probabilistic Perspective of Generalization: The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights. Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically underspecified by the data, and can represent many compelling but different solutions. We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization, and propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction, without significant overhead. We also investigate the prior over functions implied by a vague distribution over neural network weights, explaining the generalization properties of such models from a probabilistic perspective. From this perspective, we explain results that have been presented as mysterious and distinct to neural network generalization, such as the ability to fit images with random labels, and show that these results can be reproduced with Gaussian processes. We also show that Bayesian model averaging alleviates double descent, resulting in monotonic performance improvements with increased flexibility. Finally, we provide a Bayesian perspective on tempering for calibrating predictive distributions.) <|cite_end|> <|cite_start|> (Reference: What Are Bayesian Neural Network Posteriors Really Like?: The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. 
We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, they provide distinct predictive distributions from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.) <|cite_end|>.
That being said, while they might allow for the averaging of predictions over several hypotheses, they do not offer any guarantees for the diversity between those hypotheses nor do they provably converge to the true Bayesian posterior under any meaningful limit.
In this work, we show how the introduction of a repulsive term between the members in the ensemble, inspired by SVGD, not only na\"ively guarantees the diversity among the members, avoiding their collapse in parameter space, but also allows for a reformulation of the method as a gradient flow of the KL divergence in the Wasserstein space of distributions.
It thus allows us to endow deep ensembles with convergence guarantees to the true Bayesian posterior.
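As a rough illustration of what such a kernelized repulsive update can look like, consider the following sketch (one possible instantiation in which the repulsion is the gradient of a kernel density estimate of the current ensemble; the exact update rules used in this work are derived later, and \texttt{grad\_log\_post}, the bandwidth \texttt{h}, and the step size are placeholders):
\begin{verbatim}
import numpy as np

def repulsive_ensemble_step(theta, grad_log_post, h=1.0, lr=1e-2):
    """One kernelized repulsive update for an ensemble theta of shape (n, d),
    where each row holds the flattened weights of one member.

    The first term is the usual gradient-ascent (MAP) step shared with standard
    deep ensembles; the second term repels the members from each other via the
    gradient of a kernel density estimate of the current ensemble.
    """
    diff = theta[:, None, :] - theta[None, :, :]               # (n, n, d)
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))     # RBF kernel matrix (n, n)
    grad_K = -diff / h ** 2 * K[:, :, None]                    # d k(theta_i, theta_j) / d theta_i
    drive = grad_log_post(theta)                               # (n, d)
    repulsion = grad_K.sum(axis=1) / K.sum(axis=1, keepdims=True)
    return theta + lr * (drive - repulsion)
\end{verbatim}
Dropping the repulsion term recovers independent gradient ascent on the log posterior from different initializations, i.e., a standard deep ensemble of MAP estimates.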
An additional problem is that BNN inference in weight space can lead to degenerate solutions, due to the overparametrization of these models.
That is, several samples could have very different weights but map to the same function, thus giving a false sense of diversity in the ensemble. This property, which we will refer to as the \emph{non-identifiability} of neural networks (see Appendix~\ref{sec:non_ident_nn}), can lead to redundancies in the posterior distribution.
It implies that methods like MCMC sampling, deep ensembles, and SVGD waste computation in local modes that account for equivalent functions.
Predictive distributions approximated using samples from these modes do not improve over a simple point estimate and lead to poor uncertainty estimation.
Following this idea, <|cite_start|> (Reference: Function Space Particle Optimization for Bayesian Neural Networks: While Bayesian neural networks (BNNs) have drawn increasing attention, their posterior inference remains challenging, due to the high-dimensional and over-parameterized nature. To address this issue, several highly flexible and scalable variational inference procedures based on the idea of particle optimization have been proposed. These methods directly optimize a set of particles to approximate the target posterior. However, their application to BNNs often yields sub-optimal performance, as such methods have a particular failure mode on over-parameterized models. In this paper, we propose to solve this issue by performing particle optimization directly in the space of regression functions. We demonstrate through extensive experiments that our method successfully overcomes this issue, and outperforms strong baselines in a variety of tasks including prediction, defense against adversarial examples, and reinforcement learning.) <|cite_end|> introduced a new method to extend POVI methods to function space, overcoming this limitation.
Here, we also study an update rule that allows for an approximation of the gradient flow of the KL divergence in function space in our proposed repulsive ensembles.
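A corresponding sketch of evaluating the repulsion in function space rather than in weight space is given below (again purely illustrative; \texttt{predict\_fn} and the mini-batch \texttt{X\_batch} are assumed helpers, and propagating the repulsive direction back to the weights would additionally require the Jacobian of the network outputs, e.g., via automatic differentiation):
\begin{verbatim}
import numpy as np

def function_space_repulsion(theta_list, predict_fn, X_batch, h=1.0):
    """Repulsion evaluated on the networks' predictions over a batch of inputs,
    so that weight configurations implementing the same function no longer look
    diverse. Returns the repulsive direction in function space, shape (n, b).
    """
    F = np.stack([np.ravel(predict_fn(t, X_batch)) for t in theta_list])  # (n, b)
    diff = F[:, None, :] - F[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))
    grad_K = -diff / h ** 2 * K[:, :, None]     # gradient w.r.t. the i-th member's outputs
    return grad_K.sum(axis=1) / K.sum(axis=1, keepdims=True)
\end{verbatim}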
We make the following contributions:
\begin{itemize}
\item We derive several different repulsion terms that can be added as regularizers to the gradient updates of deep ensembles to endow them with Bayesian convergence properties.
\item We show that these terms approximate Wasserstein gradient flows of the KL divergence and can be used both in weight space and function space.
\item We compare these proposed methods theoretically to standard deep ensembles and SVGD and highlight their different guarantees.
\item We assess all these methods on synthetic and real-world deep learning tasks and show that our proposed repulsive ensembles can achieve competitive performance and improved uncertainty estimation.
\end{itemize}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/regr_Deep_Ensemble.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/regr_w-SVGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/toy_regr_kde-WGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/toy_regr_ssge-WGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/toy_regr_sge-WGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/regr_HMC.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/regr_f_SVGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/toy_regr_kde-fWGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/toy_regr_ssge-fWGD.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\linewidth,trim={1cm 1cm 1cm 0cm},clip]{images/toy_regr_sge_fWGD.pdf}
\end{subfigure}
\caption{\textbf{BNN 1D regression.} The function-space methods (SVGD and WGD) approach the HMC posterior more closely, while the standard deep ensembles and weight-space methods fail to properly account for the uncertainty, especially the in-between uncertainty.}
\label{fig:toy_reg}
\end{figure}
Related Work
The theoretical and empirical properties of SVGD have been well studied <|cite_start|> (Reference: A Non-Asymptotic Analysis for Stein Variational Gradient Descent: We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\mathbb{R}^d$. In the population limit, SVGD performs gradient descent in the space of probability distributions on the KL divergence with respect to $\pi$, where the gradient is smoothed through a kernel integral operator. In this paper, we provide a novel finite time analysis for the SVGD algorithm. We provide a descent lemma establishing that the algorithm decreases the objective at each iteration, and rates of convergence for the average Stein Fisher divergence (also referred to as Kernel Stein Discrepancy). We also provide a convergence result of the finite particle system corresponding to the practical implementation of SVGD to its population version.) <|cite_end|> <|cite_start|> (Reference: Understanding and Accelerating Particle-Based Variational Inference: Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations. We explore ParVIs from the perspective of Wasserstein gradient flows, and make both theoretical and practical contributions. We unify various finite-particle approximations that existing ParVIs use, and recognize that the approximation is essentially a compulsory smoothing treatment, in either of two equivalent forms. This novel understanding reveals the assumptions and relations of existing ParVIs, and also inspires new ParVIs. We propose an acceleration framework and a principled bandwidth-selection method for general ParVIs; these are based on the developed theory and leverage the geometry of the Wasserstein space. Experimental results show the improved convergence by the acceleration framework and enhanced sample accuracy by the bandwidth-selection method.) <|cite_end|> <|cite_start|> (Reference: Annealed Stein Variational Gradient Descent: Particle based optimization algorithms have recently been developed as sampling methods that iteratively update a set of particles to approximate a target distribution. In particular Stein variational gradient descent has gained attention in the approximate inference literature for its flexibility and accuracy. We empirically explore the ability of this method to sample from multi-modal distributions and focus on two important issues: (i) the inability of the particles to escape from local modes and (ii) the inefficacy in reproducing the density of the different regions. We propose an annealing schedule to solve these issues and show, through various experiments, how this simple solution leads to significant improvements in mode coverage, without invalidating any theoretical properties of the original algorithm.) <|cite_end|> and it can also be seen as a Wasserstein gradient flow of the KL divergence in the Stein geometry <|cite_start|> (Reference: On the geometry of Stein variational gradient descent: Bayesian inference problems require sampling or approximating high-dimensional probability distributions. The focus of this paper is on the recently introduced Stein variational gradient descent methodology, a class of algorithms that rely on iterated steepest descent steps with respect to a reproducing kernel Hilbert space norm. 
This construction leads to interacting particle systems, the mean-field limit of which is a gradient flow on the space of probability distributions equipped with a certain geometrical structure. We leverage this viewpoint to shed some light on the convergence properties of the algorithm, in particular addressing the problem of choosing a suitable positive definite kernel function. Our analysis leads us to considering certain nondifferentiable kernels with adjusted tails. We demonstrate significant performance gains of these in various numerical experiments.) <|cite_end|> <|cite_start|> (Reference: Stein Variational Gradient Descent as gradient flow: Stein variational gradient descent (SVGD) is a deterministic sampling algorithm that iteratively transports a set of particles to approximate given distributions, based on an efficient gradient-based update that guarantees to optimally decrease the KL divergence within a function space. This paper develops the first theoretical analysis on SVGD, discussing its weak convergence properties and showing that its asymptotic behavior is captured by a gradient flow of the KL divergence functional under a new metric structure induced by Stein operator. We also provide a number of results on Stein operator and Stein's identity using the notion of weak derivative, including a new proof of the distinguishability of Stein discrepancy under weak conditions.) <|cite_end|> (see Appendix~\ref{sec:SVGD_flow} for more details). Interestingly, a gradient flow interpretation is also possible for (stochastic gradient) MCMC-type algorithms <|cite_start|> (Reference: Understanding and Accelerating Particle-Based Variational Inference: Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations. We explore ParVIs from the perspective of Wasserstein gradient flows, and make both theoretical and practical contributions. We unify various finite-particle approximations that existing ParVIs use, and recognize that the approximation is essentially a compulsory smoothing treatment, in either of two equivalent forms. This novel understanding reveals the assumptions and relations of existing ParVIs, and also inspires new ParVIs. We propose an acceleration framework and a principled bandwidth-selection method for general ParVIs; these are based on the developed theory and leverage the geometry of the Wasserstein space. Experimental results show the improved convergence by the acceleration framework and enhanced sample accuracy by the bandwidth-selection method.) <|cite_end|>, which can be unified under a general particle inference framework <|cite_start|> (Reference: A Unified Particle-Optimization Framework for Scalable Bayesian Sampling: There has been recent interest in developing scalable Bayesian sampling methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) for big-data analysis. A standard SG-MCMC algorithm simulates samples from a discrete-time Markov chain to approximate a target distribution, thus samples could be highly correlated, an undesired property for SG-MCMC. In contrary, SVGD directly optimizes a set of particles to approximate a target distribution, and thus is able to obtain good approximations with relatively much fewer samples. 
In this paper, we propose a principle particle-optimization framework based on Wasserstein gradient flows to unify SG-MCMC and SVGD, and to allow new algorithms to be developed. Our framework interprets SG-MCMC as particle optimization on the space of probability measures, revealing a strong connection between SG-MCMC and SVGD. The key component of our framework is several particle-approximate techniques to efficiently solve the original partial differential equations on the space of probability measures. Extensive experiments on both synthetic data and deep neural networks demonstrate the effectiveness and efficiency of our framework for scalable Bayesian sampling.) <|cite_end|>.
Moreover, our Wasserstein gradient descent using the SGE approximation can also be derived using an alternative formulation as a gradient flow with smoothed test functions <|cite_start|> (Reference: Understanding and Accelerating Particle-Based Variational Inference: Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations. We explore ParVIs from the perspective of Wasserstein gradient flows, and make both theoretical and practical contributions. We unify various finite-particle approximations that existing ParVIs use, and recognize that the approximation is essentially a compulsory smoothing treatment, in either of two equivalent forms. This novel understanding reveals the assumptions and relations of existing ParVIs, and also inspires new ParVIs. We propose an acceleration framework and a principled bandwidth-selection method for general ParVIs; these are based on the developed theory and leverage the geometry of the Wasserstein space. Experimental results show the improved convergence by the acceleration framework and enhanced sample accuracy by the bandwidth-selection method.) <|cite_end|>.
A projected version of WGD has been studied in <|cite_start|> (Reference: Projected Wasserstein gradient descent for high-dimensional Bayesian inference: We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional Bayesian inference problems. The underlying density function of a particle system of WGD is approximated by kernel density estimation (KDE), which faces the long-standing curse of dimensionality. We overcome this challenge by exploiting the intrinsic low-rank structure in the difference between the posterior and prior distributions. The parameters are projected into a low-dimensional subspace to alleviate the approximation error of KDE in high dimensions. We formulate a projected Wasserstein gradient flow and analyze its convergence property under mild assumptions. Several numerical experiments illustrate the accuracy, convergence, and complexity scalability of pWGD with respect to parameter dimension, sample size, and processor cores.) <|cite_end|>, which could also be readily applied in our framework.
Besides particle methods, Bayesian neural networks <|cite_start|> (Reference: A practical bayesian framework for backpropagation networks: A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.) <|cite_end|> <|cite_start|> (Reference: Bayesian learning for neural networks: ) <|cite_end|> have gained popularity recently <|cite_start|> (Reference: How Good is the Bayes Posterior in Deep Neural Networks Really?: During the past five years the Bayesian deep learning community has developed increasingly accurate and efficient approximate inference procedures that allow for Bayesian inference in deep neural networks. However, despite this algorithmic progress and the promise of improved uncertainty quantification and sample efficiency there are---as of early 2020---no publicized deployments of Bayesian neural networks in industrial practice. In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD. Furthermore, we demonstrate that predictive performance is improved significantly through the use of a "cold posterior" that overcounts evidence. Such cold posteriors sharply deviate from the Bayesian paradigm but are commonly used as heuristic in Bayesian deep learning papers. We put forward several hypotheses that could explain cold posteriors and evaluate the hypotheses through experiments. Our work questions the goal of accurate posterior approximations in Bayesian deep learning: If the true Bayes posterior is poor, what is the use of more accurate approximations? Instead, we argue that it is timely to focus on understanding the origin of the improved performance of cold posteriors.) <|cite_end|> <|cite_start|> (Reference: Bayesian Neural Network Priors Revisited: Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, it is unclear whether these priors accurately reflect our true beliefs about the weight distributions or give optimal performance. To find better priors, we study summary statistics of neural network weights in networks trained using stochastic gradient descent (SGD). We find that convolutional neural network (CNN) and ResNet weights display strong spatial correlations, while fully connected networks (FCNNs) display heavy-tailed weight distributions. 
We show that building these observations into priors can lead to improved performance on a variety of image classification datasets. Surprisingly, these priors mitigate the cold posterior effect in FCNNs, but slightly increase the cold posterior effect in ResNets.) <|cite_end|> <|cite_start|> (Reference: Priors in Bayesian Deep Learning: A Review: While the choice of prior is one of the most critical parts of the Bayesian inference workflow, recent Bayesian deep learning models have often fallen back on vague priors, such as standard Gaussians. In this review, we highlight the importance of prior choices for Bayesian deep learning and present an overview of different priors that have been proposed for (deep) Gaussian processes, variational autoencoders, and Bayesian neural networks. We also outline different methods of learning priors for these models from data. We hope to motivate practitioners in Bayesian deep learning to think more carefully about the prior specification for their models and to provide them with some inspiration in this regard.) <|cite_end|> <|cite_start|> (Reference: What Are Bayesian Neural Network Posteriors Really Like?: The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, they provide distinct predictive distributions from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.) <|cite_end|>, using modern MCMC <|cite_start|> (Reference: Bayesian learning for neural networks: ) <|cite_end|> <|cite_start|> (Reference: How Good is the Bayes Posterior in Deep Neural Networks Really?: During the past five years the Bayesian deep learning community has developed increasingly accurate and efficient approximate inference procedures that allow for Bayesian inference in deep neural networks. However, despite this algorithmic progress and the promise of improved uncertainty quantification and sample efficiency there are---as of early 2020---no publicized deployments of Bayesian neural networks in industrial practice. 
In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD. Furthermore, we demonstrate that predictive performance is improved significantly through the use of a "cold posterior" that overcounts evidence. Such cold posteriors sharply deviate from the Bayesian paradigm but are commonly used as heuristic in Bayesian deep learning papers. We put forward several hypotheses that could explain cold posteriors and evaluate the hypotheses through experiments. Our work questions the goal of accurate posterior approximations in Bayesian deep learning: If the true Bayes posterior is poor, what is the use of more accurate approximations? Instead, we argue that it is timely to focus on understanding the origin of the improved performance of cold posteriors.) <|cite_end|> <|cite_start|> (Reference: Bayesian Neural Network Priors Revisited: Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, it is unclear whether these priors accurately reflect our true beliefs about the weight distributions or give optimal performance. To find better priors, we study summary statistics of neural network weights in networks trained using stochastic gradient descent (SGD). We find that convolutional neural network (CNN) and ResNet weights display strong spatial correlations, while fully connected networks (FCNNs) display heavy-tailed weight distributions. We show that building these observations into priors can lead to improved performance on a variety of image classification datasets. Surprisingly, these priors mitigate the cold posterior effect in FCNNs, but slightly increase the cold posterior effect in ResNets.) <|cite_end|> <|cite_start|> (Reference: Exact Langevin Dynamics with Stochastic Gradients: Stochastic gradient Markov Chain Monte Carlo algorithms are popular samplers for approximate inference, but they are generally biased. We show that many recent versions of these methods (e.g. Chen et al. (2014)) cannot be corrected using Metropolis-Hastings rejection sampling, because their acceptance probability is always zero. We can fix this by employing a sampler with realizable backwards trajectories, such as Gradient-Guided Monte Carlo (Horowitz, 1991), which generalizes stochastic gradient Langevin dynamics (Welling and Teh, 2011) and Hamiltonian Monte Carlo. We show that this sampler can be used with stochastic gradients, yielding nonzero acceptance probabilities, which can be computed even across multiple steps.) <|cite_end|> <|cite_start|> (Reference: BNNpriors: A library for Bayesian neural network inference with different prior distributions: Bayesian neural networks have shown great promise in many applications where calibrated uncertainty estimates are crucial and can often also lead to a higher predictive performance. However, it remains challenging to choose a good prior distribution over their weights. While isotropic Gaussian priors are often chosen in practice due to their simplicity, they do not reflect our true prior beliefs well and can lead to suboptimal performance. 
Our new library, BNNpriors, enables state-of-the-art Markov Chain Monte Carlo inference on Bayesian neural networks with a wide range of predefined priors, including heavy-tailed ones, hierarchical ones, and mixture priors. Moreover, it follows a modular approach that eases the design and implementation of new custom priors. It has facilitated foundational discoveries on the nature of the cold posterior effect in Bayesian neural networks and will hopefully catalyze future research as well as practical applications in this area.) <|cite_end|> and variational inference techniques <|cite_start|> (Reference: Weight Uncertainty in Neural Networks: We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.) <|cite_end|> <|cite_start|> (Reference: The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks: Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights. Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance. In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization. For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after convergence. This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance. Furthermore, we find that such factorized parameterizations improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence.) <|cite_end|> <|cite_start|> (Reference: Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors: Bayesian neural networks (BNNs) demonstrate promising success in improving the robustness and uncertainty quantification of modern deep learning. However, they generally struggle with underfitting at scale and parameter efficiency. On the other hand, deep ensembles have emerged as alternatives for uncertainty quantification that, while outperforming BNNs on certain problems, also suffer from efficiency issues. It remains unclear how to combine the strengths of these two approaches and remediate their common issues. To tackle this challenge, we propose a rank-1 parameterization of BNNs, where each weight matrix involves only a distribution on a rank-1 subspace. 
We also revisit the use of mixture approximate posteriors to capture multiple modes, where unlike typical mixtures, this approach admits a significantly smaller memory increase (e.g., only a 0.4% increase for a ResNet-50 mixture of size 10). We perform a systematic empirical study on the choices of prior, variational posterior, and methods to improve training. For ResNet-50 on ImageNet, Wide ResNet 28-10 on CIFAR-10/100, and an RNN on MIMIC-III, rank-1 BNNs achieve state-of-the-art performance across log-likelihood, accuracy, and calibration on the test sets and out-of-distribution variants.) <|cite_end|> <|cite_start|> (Reference: Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning: Marginal-likelihood based model-selection, even though promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace's method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual-tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).) <|cite_end|>. On the other hand, ensemble methods have also been extensively studied <|cite_start|> (Reference: Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles: Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.) <|cite_end|> <|cite_start|> (Reference: Deep Ensembles: A Loss Landscape Perspective: Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. 
While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable variational Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. Finally, we evaluate the relative effects of ensembling, subspace based methods and ensembles of subspace based methods, and the experimental results validate our hypothesis.) <|cite_end|> <|cite_start|> (Reference: Bayesian Deep Learning and a Probabilistic Perspective of Generalization: The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights. Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically underspecified by the data, and can represent many compelling but different solutions. We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization, and propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction, without significant overhead. We also investigate the prior over functions implied by a vague distribution over neural network weights, explaining the generalization properties of such models from a probabilistic perspective. From this perspective, we explain results that have been presented as mysterious and distinct to neural network generalization, such as the ability to fit images with random labels, and show that these results can be reproduced with Gaussian processes. We also show that Bayesian model averaging alleviates double descent, resulting in monotonic performance improvements with increased flexibility. Finally, we provide a Bayesian perspective on tempering for calibrating predictive distributions.) <|cite_end|> <|cite_start|> (Reference: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs: The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). 
Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.) <|cite_end|> <|cite_start|> (Reference: Hyperparameter Ensembles for Robustness and Uncertainty Quantification: Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.) <|cite_end|> <|cite_start|> (Reference: Snapshot Ensembles: Train 1, get M for free: Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.) <|cite_end|> <|cite_start|> (Reference: Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning: The posteriors over neural network weights are high dimensional and multimodal. Each mode typically characterizes a meaningfully different representation of the data. We develop Cyclical Stochastic Gradient MCMC (SG-MCMC) to automatically explore such distributions. In particular, we propose a cyclical stepsize schedule, where larger steps discover new modes, and smaller steps characterize each mode. We also prove non-asymptotic convergence of our proposed algorithm. Moreover, we provide extensive experimental results, including ImageNet, to demonstrate the scalability and effectiveness of cyclical SG-MCMC in learning complex multimodal distributions, especially for fully Bayesian inference with modern deep neural networks.) 
<|cite_end|> <|cite_start|> (Reference: BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning: Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble's cost for both training and testing increases linearly with the number of networks, which quickly becomes untenable. In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles. BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member. Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch. Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and out-of-distribution tasks, BatchEnsemble yields competitive accuracy and uncertainties as typical ensembles; the speedup at test time is 3X and memory reduction is 3X at an ensemble of size 4. We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having a much lower computational and memory costs. We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet which involves 100 sequential learning tasks.) <|cite_end|>.Moreover, repulsive interactions between the members have also been studied in <|cite_start|> (Reference: Handling Black Swan Events in Deep Learning with Diversely Extrapolated Neural Networks.: By virtue of their expressive power, neural networks (NNs) are well suited to fitting large, complex datasets, yet they are also known to
produce similar predictions for points outside the training distribution.
As such, they are, like humans, under the influence of the Black Swan theory: models tend to be extremely "surprised" by rare events, leading to potentially disastrous consequences, while justifying these same events in hindsight.
To avoid this pitfall, we introduce DENN, an ensemble approach building a set of Diversely Extrapolated Neural Networks that fits the training data and is able to generalize more diversely when extrapolating to novel data points.
This leads DENN to output highly uncertain predictions for unexpected inputs.
We achieve this by adding a diversity term in the loss function used to train the model, computed at specific inputs.
We first illustrate the usefulness of the method on a low-dimensional regression problem.
Then, we show how the loss can be adapted to tackle anomaly detection during classification, as well as safe imitation learning problems.) <|cite_end|>.
Moreover, providing Bayesian interpretations for deep ensembles has been previously attempted through the lenses of stationary SGD distributions <|cite_start|> (Reference: Stochastic Gradient Descent as Approximate Bayesian Inference: Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results. (1) We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions. (2) We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models. (3) We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly. (4) We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates. Finally (5), we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.) <|cite_end|> <|cite_start|> (Reference: Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks: Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such "out-of-equilibrium" behavior is a consequence of highly non-isotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims, proven in the appendix.) <|cite_end|>, ensembles of linear models <|cite_start|> (Reference: Sample-then-optimize posterior sampling for Bayesian linear models: In modern machine learning it is common to train models which have an extremely high intrinsic capacity. The results obtained are often initialization dependent, are different for disparate optimizers and in some cases have no explicit regularization. This raises difficult questions about generalization [1]. A natural approach to questions of generalization is a Bayesian one. There is therefore a growing literature attempting to understand how Bayesian posterior inference could emerge from the complexity of modern practice [2, 3], even without having such a procedure as the stated goal.) 
<|cite_end|>, additional random functions <|cite_start|> (Reference: Deep Exploration via Randomized Value Functions.: We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.) <|cite_end|> <|cite_start|> (Reference: Conservative uncertainty estimation by fitting prior networks: Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.) <|cite_end|> <|cite_start|> (Reference: Bayesian Deep Ensembles via the Neural Tangent Kernel: We explore the link between deep ensembles and Gaussian processes (GPs) through the lens of the Neural Tangent Kernel (NTK): a recent development in understanding the training dynamics of wide neural networks (NNs). Previous work has shown that even in the infinite width limit, when NNs become GPs, there is no GP posterior interpretation to a deep ensemble trained with squared error loss. We introduce a simple modification to standard deep ensembles training, through addition of a computationally-tractable, randomised and untrainable function to each ensemble member, that enables a posterior interpretation in the infinite width limit. When ensembled together, our trained NNs give an approximation to a posterior predictive distribution, and we prove that our Bayesian deep ensembles make more conservative predictions than standard deep ensembles in the infinite width limit. Finally, using finite width NNs we demonstrate that our Bayesian deep ensembles faithfully emulate the analytic posterior predictive when available, and can outperform standard deep ensembles in various out-of-distribution settings, for both regression and classification tasks.) <|cite_end|>, approximate inference <|cite_start|> (Reference: Bayesian Deep Learning and a Probabilistic Perspective of Generalization: The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights. Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically underspecified by the data, and can represent many compelling but different solutions. 
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization, and propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction, without significant overhead. We also investigate the prior over functions implied by a vague distribution over neural network weights, explaining the generalization properties of such models from a probabilistic perspective. From this perspective, we explain results that have been presented as mysterious and distinct to neural network generalization, such as the ability to fit images with random labels, and show that these results can be reproduced with Gaussian processes. We also show that Bayesian model averaging alleviates double descent, resulting in monotonic performance improvements with increased flexibility. Finally, we provide a Bayesian perspective on tempering for calibrating predictive distributions.) <|cite_end|>, Stein variational inference <|cite_start|> (Reference: On Stein Variational Neural Network Ensembles: Ensembles of deep neural networks have achieved great success recently, but they do not offer a proper Bayesian justification. Moreover, while they allow for averaging of predictions over several hypotheses, they do not provide any guarantees for their diversity, leading to redundant solutions in function space. In contrast, particle-based inference methods, such as Stein variational gradient descent (SVGD), offer a Bayesian framework, but rely on the choice of a kernel to measure the similarity between ensemble members. In this work, we study different SVGD methods operating in the weight space, function space, and in a hybrid setting. We compare the SVGD approaches to other ensembling-based methods in terms of their theoretical properties and assess their empirical performance on synthetic and real-world tasks. We find that SVGD using functional and hybrid kernels can overcome the limitations of deep ensembles. It improves on functional diversity and uncertainty estimation and approaches the true Bayesian posterior more closely. Moreover, we show that using stochastic SVGD updates, as opposed to the standard deterministic ones, can further improve the performance.) <|cite_end|>, and marginal likelihood lower bounds <|cite_start|> (Reference: A Bayesian Perspective on Training Speed and Model Selection: We take a Bayesian perspective to illustrate a connection between training speed and the marginal likelihood in linear models. This provides two major insights: first, that a measure of a model's training speed can be used to estimate its marginal likelihood. Second, that this measure, under certain conditions, predicts the relative weighting of models in linear model combinations trained to minimize a regression loss. We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks. We further provide encouraging empirical evidence that the intuition developed in these settings also holds for deep neural networks trained with stochastic gradient descent. Our results suggest a promising new direction towards explaining why neural networks trained with stochastic gradient descent are biased towards functions that generalize well.) 
<|cite_end|>, and ensembles have also been shown to provide good approximations to the true BNN posterior in some settings <|cite_start|> (Reference: What Are Bayesian Neural Network Posteriors Really Like?: The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex. For computational reasons, researchers approximate this posterior using inexpensive mini-batch methods such as mean-field variational inference or stochastic-gradient Markov chain Monte Carlo (SGMCMC). To investigate foundational questions in Bayesian deep learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern architectures. We show that (1) BNNs can achieve significant performance gains over standard training and deep ensembles; (2) a single long HMC chain can provide a comparable representation of the posterior to multiple shorter chains; (3) in contrast to recent studies, we find posterior tempering is not needed for near-optimal performance, with little evidence for a "cold posterior" effect, which we show is largely an artifact of data augmentation; (4) BMA performance is robust to the choice of prior scale, and relatively similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5) Bayesian neural networks show surprisingly poor generalization under domain shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods can provide good generalization, they provide distinct predictive distributions from HMC. Notably, deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.) <|cite_end|>.
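To make the ensemble–marginalization connection referenced above concrete, recall that the Bayesian posterior predictive integrates over parameters, whereas a deep ensemble replaces this integral by an equal-weight average over $M$ independently trained solutions (notation introduced here purely for illustration):
\[
p(y \mid x, \mathcal{D}) \;=\; \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta \;\approx\; \frac{1}{M} \sum_{m=1}^{M} p(y \mid x, \theta_m),
\]
where each $\theta_m$ results from a separate random initialization and training run; the works cited above differ mainly in how they justify treating $\{\theta_m\}_{m=1}^{M}$ as approximate posterior samples.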
Furthermore, variational inference in function space has recently gained attention <|cite_start|> (Reference: Functional Variational Bayesian Neural Networks: Variational Bayesian neural networks (BNNs) perform variational inference over weights, but it is difficult to specify meaningful priors and approximate posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes equals the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors entailing rich structures, including Gaussian processes and implicit stochastic processes. Empirically, we find fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and scale to large datasets.) <|cite_end|> and the limitations of the KL divergence have been studied in <|cite_start|> (Reference: Understanding Variational Inference in Function-Space: Recent work has attempted to directly approximate the `function-space' or predictive posterior distribution of Bayesian models, without approximating the posterior distribution over the parameters. This is appealing in e.g. Bayesian neural networks, where we only need the former, and the latter is hard to represent. In this work, we highlight some advantages and limitations of employing the Kullback-Leibler divergence in this setting. For example, we show that minimizing the KL divergence between a wide class of parametric distributions and the posterior induced by a (non-degenerate) Gaussian process prior leads to an ill-defined objective function. Then, we propose (featurized) Bayesian linear regression as a benchmark for `function-space' inference methods that directly measures approximation quality. We apply this methodology to assess aspects of the objective function and inference scheme considered in Sun, Zhang, Shi, and Grosse (2018), emphasizing the quality of approximation to Bayesian inference as opposed to predictive performance.) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> {The variational formulation of the Fokker--Planck equation: The Fokker--Planck equation, or forward Kolmogorov equation, describes the evolution of the probability density for a stochastic process associated with an Ito stochastic differential equation. It ... <|reference_end|>",
"<|reference_start|> Annealed Stein Variational Gradient Descent: Particle based optimization algorithms have recently been developed as sampling methods that iteratively update a set of particles to approximate a target distribution. In particular Stein variational gradient descent has gained attention in the approximate inference literature for its flexibility and accuracy. We empirically explore the ability of this method to sample from multi-modal distributions and focus on two important issues: (i) the inability of the particles to escape from local modes and (ii) the inefficacy in reproducing the density of the different regions. We propose an annealing schedule to solve these issues and show, through various experiments, how this simple solution leads to significant improvements in mode coverage, without invalidating any theoretical properties of the original algorithm. <|reference_end|>",
"<|reference_start|> Weight Uncertainty in Neural Networks: We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning. <|reference_end|>",
"<|reference_start|> Deep Exploration via Randomized Value Functions.: We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation. <|reference_end|>"
] | [
0,
10,
28,
44
] | {"<|cite_22|>": "ss-915868", "<|cite_23|>": "arxiv-104006", "<|multi_cite_5_1|>": "arxiv-111855", "<|multi_cite_5_2|>": "arxiv-274147", "<|cite_6|>": "arxiv-208335", "<|multi_cite_7_1|>": "arxiv-249435", "<|multi_cite_7_2|>": "arxiv-337742", "<|cite_24|>": "arxiv-192933", "<|multi_cite_1_1|>": "arxiv-272553", "<|multi_cite_1_2|>": "arxiv-164802", "<|multi_cite_1_3|>": "arxiv-316781", "<|multi_cite_8_1|>": "arxiv-237342", "<|multi_cite_8_2|>": "ss-874657", "<|cite_2|>": "arxiv-164802", "<|cite_3|>": "arxiv-160502", "<|cite_9|>": "arxiv-164802", "<|cite_25|>": "arxiv-320679", "<|multi_cite_10_1|>": "ss-1109000", "<|multi_cite_10_2|>": "ss-933277", "<|multi_cite_11_1|>": "arxiv-246765", "<|multi_cite_11_2|>": "arxiv-320790", "<|multi_cite_11_3|>": "arxiv-340984", "<|multi_cite_11_4|>": "arxiv-337742", "<|multi_cite_12_1|>": "ss-933277", "<|multi_cite_12_2|>": "arxiv-246765", "<|multi_cite_12_3|>": "arxiv-320790", "<|multi_cite_12_4|>": "arxiv-318697", "<|multi_cite_12_5|>": "arxiv-341017", "<|multi_cite_13_1|>": "arxiv-78051", "<|multi_cite_13_2|>": "arxiv-246865", "<|multi_cite_13_3|>": "arxiv-265538", "<|multi_cite_13_4|>": "arxiv-333578", "<|multi_cite_4_1|>": "arxiv-111855", "<|multi_cite_4_2|>": "arxiv-237995", "<|multi_cite_4_3|>": "arxiv-249435", "<|multi_cite_4_4|>": "arxiv-149873", "<|multi_cite_4_5|>": "arxiv-274147", "<|multi_cite_4_6|>": "arxiv-120584", "<|multi_cite_4_7|>": "arxiv-191090", "<|multi_cite_4_8|>": "arxiv-248579", "<|cite_26|>": "ss-2030252", "<|multi_cite_14_1|>": "arxiv-121657", "<|multi_cite_14_2|>": "arxiv-138608", "<|cite_15|>": "ss-1391301", "<|multi_cite_16_1|>": "ss-1527503", "<|multi_cite_16_2|>": "ss-752352", "<|multi_cite_16_3|>": "arxiv-277847", "<|cite_17|>": "arxiv-249435", "<|cite_18|>": "arxiv-349730", "<|cite_19|>": "arxiv-299668", "<|cite_20|>": "arxiv-337742", "<|cite_21|>": "arxiv-195251", "<|cite_27|>": "arxiv-304460"} |
2405.02121 | <|paper_start|> Title: Accurate Pose Prediction on Signed Distance Fields for Mobile Ground Robots in Rough Terrain
Abstract: Accurate Pose Prediction on Signed Distance Fields for Mobile Ground Robots in Rough Terrain: Autonomous locomotion for mobile ground robots in unstructured environments such as waypoint navigation or flipper control requires a sufficiently accurate prediction of the robot-terrain interaction. Heuristics like occupancy grids or traversability maps are widely used but limit actions available to robots with active flippers as joint positions are not taken into account. We present a novel iterative geometric method to predict the 3D pose of mobile ground robots with active flippers on uneven ground with high accuracy and online planning capabilities. This is achieved by utilizing the ability of signed distance fields to represent surfaces with sub-voxel accuracy. The effectiveness of the presented approach is demonstrated on two different tracked robots in simulation and on a real platform. Compared to a tracking system as ground truth, our method predicts the robot position and orientation with an average accuracy of 3.11 cm and 3.91°, outperforming a recent heightmap-based approach. The implementation is made available as an open-source ROS package.
Introduction
Mobile ground robots can support humans in a wide range of applications. In disaster response <|cite_start|> (Reference: German rescue robotics center (drz): A holistic approach for robotic systems assisting in emergency response: To meet the challenges involved in providing adequate robotic support to first responders, a holistic approach is needed. This requires close cooperation of first responders, researchers and companies for scenario-based needs analysis, iterative development of the corresponding system functionality and integrated robotic systems as well as human-robot teamwork support, and experimentation, system testing and evaluation in realistic missions carried out with or by first responders. We describe how such a holistic approach is implemented by the partners in the cooperative project A-DRZ for the establishment of the German Rescue Robotics Center (DRZ). The A-DRZ approach addresses important requirements identified by first responders: adaptation of operational capabilities of robotic platforms; robust network connectivity; autonomous assistance functions facilitating robot control; improving situation awareness for strategic and tactical mission planning; integration of human-robot teams in the first responders' mission command structure. Solutions resulting from these efforts are tested and evaluated in excercises utilizing the advanced capabilities at the DRZ Living Lab and in external deployments.) <|cite_end|> and planetary exploration <|cite_start|> (Reference: Fast Approximate Clearance Evaluation for Rovers with Articulated Suspension Systems: We present a light-weight body-terrain clearance evaluation algorithm for the automated path planning of NASA's Mars 2020 rover. Extraterrestrial path planning is challenging due to the combination of terrain roughness and severe limitation in computational resources. Path planning on cluttered and/or uneven terrains requires repeated safety checks on all the candidate paths at a small interval. Predicting the future rover state requires simulating the vehicle settling on the terrain, which involves an inverse-kinematics problem with iterative nonlinear optimization under geometric constraints. However, such expensive computation is intractable for slow spacecraft computers, such as RAD750, which is used by the Curiosity Mars rover and upcoming Mars 2020 rover. We propose the Approximate Clearance Evaluation (ACE) algorithm, which obtains conservative bounds on vehicle clearance, attitude, and suspension angles without iterative computation. It obtains those bounds by estimating the lowest and highest heights that each wheel may reach given the underlying terrain, and calculating the worst-case vehicle configuration associated with those extreme wheel heights. The bounds are guaranteed to be conservative, hence ensuring vehicle safety during autonomous navigation. ACE is planned to be used as part of the new onboard path planner of the Mars 2020 rover. This paper describes the algorithm in detail and validates our claim of conservatism and fast computation through experiments.) <|cite_end|> they enter hazardous areas and provide a remote presence in environments dangerous to humans. 
In an industrial application such as construction site monitoring <|cite_start|> (Reference: 3D Coverage Path Planning for Efficient Construction Progress Monitoring: On construction sites, progress must be monitored continuously to ensure that the current state corresponds to the planned state in order to increase efficiency, safety and detect construction defects at an early stage. Autonomous mobile robots can document the state of construction with high data quality and consistency. However, finding a path that fully covers the construction site is a challenging task as it can be large, slowly changing over time, and contain dynamic objects. Existing approaches are either exploration approaches that require a long time to explore the entire building, object scanning approaches that are not suitable for large and complex buildings, or planning approaches that only consider 2D coverage. In this paper, we present a novel approach for planning an efficient 3D path for progress monitoring on large construction sites with multiple levels. By making use of an existing 3D model we ensure that all surfaces of the building are covered by the sensor payload such as a 360-degree camera or a lidar. This enables the consistent and reliable monitoring of construction site progress with an autonomous ground robot. We demonstrate the effectiveness of the proposed planner on an artificial and a real building model, showing that much shorter paths and better coverage are achieved than with a traditional exploration planner.) <|cite_end|>, mobile robots automate tedious and repetitive tasks. These environments have in common that they are unstructured and require the traversal of challenging uneven terrain.
Tracked robots are well suited to these environments because of their ability to negotiate rough terrain. Many platforms can additionally reconfigure their kinematics to further improve their capability of overcoming obstacles, e.g., by changing the shape of the tracks with active flippers or by shifting the \acl{com} with a heavy manipulator arm.
However, the additional degrees of freedom make teleoperation of the robot more challenging, thereby increasing the mental load of the operator and the risk of errors.
Autonomous locomotion capabilities such as path planning <|cite_start|> (Reference: Planning Stable and Efficient Paths for Reconfigurable Robots On Uneven Terrain: ) <|cite_end|> or whole-body planning <|cite_start|> (Reference: Optimization-based planning for autonomous traversal of obstacles with mobile ground robots: Mobile robotic platforms which are traversing unstructured environments with challenging uneven terrain are permanently endangered of falling over. Previous research on trajectory planning methods for the prevention of vehicle tip-over is mostly limited to basic mobility systems with only few degrees of freedom (DOF). This paper proposes a novel optimization-based planning approach that enables mobile robots to autonomously traverse obstacles and rough terrain more safely. A 3D world model as provided from external sensors like Lidar is used to compute a whole-body motion plan in advance by optimizing the trajectories of each joint. Active flipper tracks maximize ground contact for improved traction and, if available, manipulator arm joints are used to further improve stability metrics. Additional constraints prevent collisions with the environment and the robot itself. The presented approach makes only few assumptions about the robot’s configuration and is applicable to a wide range of wheeled or tracked platforms. This is demonstrated by experimental evaluation for two different robots in simulation and for one physical robot. In four different test scenarios it is shown, that the proposed approach effectively prevents vehicle tip-over during traversal of uneven ground.) <|cite_end|> can alleviate the operator of stress but as part of the planning process, a prediction about the robot-terrain interaction is required.
A common approach is the approximation of this interaction via heuristics such as an occupancy grid <|cite_start|> (Reference: Using Occupancy Grids for Mobile Robot Perception and navigation: An approach to robot perception and world modeling that uses a probabilistic tesselated representation of spatial information called the occupancy grid is reviewed. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid, using readings taken from several sensors over multiple points of view. The use of occupancy grids from mapping and for navigation is examined. Operations on occupancy grids and extensions of the occupancy grid framework are briefly considered.<<ETX>>) <|cite_end|> or traversability map <|cite_start|> (Reference: Navigation planning for legged robots in challenging terrain: This paper presents a framework for planning safe and efficient paths for a legged robot in rough and unstructured terrain. The proposed approach allows to exploit the distinctive obstacle negotiation capabilities of legged robots, while keeping the complexity low enough to enable planning over considerable distances in short time. We compute typical terrain characteristics such as slope, roughness, and steps to build a traversability map. This map is used to assess the costs of individual robot footprints as a function of the robot-specific obstacle negotiating capabilities for steps, gaps and stairs. Our sampling-based planner employs the RRT* algorithm to optimize path length and safety. The planning framework has a hierarchical architecture to frequently replan the path during execution as new terrain is perceived with onboard sensors. Furthermore, a cascaded planning structure makes use of different levels of simplification to allow for fast search in simple environments, while retaining the ability to find complex solutions, such as paths through narrow passages. The proposed navigation planning framework is integrated on the quadrupedal robot StarlETH and extensively tested in simulation as well as on the real platform.) <|cite_end|>. While this works well in semi-structured terrain, the approximation has to be chosen conservatively to prevent accidents and therefore limits the actions available to the robot significantly. Moreover, heuristics are not optimal for articulated robots because they do not take the current joint positions into account.
Pose prediction approaches, also known as robot settling, can provide a more accurate model of the robot-terrain interaction by estimating the 3D robot position and orientation on the ground.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/front_page}
\caption{Based on an SE(2) pose, the joint configuration and an \ac{esdf} of the environment (top-left), the 3D pose and terrain interaction are predicted (bottom-left). The photo on the right shows the robot Asterix on the same terrain for comparison.}
\label{fig:approach_front_page}
\end{figure}
We propose a novel pose prediction method for mobile ground robots based on \acfp{esdf} <|cite_start|> (Reference: Voxblox: Incremental 3D Euclidean Signed Distance Fields for On-Board MAV Planning: Micro Aerial Vehicles (MAVs) that operate in unstructured, unexplored environments require fast and flexible local planning, which can replan when new parts of the map are explored. Trajectory optimization methods fulfill these needs, but require obstacle distance information, which can be given by Euclidean Signed Distance Fields (ESDFs). We propose a method to incrementally build ESDFs from Truncated Signed Distance Fields (TSDFs), a common implicit surface representation used in computer graphics and vision. TSDFs are fast to build and smooth out sensor noise over many observations, and are designed to produce surface meshes. Meshes allow human operators to get a better assessment of the robot's environment, and set high-level mission goals. We show that we can build TSDFs faster than Octomaps, and that it is more accurate to build ESDFs out of TSDFs than occupancy maps. Our complete system, called voxblox, will be available as open source and runs in real-time on a single CPU core. We validate our approach on-board an MAV, by using our system with a trajectory optimization local planner, entirely on-board and in real-time.) <|cite_end|> that takes the joint configuration into account (see Fig~\ref{fig:approach_front_page}).
Moreover, the 3D representation allows for the support of multi-level environments. The implementation is available as open-source for the robot framework ROS\footnote{\url{https://github.com/tu-darmstadt-ros-pkg/sdf_contact_estimation}}.
We demonstrate that our approach generalizes to different ground robots by evaluating it on two platforms in simulation. Additionally, we perform experiments on the real robot using a tracking system for ground truth localization data and show our approach to be fast enough for online planning. The evaluation scenario is based on the RoboCup \ac{rrl} competition\footnote{\url{https://rrl.robocup.org/}} arenas that are designed to simulate unstructured environments found in real rescue missions. The dataset and code are publicly available to facilitate the reproduction of our results.
Related Work
Various methods have been developed to estimate the robot-terrain interaction for planning algorithms. An over\-view and classification of prior work on traversability estimation methods is given in <|cite_start|> (Reference: Terrain traversability analysis methods for unmanned ground vehicles: A survey: ) <|cite_end|>.
A common approach to approximating the robot-terrain interaction is the use of heuristics.
In <|cite_start|> (Reference: Using Occupancy Grids for Mobile Robot Perception and navigation: An approach to robot perception and world modeling that uses a probabilistic tesselated representation of spatial information called the occupancy grid is reviewed. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid, using readings taken from several sensors over multiple points of view. The use of occupancy grids from mapping and for navigation is examined. Operations on occupancy grids and extensions of the occupancy grid framework are briefly considered.<<ETX>>) <|cite_end|>, an occupancy grid is proposed for robot navigation. The 2D grid representation stores the probability of being occupied in each cell and therefore simplifies the problem to a binary decision whether a surface is traversable or not.
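To illustrate the probabilistic bookkeeping behind such occupancy grids (a standard log-odds formulation, not necessarily the exact update of the cited work), each cell $c$ maintains a log-odds occupancy value that is updated recursively with every range measurement $z_t$:
\[
\ell_t(c) \;=\; \ell_{t-1}(c) + \log \frac{p(\mathrm{occ}_c \mid z_t)}{1 - p(\mathrm{occ}_c \mid z_t)} - \ell_0(c),
\]
where $\ell_0$ denotes the prior log-odds; thresholding the recovered probability still leaves the planner with a binary traversable/blocked decision per cell.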
This limitation is addressed by the authors of <|cite_start|> (Reference: Navigation planning for legged robots in challenging terrain: This paper presents a framework for planning safe and efficient paths for a legged robot in rough and unstructured terrain. The proposed approach allows to exploit the distinctive obstacle negotiation capabilities of legged robots, while keeping the complexity low enough to enable planning over considerable distances in short time. We compute typical terrain characteristics such as slope, roughness, and steps to build a traversability map. This map is used to assess the costs of individual robot footprints as a function of the robot-specific obstacle negotiating capabilities for steps, gaps and stairs. Our sampling-based planner employs the RRT* algorithm to optimize path length and safety. The planning framework has a hierarchical architecture to frequently replan the path during execution as new terrain is perceived with onboard sensors. Furthermore, a cascaded planning structure makes use of different levels of simplification to allow for fast search in simple environments, while retaining the ability to find complex solutions, such as paths through narrow passages. The proposed navigation planning framework is integrated on the quadrupedal robot StarlETH and extensively tested in simulation as well as on the real platform.) <|cite_end|>. They propose the traversability map which introduces a continuous traversability value based on the local ground geometry, encoding factors such as roughness, step height, inclined slopes, or gaps.
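One common way to instantiate such a traversability value, shown here purely for illustration with robot-specific weights $w_i$ and critical values, combines the local slope $s$, roughness $r$, and step height $h$ of a cell as
\[
t \;=\; 1 - \left( w_s \frac{s}{s_{\mathrm{crit}}} + w_r \frac{r}{r_{\mathrm{crit}}} + w_h \frac{h}{h_{\mathrm{crit}}} \right), \qquad w_s + w_r + w_h = 1,
\]
so that $t$ drops towards zero as any single factor approaches the limit the robot can still negotiate.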
While fast to compute, heuristics only give a conservative estimate of the robot-terrain interaction. They cannot represent situations in which a robot may traverse the terrain in one orientation but not in another, and they do not take the joint configuration into account. Pose prediction approaches address these shortcomings by virtually settling the robot on the ground using a map of the environment. Following <|cite_start|> (Reference: Design and comparative evaluation of an iterative contact point estimation method for static stability estimation of mobile actively reconfigurable robots: ) <|cite_end|>, we categorize prior work into physics simulation, geometric methods, and learning approaches.
Physics simulations compute forces and accelerations over time to drop the robot under the influence of gravity onto the surface. In <|cite_start|> (Reference: Planning Stable and Efficient Paths for Reconfigurable Robots On Uneven Terrain: ) <|cite_end|>, the physics engine \ac{ode} has been used to simulate poses for a combined path and joint configuration planning. While no timings have been given by the authors, a general physics engine is slow as also shown by <|cite_start|> (Reference: Design and comparative evaluation of an iterative contact point estimation method for static stability estimation of mobile actively reconfigurable robots: ) <|cite_end|>. The authors of <|cite_start|> (Reference: Pose estimation-based path planning for a tracked mobile robot traversing uneven terrains: ) <|cite_end|> propose a specialized approach that formulates the contact problem as a linear complementary problem (LCP) which is solved using Lemke's method. A similar approach is taken by <|cite_start|> (Reference: Pose Estimation of Vehicles Over Uneven Terrain: This paper presents a method for pose estimation of off-road vehicles moving over uneven terrain. It determines the contact points between the wheels and the terrain, assuming rigid contacts between an arbitrary number of wheels and ground. The terrain is represented by a 3D points cloud, interpolated by a B-patch to provide a continuous terrain representation. The pose estimation problem is formulated as a rigid body contact problem for a given location of the vehicle's center of mass over the terrain and a given yaw angle. The contact points between the wheels and ground are determined by releasing the vehicle from a given point above the terrain, until the contact forces between the wheels and ground, and the gravitational force, reach equilibrium. The contact forces are calculated using singular value decomposition (SVD) of the deduced contact matrix. The proposed method is computationally efficient, allowing real time computation during motion, as demonstrated in several examples. Accurate pose estimations can be used for motion planning, stability analyses and traversability analyses over uneven terrain.) <|cite_end|>. They interpolate the terrain with a cubic B-patch and simulate the robot until gravitational force and contact forces reach a static equilibrium. The contact forces are computed using a singular value decomposition (SVD) with a claimed 10-times speed up over the previous LCP approach.
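The condition these physics-based settling methods drive the system towards can be summarized as a static equilibrium (a simplified, frictionless statement in our own notation): contact forces $f_i$ acting at contact points $c_i$ must balance gravity on the total mass $m$, with $g$ the gravity vector,
\[
\sum_i f_i + m\,g \;=\; 0, \qquad \sum_i (c_i - c_{\mathrm{com}}) \times f_i \;=\; 0, \qquad f_i^{\top} n_i \;\ge\; 0,
\]
where the last inequality expresses that rigid contacts can only push along the surface normals $n_i$; the cited works differ in how this system is solved (Lemke's method for the LCP formulation versus the SVD-based solution).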
Geometric methods estimate the robot-terrain interaction directly without simulating time steps. Hence, they are typically faster. In <|cite_start|> (Reference: Autonomous Off-road Navigation over Extreme Terrains with Perceptually-challenging Conditions: We propose a framework for resilient autonomous navigation in perceptually challenging unknown environments with mobility-stressing elements such as uneven surfaces with rocks and boulders, steep slopes, negative obstacles like cliffs and holes, and narrow passages. Environments are GPS-denied and perceptually-degraded with variable lighting from dark to lit and obscurants (dust, fog, smoke). Lack of prior maps and degraded communication eliminates the possibility of prior or off-board computation or operator intervention. This necessitates real-time on-board computation using noisy sensor data. To address these challenges, we propose a resilient architecture that exploits redundancy and heterogeneity in sensing modalities. Further resilience is achieved by triggering recovery behaviors upon failure. We propose a fast settling algorithm to generate robust multi-fidelity traversability estimates in real-time. The proposed approach was deployed on multiple physical systems including skid-steer and tracked robots, a high-speed RC car and legged robots, as a part of Team CoSTAR's effort to the DARPA Subterranean Challenge, where the team won 2nd and 1st place in the Tunnel and Urban Circuits, respectively.) <|cite_end|>, the robot is settled on ground points at a given query pose by fitting a plane on the cloud below the robot footprint. Several traversability metrics are computed from the settled pose and surface cloud. The authors of <|cite_start|> (Reference: Fast Approximate Clearance Evaluation for Rovers with Articulated Suspension Systems: We present a light-weight body-terrain clearance evaluation algorithm for the automated path planning of NASA's Mars 2020 rover. Extraterrestrial path planning is challenging due to the combination of terrain roughness and severe limitation in computational resources. Path planning on cluttered and/or uneven terrains requires repeated safety checks on all the candidate paths at a small interval. Predicting the future rover state requires simulating the vehicle settling on the terrain, which involves an inverse-kinematics problem with iterative nonlinear optimization under geometric constraints. However, such expensive computation is intractable for slow spacecraft computers, such as RAD750, which is used by the Curiosity Mars rover and upcoming Mars 2020 rover. We propose the Approximate Clearance Evaluation (ACE) algorithm, which obtains conservative bounds on vehicle clearance, attitude, and suspension angles without iterative computation. It obtains those bounds by estimating the lowest and highest heights that each wheel may reach given the underlying terrain, and calculating the worst-case vehicle configuration associated with those extreme wheel heights. The bounds are guaranteed to be conservative, hence ensuring vehicle safety during autonomous navigation. ACE is planned to be used as part of the new onboard path planner of the Mars 2020 rover. This paper describes the algorithm in detail and validates our claim of conservatism and fast computation through experiments.) <|cite_end|> propose a fast approximate settling for a Mars rover with articulated suspension. 
They infer the worst-case vehicle configuration from upper and lower bounds for each wheel height and assign metrics such as ground clearance. A more general approach is taken by <|cite_start|> (Reference: Design and comparative evaluation of an iterative contact point estimation method for static stability estimation of mobile actively reconfigurable robots: ) <|cite_end|>. They propose a fast iterative method that finds the robot pose and contact points based on a geometric model of the robot and a least-squares approximation of the ground. The quality of the estimation depends on how well the terrain below the robot is approximated by a plane.
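The plane approximation underlying this class of methods can be sketched in a few lines of Python; the following least-squares fit is purely illustrative (it is not the cited implementation), and its residual directly exposes how well the local terrain is explained by a plane:
\begin{verbatim}
import numpy as np

def fit_support_plane(points):
    """Least-squares plane fit to terrain points below the robot footprint.

    points: (N, 3) array of ground points expressed in the world frame.
    """
    centroid = points.mean(axis=0)
    # The right singular vector belonging to the smallest singular value of
    # the centered points is the least-squares plane normal.
    _, sing_vals, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:
        normal = -normal  # let the normal point upwards
    # RMS point-to-plane distance: large values indicate that a single plane
    # is a poor model of the local terrain.
    rms_residual = sing_vals[-1] / np.sqrt(points.shape[0])
    # Roll and pitch of a body whose z-axis is aligned with the normal
    # (ZYX Euler convention, yaw taken from the query pose and set to 0 here).
    pitch = np.arctan2(normal[0], normal[2])
    roll = -np.arcsin(np.clip(normal[1], -1.0, 1.0))
    return centroid, normal, roll, pitch, rms_residual
\end{verbatim}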
The authors of <|cite_start|> (Reference: Pose prediction for mobile ground robots in uneven terrain based on difference of heightmaps: For traversing uneven terrain in degraded environments, determining the static stability and consequently the tip-over risk of a mobile ground rescue robot is fundamental for planning and evaluation of paths. This paper presents a novel iterative geometric method that reduces the problem of robot pose prediction to two-dimensional image-processing operations by introducing the concept of a robot heightmap. The presented method requires only geometrical and mass information extracted from the widely used unified robot description format (URDF) to compute the robot heightmap, which makes it transferable to a wide range of mobile robot platforms without modification. We demonstrate that the approach accurately predicts the real robot's 6D pose at the input x-y-coordinates. Runtimes allowing the evaluation of poses in the order of ten thousand poses per second show that the method is computationally efficient enough to be used in online path planning.) <|cite_end|> avoid this issue by settling the robot directly on a heightmap. By introducing the concept of robot heightmaps, they reduce the pose prediction problem to a sequence of image operations for a fast and accurate computation of poses and contacts with a similar speed to <|cite_start|> (Reference: Design and comparative evaluation of an iterative contact point estimation method for static stability estimation of mobile actively reconfigurable robots: ) <|cite_end|>. However, using heightmaps as the model for robot and environment comes with the drawbacks of a coarse representation of vertical edges and robot features due to the cell resolution and the limitation to planning problems on a single level in the environment. <|cite_start|> (Reference: Pose consistency kkt-loss for weakly supervised learning of robot-terrain interaction model: We address the problem of self-supervised learning for predicting the shape of supporting terrain (i.e. the terrain which will provide rigid support for the robot during its traversal) from sparse input measurements. The learning method exploits two types of ground-truth labels: dense 2.5D maps and robot poses, both estimated by a usual SLAM procedure from offline recorded measurements. We show that robot poses are required because straightforward supervised learning from the 3D maps only suffers from: (i) exaggerated height of the supporting terrain caused by terrain flexibility (vegetation, shallow water, snow or sand) and (ii) missing or noisy measurements caused by high spectral absorbance or non-Lambertian reflectance of the measured surface. We address the learning from robot poses by introducing a novel KKT-loss, which emerges as the distance from necessary Karush-Kuhn-Tucker conditions for constrained local optima of a simplified first-principle model of the robot-terrain interaction. We experimentally verify that the proposed weakly supervised learning from ground-truth robot poses boosts the accuracy of predicted support heightmaps and increases the accuracy of estimated robot poses. All experiments are conducted on a dataset captured by a real platform. Both the dataset and codes which replicates experiments in the paper are made publicly available as a part of the submission.) <|cite_end|> proposes a learning-based approach that uses self-supervised learning to predict the shape of the supporting terrain. 
They define the pose estimation problem as the minimization of potential energy and train a pose regressor network that predicts the robot pose given the heightmap.
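Written compactly, this settling problem is a constrained minimization of potential energy (our paraphrase, with roll $\phi$ and pitch $\theta$, for a query position $(x, y)$ and fixed yaw):
\[
(z^{*}, \phi^{*}, \theta^{*}) \;=\; \arg\min_{z,\,\phi,\,\theta} \; m\, g\, z_{\mathrm{com}}(z, \phi, \theta) \quad \text{s.t.} \quad \text{the robot body does not penetrate the terrain},
\]
and it is the Karush-Kuhn-Tucker conditions of this kind of program that the cited loss penalizes during training.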
In the following section, we present a novel iterative geometric approach to predict the robot-terrain interaction. The method achieves fast computation times with consistent results because, unlike physics simulation approaches, it does not have to step through simulated time.
By directly modeling the kinematics of the robot in a robot-agnostic manner, the pose prediction takes the current joint configuration into account and generalizes to other robot platforms.
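To illustrate why a distance-field representation supports contact queries that are finer than the voxel resolution, the following sketch (illustrative Python only, not the released implementation) trilinearly interpolates a dense \ac{esdf} and projects a query point onto the nearest surface:
\begin{verbatim}
import numpy as np

def interpolate_sdf(sdf, origin, voxel_size, p):
    """Trilinear interpolation of a dense ESDF grid at an arbitrary point p.

    sdf: 3D numpy array of Euclidean distances, origin: world position of
    voxel (0, 0, 0), voxel_size: edge length. Boundary checks are omitted.
    """
    q = (p - origin) / voxel_size
    i0 = np.floor(q).astype(int)
    f = q - i0
    d = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[0] if dx else 1.0 - f[0]) * \
                    (f[1] if dy else 1.0 - f[1]) * \
                    (f[2] if dz else 1.0 - f[2])
                d += w * sdf[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return d

def project_to_surface(sdf, origin, voxel_size, p, eps=1e-3):
    """Move p onto the closest surface: for a Euclidean SDF, the gradient is
    a unit vector pointing away from the surface, so p - d(p) * grad reaches
    the surface with sub-voxel accuracy."""
    d = interpolate_sdf(sdf, origin, voxel_size, p)
    grad = np.array([
        (interpolate_sdf(sdf, origin, voxel_size, p + eps * e)
         - interpolate_sdf(sdf, origin, voxel_size, p - eps * e)) / (2.0 * eps)
        for e in np.eye(3)])
    n = grad / (np.linalg.norm(grad) + 1e-12)
    return p - d * n
\end{verbatim}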
We achieve a high prediction accuracy by modeling the environment with the voxel-based \ac{esdf}. In this representation, each cell encodes the Euclidean distance to the closest surface, thereby enabling the localization of surfaces with sub-voxel accuracy. Moreover, the 3D representation allows for application in multi-level environments. <|paper_end|>
"<|reference_start|> Fast Approximate Clearance Evaluation for Rovers with Articulated Suspension Systems: We present a light-weight body-terrain clearance evaluation algorithm for the automated path planning of NASA's Mars 2020 rover. Extraterrestrial path planning is challenging due to the combination of terrain roughness and severe limitation in computational resources. Path planning on cluttered and/or uneven terrains requires repeated safety checks on all the candidate paths at a small interval. Predicting the future rover state requires simulating the vehicle settling on the terrain, which involves an inverse-kinematics problem with iterative nonlinear optimization under geometric constraints. However, such expensive computation is intractable for slow spacecraft computers, such as RAD750, which is used by the Curiosity Mars rover and upcoming Mars 2020 rover. We propose the Approximate Clearance Evaluation (ACE) algorithm, which obtains conservative bounds on vehicle clearance, attitude, and suspension angles without iterative computation. It obtains those bounds by estimating the lowest and highest heights that each wheel may reach given the underlying terrain, and calculating the worst-case vehicle configuration associated with those extreme wheel heights. The bounds are guaranteed to be conservative, hence ensuring vehicle safety during autonomous navigation. ACE is planned to be used as part of the new onboard path planner of the Mars 2020 rover. This paper describes the algorithm in detail and validates our claim of conservatism and fast computation through experiments. <|reference_end|>",
"<|reference_start|> Using Occupancy Grids for Mobile Robot Perception and navigation: An approach to robot perception and world modeling that uses a probabilistic tesselated representation of spatial information called the occupancy grid is reviewed. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid, using readings taken from several sensors over multiple points of view. The use of occupancy grids from mapping and for navigation is examined. Operations on occupancy grids and extensions of the occupancy grid framework are briefly considered.<<ETX>> <|reference_end|>",
"<|reference_start|> Design and comparative evaluation of an iterative contact point estimation method for static stability estimation of mobile actively reconfigurable robots: <|reference_end|>",
"<|reference_start|> Pose Estimation of Vehicles Over Uneven Terrain: This paper presents a method for pose estimation of off-road vehicles moving over uneven terrain. It determines the contact points between the wheels and the terrain, assuming rigid contacts between an arbitrary number of wheels and ground. The terrain is represented by a 3D points cloud, interpolated by a B-patch to provide a continuous terrain representation. The pose estimation problem is formulated as a rigid body contact problem for a given location of the vehicle's center of mass over the terrain and a given yaw angle. The contact points between the wheels and ground are determined by releasing the vehicle from a given point above the terrain, until the contact forces between the wheels and ground, and the gravitational force, reach equilibrium. The contact forces are calculated using singular value decomposition (SVD) of the deduced contact matrix. The proposed method is computationally efficient, allowing real time computation during motion, as demonstrated in several examples. Accurate pose estimations can be used for motion planning, stability analyses and traversability analyses over uneven terrain. <|reference_end|>"
] | [
1,
9,
11,
15
] | {"<|cite_1|>": "ss-1167587", "<|cite_2|>": "arxiv-167846", "<|cite_3|>": "arxiv-478630", "<|cite_4|>": "ss-1167588", "<|cite_5|>": "ss-1167589", "<|cite_6|>": "ss-1441111", "<|cite_7|>": "ss-1286444", "<|cite_8|>": "arxiv-109893", "<|cite_9|>": "ss-820448", "<|cite_10|>": "ss-1441111", "<|cite_11|>": "ss-1286444", "<|cite_12|>": "ss-1167590", "<|cite_13|>": "ss-1167588", "<|cite_14|>": "ss-1167590", "<|cite_15|>": "ss-1167591", "<|cite_16|>": "arxiv-194133", "<|cite_17|>": "arxiv-317344", "<|cite_18|>": "arxiv-167846", "<|cite_19|>": "ss-1167590", "<|cite_20|>": "ss-1167592", "<|cite_21|>": "ss-1167590", "<|cite_22|>": "ss-1167593"} |
1911.12012-0 | <|paper_start|> Title: Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness
Abstract: Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness: We present Uncertainty-aware Cascaded Stereo Network (UCS-Net) for 3D reconstruction from multiple RGB images. Multi-view stereo (MVS) aims to reconstruct fine-grained scene geometry from multi-view images. Previous learning-based MVS methods estimate per-view depth using plane sweep volumes with a fixed depth hypothesis at each plane; this generally requires densely sampled planes for desired accuracy, and it is very hard to achieve high-resolution depth. In contrast, we propose adaptive thin volumes (ATVs); in an ATV, the depth hypothesis of each plane is spatially varying, which adapts to the uncertainties of previous per-pixel depth predictions. Our UCS-Net has three stages: the first stage processes a small standard plane sweep volume to predict low-resolution depth; two ATVs are then used in the following stages to refine the depth with higher resolution and higher accuracy. Our ATV consists of only a small number of planes; yet, it efficiently partitions local depth ranges within learned small intervals. In particular, we propose to use variance-based uncertainty estimates to adaptively construct ATVs; this differentiable process introduces reasonable and fine-grained spatial partitioning. Our multi-stage framework progressively subdivides the vast scene space with increasing depth resolution and precision, which enables scene reconstruction with high completeness and accuracy in a coarse-to-fine fashion. We demonstrate that our method achieves superior performance compared with state-of-the-art benchmarks on various challenging datasets.
Introduction
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/teaser_final.pdf}
\caption{
Our UCS-Net leverages adaptive thin volumes (ATVs) to progressively
reconstruct a highly accurate high-resolution depth map through multiple stages.
We show the input RGB image, depth predictions with increasing sizes from three stages, and our final point cloud reconstruction
obtained by fusing multiple depth maps.
We also show local 2D slices of our ATVs around a pixel (red dot).
Note that our ATVs become thinner after each stage because of reduced uncertainty.
}
\label{fig:teaser}
\vspace{-6mm}
\end{figure}
Inferring 3D scene geometry from captured images is a core problem in computer vision and graphics
with applications in 3D visualization, scene understanding, robotics and autonomous driving.
Multi-view stereo (MVS) aims to reconstruct dense 3D representations from multiple images with calibrated cameras.
Inspired by the success of deep convolutional neural networks (CNN),
several learning-based MVS methods have been presented <|cite_start|> (Reference: SurfaceNet: An End-to-end 3D Neural Network for Multiview Stereopsis: This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency as well geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark.) <|cite_end|> <|cite_start|> (Reference: Learning a Multi-View Stereo Machine: We present a learnt system for multi-view stereopsis. In contrast to recent learning based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming geometric constraints, enabling reconstruction from much fewer images (even a single image) than required by classical approaches as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches as well as recent learning based methods.) <|cite_end|> <|cite_start|> (Reference: DeepMVS: Learning Multi-view Stereopsis: We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet based methods, particularly for near-textureless regions and thin structures.) <|cite_end|> <|cite_start|> (Reference: BA-Net: Dense Bundle Adjustment Network: This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. 
feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method.) <|cite_end|>;
the most recent work leverages cost volumes in a learning pipeline <|cite_start|> (Reference: MVSNet: Depth Inference for Unstructured Multi-view Stereo: We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.) <|cite_end|> <|cite_start|> (Reference: DPSNet: End-to-end Deep Plane Sweep Stereo: Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches for dense depth reconstruction. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the dense depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.) <|cite_end|>,
and outperforms many traditional MVS methods <|cite_start|> (Reference: {Accurate, Dense, and Robust Multiview Stereopsis: This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and "crowded" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.) <|cite_end|>.
At the core of the recent success on MVS <|cite_start|> (Reference: MVSNet: Depth Inference for Unstructured Multi-view Stereo: We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.) <|cite_end|> <|cite_start|> (Reference: DPSNet: End-to-end Deep Plane Sweep Stereo: Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches for dense depth reconstruction. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the dense depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.) <|cite_end|>is the application of 3D CNNs on plane sweep cost volumes
to effectively infer multi-view correspondence.
However, such 3D CNNs involve massive memory usage for depth estimation with high accuracy and completeness.
In particular, for a large scene, high accuracy requires sampling a large number of sweeping planes and
high completeness requires reconstructing high-resolution depth maps.
In general, given limited memory, there is an undesired trade-off between accuracy (more planes) and completeness (more pixels) in previous work <|cite_start|> (Reference: MVSNet: Depth Inference for Unstructured Multi-view Stereo: We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.) <|cite_end|> <|cite_start|> (Reference: DPSNet: End-to-end Deep Plane Sweep Stereo: Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches for dense depth reconstruction. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the dense depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.) <|cite_end|>.
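To make this trade-off concrete, the short Python sketch below estimates the memory footprint of a single float32 feature cost volume; the feature channel count and the two resolutions are illustrative assumptions rather than values taken from any particular network.
\begin{verbatim}
def cost_volume_gbytes(height, width, num_planes, channels=32, bytes_per_value=4):
    """Approximate size of one float32 feature cost volume in gigabytes."""
    return height * width * num_planes * channels * bytes_per_value / 1024**3

# Illustrative numbers only: a quarter-resolution volume with a dense set of
# planes versus a full-resolution volume with the same number of planes.
print(cost_volume_gbytes(128, 160, 256))   # ~0.6 GB
print(cost_volume_gbytes(512, 640, 256))   # ~10 GB -- memory grows linearly in
                                           # both pixel count and plane count
\end{verbatim}
Intermediate activations of the 3D CNN add further overhead, which is why prior work must reduce either the plane count or the output resolution.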
\comment{
For example, to ensure enough accuracy, MVSNet <|cite_start|> (Reference: MVSNet: Depth Inference for Unstructured Multi-view Stereo: We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.) <|cite_end|>leverages a dense set of sweeping planes -- 256 planes;
however, the network can only reconstruct depth maps with a resolution that is just $1/16$ to the original image resolution (i.e. $1/4$ to each dimension),
which limits the completeness of the reconstruction.
In general, it is highly challenging to reconstruct high-resolution depth without losing accuracy.
}
Our goal is to achieve \textit{highly accurate and highly complete reconstruction} with \textit{low memory and computation cost} at the same time.
To do so, we propose a novel learning-based uncertainty-aware multi-view stereo framework,
which utilizes multiple small volumes, instead of a large standard plane sweep volume,
to progressively regress high-quality depth in a coarse-to-fine fashion.
A key element of our method is the proposed adaptive thin volumes (ATVs, see Fig.~\ref{fig:teaser}), which enable efficient spatial partitioning.
Specifically, we propose a novel cascaded network with three stages (see Fig.~\ref{fig:ucnet}):
each stage of the cascade predicts a depth map at a different spatial resolution;
each following stage constructs an ATV to refine the depth predicted by the previous stage, with higher pixel resolution and finer depth partitioning.
The first stage uses a small standard plane sweep volume with low image resolution and relatively sparse depth planes
-- 64 planes that are fewer than the number of planes (256 or 512) in previous work <|cite_start|> (Reference: MVSNet: Depth Inference for Unstructured Multi-view Stereo: We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.) <|cite_end|> <|cite_start|> (Reference: Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference: Deep learning has recently demonstrated its excellent performance for multi-view stereo (MVS). However, one major limitation of current learned MVS approaches is the scalability: the memory-consuming cost volume regularization makes the learned MVS hard to be applied to high-resolution scenes. In this paper, we introduce a scalable multi-view stereo framework based on the recurrent neural network. Instead of regularizing the entire 3D cost volume in one go, the proposed Recurrent Multi-view Stereo Network (R-MVSNet) sequentially regularizes the 2D cost maps along the depth direction via the gated recurrent unit (GRU). This reduces dramatically the memory consumption and makes high-resolution reconstruction feasible. We first show the state-of-the-art performance achieved by the proposed R-MVSNet on the recent MVS benchmarks. Then, we further demonstrate the scalability of the proposed method on several large-scale scenarios, where previous learned approaches often fail due to the memory constraint. Code is available at https://github.com/YoYo000/MVSNet.) <|cite_end|>;
the following two stages use ATVs with higher image resolutions and significantly fewer depth planes -- only 32 and 8 planes.
While consisting of a very small number of planes, our ATVs are constructed within \emph{learned local depth ranges},
which enables \emph{efficient and fine-grained spatial partitioning} for accurate and complete depth reconstruction.
This is made possible by the novel uncertainty-aware construction of an ATV.
In particular, we leverage the variance of the predicted per-pixel depth probability distributions,
and infer per-pixel uncertainty intervals (as shown in Fig.~\ref{fig:teaser})
by computing variance-based confidence intervals of these distributions, which we use to construct the ATV.
Specifically, we use the previously predicted depth map as a central curved plane,
and construct an ATV around this central plane within the local per-pixel uncertainty intervals.
In this way, we explicitly express the uncertainty of the depth prediction at one stage,
and embed this knowledge into the input volume for the next stage.
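The construction described above can be summarized in a few lines of NumPy. The following is a minimal sketch under stated assumptions, not the exact implementation: the interval scale \texttt{lam} and the uniform spacing of hypotheses inside the confidence interval are illustrative choices.
\begin{verbatim}
import numpy as np

def depth_and_uncertainty(prob_volume, depth_hypotheses):
    # prob_volume:      (D, H, W) per-pixel probabilities over the D hypotheses
    # depth_hypotheses: (D, H, W) depth value of each hypothesis at each pixel
    mean = np.sum(prob_volume * depth_hypotheses, axis=0)               # (H, W)
    var = np.sum(prob_volume * (depth_hypotheses - mean) ** 2, axis=0)  # (H, W)
    return mean, np.sqrt(var)

def build_atv_hypotheses(prev_depth, prev_std, num_planes, lam=1.5):
    # Per-pixel hypotheses spaced uniformly inside the variance-based
    # confidence interval [mean - lam*std, mean + lam*std] around the
    # (upsampled) depth prediction of the previous stage.
    lower = prev_depth - lam * prev_std                                 # (H, W)
    upper = prev_depth + lam * prev_std                                 # (H, W)
    steps = np.linspace(0.0, 1.0, num_planes).reshape(-1, 1, 1)         # (D', 1, 1)
    return lower[None] + steps * (upper - lower)[None]                  # (D', H, W)
\end{verbatim}
In the framework described above, the number of planes shrinks from 64 in the first stage to 32 and 8 in the two ATV stages, while the per-pixel depth ranges shrink with the estimated uncertainty.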
Our variance-based uncertainty estimation is differentiable, and
we train our UCS-Net end to end with depth supervision on the predicted depths from all three stages.
Our network can thus learn to optimize the estimated uncertainty intervals,
so that each ATV covers a depth range that is large enough to contain the ground-truth depth,
yet small enough to enable accurate reconstruction in the following stages.
Overall, our multi-stage framework progressively subdivides the local space at increasingly fine scales,
leading to high-quality depth reconstruction.
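As a concrete illustration of the end-to-end training with per-stage depth supervision, one could accumulate a masked depth loss over the three stages as sketched below; the L1 form and the stage weights are hypothetical choices for illustration and are not prescribed by the text above.
\begin{verbatim}
import numpy as np

def multi_stage_depth_loss(pred_depths, gt_depths, valid_masks,
                           stage_weights=(0.5, 1.0, 2.0)):
    # pred_depths / gt_depths / valid_masks: one (H_s, W_s) array per stage.
    # Every stage's prediction receives direct depth supervision.
    total = 0.0
    for pred, gt, mask, w in zip(pred_depths, gt_depths, valid_masks,
                                 stage_weights):
        per_pixel = np.abs(pred - gt) * mask       # ignore invalid pixels
        total += w * per_pixel.sum() / max(mask.sum(), 1.0)
    return total
\end{verbatim}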
We demonstrate that our novel UCS-Net outperforms the state-of-the-art learning-based MVS methods on various datasets.
Related Work
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{images/network_v2.pdf}
\caption{
Overview of our UCS-Net.
Our UCS-Net leverages multi-scale cost volumes to achieve coarse-to-fine depth prediction with three cascade stages.
The cost volumes are constructed using multi-scale deep image features from a multi-scale feature extractor.
The last two stages utilize the uncertainty of the previous depth prediction to build adaptive thin volumes (ATVs)
for depth reconstruction at a finer scale. We mark different parts of the network in different colors.
Please refer to Sec.~\ref{sec:method} and the corresponding subsections for more details.
}
\label{fig:ucnet}
\end{figure*}
Multi-view stereo is a long-studied vision problem with many traditional approaches <|cite_start|> (Reference: A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms: This paper presents a quantitative comparison of several multi-view stereo reconstruction algorithms. Until now, the lack of suitable calibrated multi-view image datasets with known ground truth (3D shape models) has prevented such direct comparisons. In this paper, we first survey multi-view stereo algorithms and compare them qualitatively using a taxonomy that differentiates their key properties. We then describe our process for acquiring and calibrating multiview image datasets with high-accuracy ground truth and introduce our evaluation methodology. Finally, we present the results of our quantitative comparison of state-of-the-art multi-view stereo reconstruction algorithms on six benchmark datasets. The datasets, evaluation details, and instructions for submitting new models are available online at http://vision.middlebury.edu/mview.) <|cite_end|> <|cite_start|> (Reference: Variational stereovision and 3D scene flow estimation with statistical similarity measures: We present a common variational framework for dense depth recovery and dense three-dimensional motion field estimation from multiple video sequences, which is robust to camera spectral sensitivity differences and illumination changes. For this purpose, we first show that both problems reduce to a generic image matching problem after backprojecting the input images onto suitable surfaces. We then solve this matching problem in the case of statistical similarity criteria that can handle frequently occurring nonaffine image intensities dependencies. Our method leads to an efficient and elegant implementation based on fast recursive filters. We obtain good results on real images.) <|cite_end|> <|cite_start|> (Reference: A Theory of Shape by Space Carving: ) <|cite_end|> <|cite_start|> (Reference: Multi-camera Scene Reconstruction via Graph Cuts: ) <|cite_end|> <|cite_start|> (Reference: Handling occlusions in dense multi-view stereo: While stereo matching was originally formulated as the recovery of 3D shape from a pair of images, it is now generally recognized that using more than two images can dramatically improve the quality of the reconstruction. Unfortunately, as more images are added, the prevalence of semi-occluded regions (pixels visible in some but not all images) also increases. We propose some novel techniques to deal with this problem. Our first idea is to use a combination of shiftable windows and a dynamically selected subset of the neighboring images to do the matches. Our second idea is to explicitly label occluded pixels within a global energy minimization framework, and to reason about visibility within this framework so that only truly visible pixels are matched. Experimental results show a dramatic improvement using the first idea over conventional multibaseline stereo, especially when used in conjunction with a global energy minimization technique. These results also show that explicit occlusion labeling and visibility reasoning do help, but not significantly, if the spatial and temporal selection is applied first.) <|cite_end|> <|cite_start|> (Reference: Silhouette and stereo fusion for 3D object modeling: We present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. 
The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multistereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multigrid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.) <|cite_end|> <|cite_start|> (Reference: Poxels: Probabilistic voxelized volume reconstruction: This paper examines the problem of reconstructing a voxelized representation of 3D space from a series of images. An iterative algorithm is used to find the scene model which jointly explains all the observed images by determining which region of space is responsible for each of the observations. The current approach formulates the problem as one of optimization over estimates of these responsibilities. The process converges to a distribution of responsibility which accurately reflects the constraints provided by the observations, the positions and shape of both solid and transparent objects, and the uncertainty which remains. Reconstruction is robust, and gracefully represents regions of space in which there is little certainty about the exact structure due to limited, non-existent, or contradicting data. Rendered images of voxel spaces recovered from synthetic and real observation images are shown.) <|cite_end|> <|cite_start|> (Reference: {Accurate, Dense, and Robust Multiview Stereopsis: This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and "crowded" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.) <|cite_end|> <|cite_start|> (Reference: Pixelwise View Selection for Unstructured Multi-View Stereo: ) <|cite_end|>.
Our learning-based framework leverages a novel spatial representation, the ATV, to reconstruct high-quality depth for fine-grained scene reconstruction.
Here, we mainly discuss spatial representations for 3D reconstruction and deep-learning-based multi-view stereo.
\noindent\textbf{Spatial Representation for 3D Reconstruction. }
Existing methods can be categorized based on learned 3D representations.
Volumetric based approaches partition the space into a regular 3D volume with millions of small voxels <|cite_start|> (Reference: SurfaceNet: An End-to-end 3D Neural Network for Multiview Stereopsis: This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency as well geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark.) <|cite_end|> <|cite_start|> (Reference: Learning a Multi-View Stereo Machine: We present a learnt system for multi-view stereopsis. In contrast to recent learning based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming geometric constraints, enabling reconstruction from much fewer images (even a single image) than required by classical approaches as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches as well as recent learning based methods.) <|cite_end|> <|cite_start|> (Reference: Learning Shape Priors for Single-View 3D Completion and Reconstruction: The problem of single-view 3D shape completion or reconstruction is challenging, because among the many possible shapes that explain an observation, most are implausible and do not correspond to natural objects. Recent research in the field has tackled this problem by exploiting the expressiveness of deep convolutional networks. In fact, there is another level of ambiguity that is often overlooked: among plausible shapes, there are still multiple shapes that fit the 2D image equally well; i.e., the ground truth shape is non-deterministic given a single-view input. Existing fully supervised approaches fail to address this issue, and often produce blurry mean shapes with smooth surfaces but no fine details. In this paper, we propose ShapeHD, pushing the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors. The learned priors serve as a regularizer, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth. Our design thus overcomes both levels of ambiguity aforementioned. Experiments demonstrate that ShapeHD outperforms state of the art by a large margin in both shape completion and shape reconstruction on multiple real datasets.) <|cite_end|> <|cite_start|> (Reference: Learning to Reconstruct Shapes from Unseen Classes: From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. 
Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.) <|cite_end|> <|cite_start|> (Reference: Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers: In this paper, we develop novel, efficient 2D encodings for 3D geometry, which enable reconstructing full 3D shapes from a single image at high resolution. The key idea is to pose 3D shape reconstruction as a 2D prediction problem. To that end, we first develop a simple baseline network that predicts entire voxel tubes at each pixel of a reference view. By leveraging well-proven architectures for 2D pixel-prediction tasks, we attain state-of-the-art results, clearly outperforming purely voxel-based approaches. We scale this baseline to higher resolutions by proposing a memory-efficient shape encoding, which recursively decomposes a 3D shape into nested shape layers, similar to the pieces of a Matryoshka doll. This allows reconstructing highly detailed shapes with complex topology, as demonstrated in extensive experiments; we clearly outperform previous octree-based approaches despite having a much simpler architecture using standard network components. Our Matryoshka networks further enable reconstructing shapes from IDs or shape similarity, as well as shape sampling.) <|cite_end|>, and the network predicts if a voxel is on the surface or not.
Ray tracing can be incorporated into this voxelized structure <|cite_start|> (Reference: Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency: We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.) <|cite_end|> <|cite_start|> (Reference: RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials: In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.) <|cite_end|> <|cite_start|> (Reference: Towards probabilistic volumetric reconstruction using ray potentials: This paper presents a novel probabilistic foundation for volumetric 3D reconstruction. We formulate the problem as inference in a Markov random field, which accurately captures the dependencies between the occupancy and appearance of each voxel, given all input images. Our main contribution is an approximate highly parallelized discrete-continuous inference algorithm to compute the marginal distributions of each voxel's occupancy and appearance. In contrast to the MAP solution, marginals encode the underlying uncertainty and ambiguity in the reconstruction. Moreover, the proposed algorithm allows for a Bayes optimal prediction with respect to a natural reconstruction loss. We compare our method to two state-of-the-art volumetric reconstruction algorithms on three challenging aerial datasets with LIDAR ground truth. Our experiments demonstrate that the proposed algorithm compares favorably in terms of reconstruction accuracy and the ability to expose reconstruction uncertainty.) <|cite_end|>.
The main drawback of these methods is their computational and memory inefficiency, since most voxels are not on the surface.
Researchers have also tried to reconstruct point clouds <|cite_start|> (Reference: Unsupervised Learning of Shape and Pose with Differentiable Point Clouds: We address the problem of learning accurate 3D shape and camera pose from a collection of unlabeled category-specific images. We train a convolutional network to predict both the shape and the pose from a single image by minimizing the reprojection error: given several views of an object, the projections of the predicted shapes to the predicted camera poses should match the provided views. To deal with pose ambiguity, we introduce an ensemble of pose predictors which we then distill to a single "student" model. To allow for efficient learning of high-fidelity shapes, we represent the shapes by point clouds and devise a formulation allowing for differentiable projection of these. Our experiments show that the distilled ensemble of pose predictors learns to estimate the pose accurately, while the point cloud representation allows to predict detailed shape models. The supplementary video can be found at https://www.youtube.com/watch?v=LuIGovKeo60) <|cite_end|> <|cite_start|> (Reference: {Accurate, Dense, and Robust Multiview Stereopsis: This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and "crowded" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.) <|cite_end|> <|cite_start|> (Reference: A quasi-dense approach to surface reconstruction from uncalibrated images: This paper proposes a quasi-dense approach to 3D surface model acquisition from uncalibrated images. First, correspondence information and geometry are computed based on new quasi-dense point features that are resampled subpixel points from a disparity map. The quasi-dense approach gives more robust and accurate geometry estimations than the standard sparse approach. The robustness is measured as the success rate of full automatic geometry estimation with all involved parameters fixed. The accuracy is measured by a fast gauge-free uncertainty estimation algorithm. The quasi-dense approach also works for more largely separated images than the sparse approach, therefore, it requires fewer images for modeling. 
More importantly, the quasi-dense approach delivers a high density of reconstructed 3D points on which a surface representation can be reconstructed. This fills the gap of insufficiency of the sparse approach for surface reconstruction, essential for modeling and visualization applications. Second, surface reconstruction methods from the given quasi-dense geometry are also developed. The algorithm optimizes new unified functionals integrating both 3D quasi-dense points and 2D image information, including silhouettes. Combining both 3D data and 2D images is more robust than the existing methods using only 2D information or only 3D data. An efficient bounded regularization method is proposed to implement the surface evolution by level-set methods. Its properties are discussed and proven for some cases. As a whole, a complete automatic and practical system of 3D modeling from raw images captured by hand-held cameras to surface representation is proposed. Extensive experiments demonstrate the superior performance of the quasi-dense approach with respect to the standard sparse approach in robustness, accuracy, and applicability.) <|cite_end|> <|cite_start|> (Reference: MVPNet: Multi-View Point Regression Networks for 3D Object Reconstruction from A Single Image: In this paper, we address the problem of reconstructing an object's surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on an image plane of a viewpoint, making the point cloud convolution-favored and ordered so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting connectivities of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such kind of multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that is able to interpret discrepancy over 3D surfaces as opposed to 2D projective planes, resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.) <|cite_end|> <|cite_start|> (Reference: Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction: Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.) 
<|cite_end|> <|cite_start|> (Reference: Learning Representations and Generative Models for 3D Point Clouds: Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.) <|cite_end|>, however the high dimensionality of a point cloud often results in noisy outliers since a point cloud does not efficiently encode connectivity between points.
Some recent works utilize single or multiple images to reconstruct a point cloud given strong shape priors <|cite_start|> (Reference: A Point Set Generation Network for 3D Object Reconstruction from a Single Image: Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output -- point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3d reconstruction benchmarks; but it also shows a strong performance for 3d shape completion and promising ability in making multiple plausible predictions.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Learning of Shape and Pose with Differentiable Point Clouds: We address the problem of learning accurate 3D shape and camera pose from a collection of unlabeled category-specific images. We train a convolutional network to predict both the shape and the pose from a single image by minimizing the reprojection error: given several views of an object, the projections of the predicted shapes to the predicted camera poses should match the provided views. To deal with pose ambiguity, we introduce an ensemble of pose predictors which we then distill to a single "student" model. To allow for efficient learning of high-fidelity shapes, we represent the shapes by point clouds and devise a formulation allowing for differentiable projection of these. Our experiments show that the distilled ensemble of pose predictors learns to estimate the pose accurately, while the point cloud representation allows to predict detailed shape models. The supplementary video can be found at https://www.youtube.com/watch?v=LuIGovKeo60) <|cite_end|> <|cite_start|> (Reference: Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction: Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. 
Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.) <|cite_end|>, which cannot be directly extended to large-scale scene reconstruction.
Recent work also tried to directly reconstruct surface meshes <|cite_start|> (Reference: From Point Clouds to Mesh Using Regression: Surface reconstruction from a point cloud is a standard subproblem in many algorithms for dense 3D reconstruction from RGB images or depth maps. Methods, performing only local operations in the vicinity of individual points, are very fast, but reconstructed models typically contain lots of holes. On the other hand, regularized volumetric approaches, formulated as a global optimization, are typically too slow for real-time interactive applications. We propose to use a regression forest based method, which predicts the projection of a grid point to the surface, depending on the spatial configuration of point density in the grid point neighborhood. We designed a suitable feature vector and efficient oct-tree based GPU evaluation, capable of predicting surface of high resolution 3D models in milliseconds. Our method learns and predicts surfaces from an observed point cloud sparser than the evaluation grid, and therefore effectively acts as a regularizer.) <|cite_end|> <|cite_start|> (Reference: Learning Category-Specific Mesh Reconstruction from Image Collections: We present a learning framework for recovering the 3D shape, camera, and texture of an object from a single image. The shape is represented as a deformable 3D mesh model of an object category where a shape is parameterized by a learned mean shape and per-instance predicted deformation. Our approach allows leveraging an annotated image collection for training, where the deformable model and the 3D prediction mechanism are learned without relying on ground-truth 3D or multi-view supervision. Our representation enables us to go beyond existing 3D prediction approaches by incorporating texture inference as prediction of an image in a canonical appearance space. Additionally, we show that semantic keypoints can be easily associated with the predicted shapes. We present qualitative and quantitative results of our approach on CUB and PASCAL3D datasets and show that we can learn to predict diverse shapes and textures across objects using only annotated image collections. The project website can be found at https://akanazawa.github.io/cmr/.) <|cite_end|> <|cite_start|> (Reference: Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images: We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.) 
<|cite_end|> <|cite_start|> (Reference: Learning single-image 3D reconstruction by generative modelling of shape, pose and shading: We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.) <|cite_end|> <|cite_start|> (Reference: SurfNet: Generating 3D shape surfaces using deep residual networks: 3D shape models are naturally parameterized using vertices and faces, \ie, composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent `geometry images' representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images.) <|cite_end|> <|cite_start|> (Reference: Neural 3D Mesh Renderer: For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. 
However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.) <|cite_end|>, deformable shapes <|cite_start|> (Reference: End-to-end Recovery of Human Shape and Pose: We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allow our model to be trained using images in-the-wild that only have ground truth 2D annotations. However, the reprojection loss alone leaves the model highly under constrained. In this work we address this problem by introducing an adversary trained to tell whether a human body parameter is real or not using a large database of 3D human meshes. We show that HMR can be trained with and without using any paired 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detections and infer 3D pose and shape parameters directly from image pixels. Our model runs in real-time given a bounding box containing the person. We demonstrate our approach on various images in-the-wild and out-perform previous optimization based methods that output 3D meshes and show competitive results on tasks such as 3D joint location estimation and part segmentation.) <|cite_end|> <|cite_start|> (Reference: Learning Category-Specific Mesh Reconstruction from Image Collections: We present a learning framework for recovering the 3D shape, camera, and texture of an object from a single image. The shape is represented as a deformable 3D mesh model of an object category where a shape is parameterized by a learned mean shape and per-instance predicted deformation. Our approach allows leveraging an annotated image collection for training, where the deformable model and the 3D prediction mechanism are learned without relying on ground-truth 3D or multi-view supervision. Our representation enables us to go beyond existing 3D prediction approaches by incorporating texture inference as prediction of an image in a canonical appearance space. Additionally, we show that semantic keypoints can be easily associated with the predicted shapes. We present qualitative and quantitative results of our approach on CUB and PASCAL3D datasets and show that we can learn to predict diverse shapes and textures across objects using only annotated image collections. The project website can be found at https://akanazawa.github.io/cmr/.) 
<|cite_end|>, and some learned implicit distance functions <|cite_start|> (Reference: Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis: We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution -- but complete -- output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion method, the main contribution in our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.) <|cite_end|> <|cite_start|> (Reference: OctNetFusion: Learning Depth Fusion from Data: In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.) <|cite_end|> <|cite_start|> (Reference: Occupancy Networks: Learning 3D Reconstruction in Function Space: With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. 
Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.) <|cite_end|> <|cite_start|> (Reference: Learning Implicit Fields for Generative Shape Modeling: We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our implicit decoder for representation learning (via IM-AE) and shape generation (via IM-GAN), we demonstrate superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality. Code and supplementary material are available at https://github.com/czq142857/implicit-decoder.) <|cite_end|>. These reconstructed surfaces often look smoother than point-cloud-based approaches, but often lack high-frequency details.
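For illustration, the following is a minimal sketch of what a learned implicit function looks like in practice: a small MLP maps a 3D query point and a latent shape code to an occupancy score, and a mesh can then be extracted from a scored grid. The layer sizes, the latent-code conditioning, and the marching-cubes extraction mentioned in the comment are illustrative assumptions, not the architectures of the works cited above.
\begin{verbatim}
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Toy occupancy decoder: f(3D point, shape code) -> occupancy in [0, 1]."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, shape_code):
        # points: (N, 3); shape_code: (code_dim,) shared by all query points
        code = shape_code.expand(points.shape[0], -1)
        return torch.sigmoid(self.net(torch.cat([points, code], dim=-1)))

decoder = ImplicitDecoder()
shape_code = torch.randn(128)                      # latent code of one shape
grid = torch.stack(torch.meshgrid(
    *([torch.linspace(-1, 1, 64)] * 3), indexing="ij"), dim=-1).reshape(-1, 3)
with torch.no_grad():
    occupancy = decoder(grid, shape_code).reshape(64, 64, 64).numpy()
# With a trained decoder, the surface is the 0.5 decision boundary, e.g.
# extractable with skimage.measure.marching_cubes(occupancy, level=0.5).
\end{verbatim}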
A depth map represents dense 3D information that is perfectly aligned with a reference view; depth reconstruction has been demonstrated in many previous works on reconstruction with both single view <|cite_start|> (Reference: Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture: In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.) <|cite_end|> <|cite_start|> (Reference: DeMoN: Depth and Motion Network for Learning Monocular Stereo: In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.) <|cite_end|> <|cite_start|> (Reference: Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue: A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Monocular Depth Estimation with Left-Right Consistency: Learning based methods have shown very promising results for the task of depth estimation in single images. 
However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Learning of Depth and Ego-Motion from Video: We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.) <|cite_end|>and multiple views <|cite_start|> (Reference: Using Multiple Hypotheses to Improve Depth-Maps for Multi-View Stereo: ) <|cite_end|> <|cite_start|> (Reference: Machine Vision and Applications Efficient Large-scale Multi-view Stereo for Ultra High-resolution Image Sets: We present a new approach for large-scale multi-view stereo matching, which is designed to operate on ultra high-resolution image sets and efficiently compute dense 3D point clouds. We show that, using a robust descrip-tor for matching purposes and high-resolution images, we can skip the computationally expensive steps that other algorithms require. As a result, our method has low memory requirements and low computational complexity while producing 3D point clouds containing virtually no outliers. This makes it exceedingly suitable for large-scale reconstruction. The core of our algorithm is the dense matching of image pairs using DAISY descriptors, implemented so as to eliminate redundancies and optimize memory access. We use a variety of challenging data sets to validate and compare our results against other algorithms.) <|cite_end|> <|cite_start|> (Reference: Learned Multi-Patch Similarity: Estimating a depth map from multiple views of a scene is a fundamental task in computer vision. As soon as more than two viewpoints are available, one faces the very basic question how to measure similarity across >2 image patches. 
Surprisingly, no direct solution exists, instead it is common to fall back to more or less robust averaging of two-view similarities. Encouraged by the success of machine learning, and in particular convolutional neural networks, we propose to learn a matching function which directly maps multiple image patches to a scalar similarity score. Experiments on several multi-view datasets demonstrate that this approach has advantages over methods based on pairwise patch similarity.) <|cite_end|> <|cite_start|> (Reference: Massively parallel multiview stereopsis by surface normal diffusion: We present a new, massively parallel method for high-quality multiview matching. Our work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized. Our main novelties are on the one hand to formulate Patchmatch in scene space, which makes it possible to aggregate image similarity across multiple views and obtain more accurate depth maps. And on the other hand a modified, diffusion-like propagation scheme that can be massively parallelized and delivers dense multiview correspondence over ten 1.9-Megapixel images in 3 seconds, on a consumer-grade GPU. Our method uses a slanted support window and thus has no fronto-parallel bias, it is completely local and parallel, such that computation time scales linearly with image size, and inversely proportional to the number of parallel threads. Furthermore, it has low memory footprint (four values per pixel, independent of the depth range). It therefore scales exceptionally well and can handle multiple large images at high depth resolution. Experiments on the DTU and Middlebury multiview datasets as well as oblique aerial images show that our method achieves very competitive results with high accuracy and completeness, across a range of different scenarios.) <|cite_end|> <|cite_start|> (Reference: Pixelwise View Selection for Unstructured Multi-View Stereo: ) <|cite_end|> <|cite_start|> (Reference: Relative camera refinement for accurate dense reconstruction: Multi-view stereo (MVS) depends on the pre-determined camera geometry, often from structure from motion (SfM) or simultaneous localization and mapping (SLAM). However, cameras may not be locally optimal for dense stereo matching, especially when it comes from the large scale SfM or the SLAM with multiple sensor fusion. In this paper, we propose a local camera refinement approach for accurate dense reconstruction. Firstly, we refines the relative geometry of independent camera pair using a tailored bundle adjustment. The refinement is also extended to a multi-view version for general MVS reconstructions. Then, the non-rigid dense alignment is formulated as an inverse-distortion problem to transfer point clouds from each local coordinate system to a global coordinate system. The proposed framework has been intensively validated in both SfM and SLAM based dense reconstructions. Results on different datasets show that our method can significantly improve the dense reconstruction quality.) <|cite_end|> <|cite_start|> (Reference: Pixelwise View Selection for Unstructured Multi-View Stereo: ) <|cite_end|>.
Some of them leverage normal information as well <|cite_start|> (Reference: Massively parallel multiview stereopsis by surface normal diffusion: We present a new, massively parallel method for high-quality multiview matching. Our work builds on the Patchmatch idea: starting from randomly generated 3D planes in scene space, the best-fitting planes are iteratively propagated and refined to obtain a 3D depth and normal field per view, such that a robust photo-consistency measure over all images is maximized. Our main novelties are on the one hand to formulate Patchmatch in scene space, which makes it possible to aggregate image similarity across multiple views and obtain more accurate depth maps. And on the other hand a modified, diffusion-like propagation scheme that can be massively parallelized and delivers dense multiview correspondence over ten 1.9-Megapixel images in 3 seconds, on a consumer-grade GPU. Our method uses a slanted support window and thus has no fronto-parallel bias, it is completely local and parallel, such that computation time scales linearly with image size, and inversely proportional to the number of parallel threads. Furthermore, it has low memory footprint (four values per pixel, independent of the depth range). It therefore scales exceptionally well and can handle multiple large images at high depth resolution. Experiments on the DTU and Middlebury multiview datasets as well as oblique aerial images show that our method achieves very competitive results with high accuracy and completeness, across a range of different scenarios.) <|cite_end|> <|cite_start|> (Reference: Just look at the image: Viewpoint-specific surface normal prediction for improved multi-view reconstruction: We present a multi-view reconstruction method that combines conventional multi-view stereo (MVS) with appearance-based normal prediction, to obtain dense and accurate 3D surface models. Reliable surface normals reconstructed from multi-view correspondence serve as training data for a convolutional neural network (CNN), which predicts continuous normal vectors from raw image patches. By training from known points in the same image, the prediction is specifically tailored to the materials and lighting conditions of the particular scene, as well as to the precise camera viewpoint. It is therefore a lot easier to learn than generic single-view normal estimation. The estimated normal maps, together with the known depth values from MVS, are integrated to dense depth maps, which in turn are fused into a 3D model. Experiments on the DTU dataset show that our method delivers 3D reconstructions with the same accuracy as MVS, but with significantly higher completeness.) <|cite_end|>.
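To make the view-aligned nature of a depth map concrete, a minimal sketch of back-projecting a depth map into camera-space 3D points is shown below; the intrinsic matrix values and the image size are made-up placeholders.
\begin{verbatim}
import numpy as np

def backproject(depth, K):
    """Lift an (H, W) depth map to per-pixel 3D points in the camera frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))      # pixel coordinates
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)    # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                      # K^{-1} [u, v, 1]^T
    return rays * depth[..., None]                       # X = d * K^{-1} [u, v, 1]^T

K = np.array([[500.0, 0.0, 320.0],                       # placeholder intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = backproject(np.full((480, 640), 2.0), K)        # (480, 640, 3) point map
\end{verbatim}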
\comment{Despite this advantage, depth value correlates with camera extrinsic parameters, making accurate estimation harder to achieve.
}
In this paper, we present ATV, a novel spatial representation for depth estimation; we use two ATVs to progressively partition local space, which is the key to achieving coarse-to-fine reconstruction.
\noindent\textbf{Deep Multi-View Stereo (MVS). } The traditional MVS pipeline mainly relies on photo-consistency constraints to infer the underlying 3D geometry, but usually performs poorly on texture-less or occluded areas, or under complex lighting environments.
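As a rough illustration of the photo-consistency idea (a sketch only, not any specific method discussed here), the snippet below scores a set of fronto-parallel depth hypotheses for a single reference pixel by reprojecting it into a source view and comparing intensities; the camera parameters, the absolute-difference cost, and the nearest-neighbour sampling are simplifying assumptions.
\begin{verbatim}
import numpy as np

def sweep_costs(I_ref, I_src, K, R, t, u, v, depths):
    """Photo-consistency cost of reference pixel (u, v) for each depth hypothesis.

    Each hypothesis d lifts the pixel to 3D, maps it into the source camera with
    (R, t), reprojects it, and compares intensities with a crude absolute
    difference; the best-supported depth is the one with the lowest cost.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    costs = []
    for d in depths:
        X_src = R @ (d * ray) + t                 # 3D point in the source frame
        p = K @ X_src
        us, vs = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
        if 0 <= vs < I_src.shape[0] and 0 <= us < I_src.shape[1]:
            costs.append(abs(float(I_ref[v, u]) - float(I_src[vs, us])))
        else:
            costs.append(np.inf)                  # hypothesis reprojects off-image
    return np.array(costs)

K = np.array([[400.0, 0.0, 160.0], [0.0, 400.0, 120.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])       # toy relative pose (baseline)
I_ref, I_src = np.random.rand(240, 320), np.random.rand(240, 320)
depths = np.linspace(0.5, 5.0, 64)
best_depth = depths[np.argmin(sweep_costs(I_ref, I_src, K, R, t, 100, 80, depths))]
\end{verbatim}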
To overcome such limitations, many deep learning-based MVS methods have emerged in the last two years, including regression-based approaches <|cite_start|> (Reference: MVSNet: Depth Inference for Unstructured Multi-view Stereo: We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.) <|cite_end|> <|cite_start|> (Reference: DPSNet: End-to-end Deep Plane Sweep Stereo: Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches for dense depth reconstruction. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the dense depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.) <|cite_end|>, classification-based approaches | [
"<|reference_start|> Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference: Deep learning has recently demonstrated its excellent performance for multi-view stereo (MVS). However, one major limitation of current learned MVS approaches is the scalability: the memory-consuming cost volume regularization makes the learned MVS hard to be applied to high-resolution scenes. In this paper, we introduce a scalable multi-view stereo framework based on the recurrent neural network. Instead of regularizing the entire 3D cost volume in one go, the proposed Recurrent Multi-view Stereo Network (R-MVSNet) sequentially regularizes the 2D cost maps along the depth direction via the gated recurrent unit (GRU). This reduces dramatically the memory consumption and makes high-resolution reconstruction feasible. We first show the state-of-the-art performance achieved by the proposed R-MVSNet on the recent MVS benchmarks. Then, we further demonstrate the scalability of the proposed method on several large-scale scenarios, where previous learned approaches often fail due to the memory constraint. Code is available at https://github.com/YoYo000/MVSNet. <|reference_end|>",
"<|reference_start|> Multi-camera Scene Reconstruction via Graph Cuts: <|reference_end|>",
"<|reference_start|> Unsupervised Learning of Depth and Ego-Motion from Video: We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings. <|reference_end|>",
"<|reference_start|> Pixelwise View Selection for Unstructured Multi-View Stereo: <|reference_end|>"
] | [
13,
17,
56,
63
] | {"<|multi_cite_1_1|>": "arxiv-131259", "<|multi_cite_1_2|>": "arxiv-132211", "<|multi_cite_1_4|>": "arxiv-153592", "<|multi_cite_1_5|>": "arxiv-162293", "<|multi_cite_2_1|>": "arxiv-154091", "<|multi_cite_2_2|>": "arxiv-202383", "<|cite_3|>": "ss-837764", "<|multi_cite_4_1|>": "arxiv-154091", "<|multi_cite_4_2|>": "arxiv-202383", "<|multi_cite_5_1|>": "arxiv-154091", "<|multi_cite_5_2|>": "arxiv-202383", "<|cite_6|>": "arxiv-154091", "<|multi_cite_7_1|>": "arxiv-154091", "<|multi_cite_7_2|>": "arxiv-193208", "<|multi_cite_8_1|>": "ss-987996", "<|multi_cite_8_2|>": "ss-2488221", "<|multi_cite_8_3|>": "ss-1283227", "<|multi_cite_8_4|>": "ss-922979", "<|multi_cite_8_5|>": "ss-1685917", "<|multi_cite_8_6|>": "ss-1256019", "<|multi_cite_8_7|>": "ss-1294606", "<|multi_cite_8_8|>": "ss-837764", "<|multi_cite_8_9|>": "ss-753982", "<|multi_cite_9_1|>": "arxiv-131259", "<|multi_cite_9_2|>": "arxiv-132211", "<|multi_cite_9_4|>": "arxiv-172600", "<|multi_cite_9_5|>": "arxiv-185968", "<|multi_cite_9_6|>": "arxiv-156723", "<|multi_cite_10_1|>": "arxiv-122171", "<|multi_cite_10_2|>": "arxiv-186659", "<|multi_cite_10_3|>": "ss-1458993", "<|multi_cite_11_1|>": "arxiv-177143", "<|multi_cite_11_2|>": "ss-837764", "<|multi_cite_11_3|>": "ss-1261245", "<|multi_cite_11_4|>": "arxiv-181513", "<|multi_cite_11_5|>": "arxiv-127383", "<|multi_cite_11_6|>": "arxiv-128774", "<|multi_cite_12_1|>": "arxiv-111625", "<|multi_cite_12_2|>": "arxiv-177143", "<|multi_cite_12_3|>": "arxiv-127383", "<|multi_cite_13_1|>": "ss-1973386", "<|multi_cite_13_2|>": "arxiv-152230", "<|multi_cite_13_3|>": "arxiv-153851", "<|multi_cite_13_4|>": "arxiv-188078", "<|multi_cite_13_5|>": "arxiv-118865", "<|multi_cite_13_6|>": "arxiv-140727", "<|multi_cite_14_1|>": "arxiv-143423", "<|multi_cite_14_2|>": "arxiv-152230", "<|multi_cite_15_1|>": "arxiv-111494", "<|multi_cite_15_2|>": "arxiv-120812", "<|multi_cite_15_3|>": "arxiv-183932", "<|multi_cite_15_4|>": "arxiv-183623", "<|multi_cite_16_1|>": "arxiv-68927", "<|multi_cite_16_2|>": "arxiv-112065", "<|multi_cite_16_3|>": "arxiv-94061", "<|multi_cite_16_4|>": "arxiv-105679", "<|multi_cite_16_5|>": "arxiv-122565", "<|multi_cite_17_1|>": "ss-781029", "<|multi_cite_17_2|>": "ss-817363", "<|multi_cite_17_3|>": "arxiv-120042", "<|multi_cite_17_4|>": "ss-1261244", "<|multi_cite_17_5|>": "ss-753982", "<|multi_cite_17_6|>": "ss-1297952", "<|multi_cite_17_7|>": "ss-753982", "<|multi_cite_18_1|>": "ss-1261244", "<|multi_cite_18_2|>": "ss-1016131", "<|multi_cite_19_1|>": "arxiv-154091", "<|multi_cite_19_2|>": "arxiv-202383", "<|cite_20|>": "arxiv-153592", "<|multi_cite_21_1|>": "arxiv-193208", "<|multi_cite_21_2|>": "arxiv-168397", "<|multi_cite_21_3|>": "arxiv-218478", "<|multi_cite_22_1|>": "arxiv-118928", "<|multi_cite_22_2|>": "arxiv-186659", "<|multi_cite_22_3|>": "arxiv-153946", "<|multi_cite_22_4|>": "arxiv-191961", "<|multi_cite_23_1|>": "arxiv-154091", "<|multi_cite_23_2|>": "arxiv-79856", "<|multi_cite_23_3|>": "ss-1263046", "<|cite_24|>": "arxiv-193208", "<|cite_25|>": "arxiv-218478", "<|multi_cite_26_1|>": "arxiv-131259", "<|multi_cite_26_2|>": "arxiv-153592", "<|multi_cite_26_3|>": "arxiv-154091", "<|multi_cite_27_1|>": "arxiv-110187", "<|multi_cite_27_2|>": "arxiv-142258", "<|multi_cite_27_3|>": "arxiv-173499", "<|multi_cite_28_1|>": "arxiv-120728", "<|multi_cite_28_2|>": "arxiv-120182", "<|multi_cite_29_1|>": "ss-1261245", "<|multi_cite_29_2|>": "ss-837764", "<|multi_cite_30_1|>": "arxiv-152230", "<|multi_cite_30_2|>": "arxiv-111494", "<|multi_cite_31_1|>": "arxiv-79856", 
"<|multi_cite_31_2|>": "arxiv-153592", "<|multi_cite_31_3|>": "arxiv-202383", "<|multi_cite_31_4|>": "arxiv-154091", "<|cite_32|>": "arxiv-168397"} |
1507.07629 | <|paper_start|> Title: Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
Abstract: Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades: Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than assembled by collecting and labelling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
Introduction
Benchmarks, challenges, and datasets have played an important role in the maturation of frame-based Computer Vision <|cite_start|> (Reference: Handling imbalanced datasets: a review: Learning classifiers from imbalanced or skewed datasets is an impor- tant topic, arising very often in practice in classification problems. In such problems, almost all the instances are labelled as one class, while far fewer in- stances are labelled as the other class, usually the more important class. It is obvious that traditional classifiers seeking an accurate performance over a full range of instances are not suitable to deal with imbalanced learning tasks, since they tend to classify all the data into the majority class, which is usually the less important class. This paper describes various techniques for handling im- balance dataset problems. Of course, a single article cannot be a complete re- view of all the methods and algorithms, yet we hope that the references cited will cover the major theoretical issues, guiding the researcher in interesting re- search directions and suggesting possible bias combinations that have yet to be explored.) <|cite_end|>. Quantitative evaluation of algorithms on common datasets and using common metrics allows for a fair and direct comparison between works. This ability to directly compare results encourages competition and motivates researchers by giving them a state-of-the-art target to beat. The importance of datasets extends beyond evaluating and comparing algorithms. Datasets also provide easy access to data for researchers, without which they would be required to gather and label their own data, which is a tedious and time-consuming task.
The task of gathering data is especially tedious for those working in Neuromorphic Vision. A lack of publicly available Neuromorphic data means that Neuromorphic researchers must record their own data, which is in contrast to frame-based Computer Vision, where datasets can be constructed by assembling samples from an abundance of publicly accessible images. Although the barrier to acquiring Neuromorphic Vision sensors has recently been lowered significantly by commercialization of sensors by iniLabs \footnote{\url{http://www.inilabs.com/}}, a lack of publicly available Neuromorphic Vision data and datasets persists.
The shortage of good datasets for Neuromorphic Vision is well recognised by the community and is in part a catalyst for the Frontiers special topic in which this paper appears. In a separate article in this same special topic we discuss the characteristics of a good dataset, the roles they have played in frame-based Computer Vision, and how lessons learnt in Computer Vision can help guide the development of Neuromorphic Vision. In this paper we focus on creation of Neuromorphic Vision datasets for object recognition.
An important characteristic of a good dataset is that it should be large and difficult enough to cause an algorithm to ``fail" (achieve significantly less than 100\% accuracy). Achieving 100\% accuracy on a dataset sounds impressive, but it does not adequately describe an algorithm's accuracy; it only provides a lower bound. A more accurate algorithm would also achieve 100\% on the same dataset, so a more difficult dataset is required to distinguish between the two algorithms. To ensure the longevity of a dataset, it should be sufficiently difficult to prevent 100\% accuracy from being achieved even in the face of significant algorithmic improvements.
However, many existing Neuromorphic Vision datasets have not been introduced with the aim of providing a long lived dataset. Rather, they have been introduced as a secondary component of a paper describing a new algorithm <|cite_start|> (Reference: HFirst: A Temporal Approach to Object Recognition: This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous Address Event Representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5\%$\pm$3.5\%) for a previously published four class card pip recognition task and an accuracy of 84.9\%$\pm$1.9\% for a new more difficult 36 class character recognition task.) <|cite_end|> <|cite_start|> (Reference: Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.) <|cite_end|>. 
These datasets are introduced only to serve the primary purpose of their paper, which is to show how the algorithm performs, and near 100\% accuracy on the dataset is soon achieved by subsequent improved algorithms.
In this paper, our primary aim is to introduce two new Neuromorphic Vision datasets with the goal that they will remain useful to the Neuromorphic community for years to come. Although we report the recognition accuracy of existing algorithms on the datasets, we do so only to provide an initial datapoint for future comparisons. We do not concern ourselves with modifying or improving the algorithms in this paper.
Rather than starting from scratch to record our own datasets, we leverage the existence of well established Computer Vision datasets. By converting Computer Vision datasets to Neuromorphic Vision datasets, we save ourselves considerable time and effort in choosing and collecting subject matter. Furthermore, as we show in Section~\ref{sec:conversion_process}, the conversion process can be automated with a Neuromorphic sensor recording live in-the-loop. Using datasets well known to Computer Vision also ensures easier comparison between communities. The two Computer Vision datasets we have chosen are MNIST <|cite_start|> (Reference: gradient-based learning applied to document recognition: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.) <|cite_end|> \footnote{\url{http://yann.lecun.com/exdb/mnist/}} and Caltech101 <|cite_start|> (Reference: Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories: ) <|cite_end|> \footnote{\url{http://www.vision.caltech.edu/Image_Datasets/Caltech101/}}. Each of these datasets is intended to play a different role described below. We use the names ``MNIST" and ``Caltech101" to refer to the original Computer Vision datasets, and the names ``N-MNIST" and ``N-Caltech101" to refer to our Neuromorphic versions.
MNIST contains only 10 different classes, the digits 0-9. The examples in the database are small (28$\times$28 pixels), so it can easily be downloaded, copied, and distributed. The small example size also reduces processing time, allowing for rapid testing and iteration of algorithms when prototyping new ideas. An example of the use of MNIST to explore new ideas can be found in Geoffrey Hinton's online presentation on ``Dark Knowledge" \footnote{\url{https://www.youtube.com/watch?v=EK61htlw8hY}}. We intend for N-MNIST to play a similar role in Neuromorphic Vision and have therefore intentionally kept the recorded examples at the same small scale of 28 $\times$ 28 pixels.
Caltech101 is a much more difficult dataset containing 101 different object classes, plus a background class. The images themselves are much larger, averaging 245 pixels in height and 302 pixels in width. While MNIST can be seen as a scratchpad on which to prototype ideas, Caltech101 provides a far more difficult challenge. We acknowledge that Caltech101 is now considered an easy dataset for Computer Vision given the very advanced state of Computer Vision algorithms, but we foresee it posing a significant challenge to the less mature field of Neuromorphic Vision.
Examples of other early Neuromorphic datasets for recognition include the four class card pip dataset from <|cite_start|> (Reference: Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.) <|cite_end|>, the 36 character dataset from <|cite_start|> (Reference: HFirst: A Temporal Approach to Object Recognition: This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous Address Event Representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5\%$\pm$3.5\%) for a previously published four class card pip recognition task and an accuracy of 84.9\%$\pm$1.9\% for a new more difficult 36 class character recognition task.) 
<|cite_end|>, the four class silhouette orientation dataset from <|cite_start|> (Reference: Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.) <|cite_end|>, and the 3 class posture dataset from <|cite_start|> (Reference: Feedforward categorization on aer motion events using cortex-like features in a spiking neural network: This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.) <|cite_end|>. Accuracy on these datasets is already high and they each include only a few stimulus samples (less than 100).
Others have attempted conversion of static images to Neuromorphic data, but the conversion of images proves difficult because the fundamental principle underlying Neuromorphic sensors is that they respond only to changes in the scene. Some have approached the problem using simulation. <|cite_start|> (Reference: Unsupervised learning of visual features through spike timing dependent plasticity: Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses.) <|cite_end|> assume spike times to be proportional to local image contrast for a static image, while <|cite_start|> (Reference: Real-Time Classification and Sensor Fusion with a Spiking Deep Belief Network: Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.) <|cite_end|> simulate image motion to create a spike sequence. However, simulations do not realistically approximate the noise present in recordings, which can take the form of spurious events, missing events, and variations in event latency.
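As a simple illustration of the simulation-based idea (a sketch only, not the exact scheme of the works cited above), the snippet below assigns each pixel of a static image a single spike whose latency decreases with local contrast, so that high-contrast pixels fire first; the Sobel-based contrast measure, the 0.1 s time window, and the 0.1 threshold are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def image_to_spike_latencies(image, t_window=0.1, threshold=0.1):
    """Give each sufficiently contrasted pixel one spike: strong contrast fires early.

    Returns an (N, 3) array of (x, y, t) events sorted by firing time, a toy
    stand-in for event-based data generated from a static image.
    """
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    contrast = np.hypot(gx, gy)
    contrast /= contrast.max() + 1e-9              # normalise to [0, 1]
    latency = t_window * (1.0 - contrast)          # high contrast -> short latency
    ys, xs = np.nonzero(contrast > threshold)
    events = np.stack([xs, ys, latency[ys, xs]], axis=1)
    return events[np.argsort(events[:, 2])]

events = image_to_spike_latencies(np.random.rand(28, 28))   # toy 28 x 28 input
\end{verbatim}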
Arguably the most complete dataset created thus far is the ``MNIST-DVS" dataset \footnote{\url{http://www2.imse-cnm.csic.es/caviar/MNISTDVS.html}}, which is recorded from an actual sensor <|cite_start|> (Reference: A 128$\,\times$ 128 1.5% Contrast Sensitivity 0.9% FPN 3 µs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers: Dynamic Vision Sensors (DVS) have recently appeared as a new paradigm for vision sensing and processing. They feature unique characteristics such as contrast coding under wide illumination variation, micro-second latency response to fast stimuli, and low output data rates (which greatly improves the efficiency of post-processing stages). They can track extremely fast objects (e.g., time resolution is better than 100 kFrames/s video) without special lighting conditions. Their availability has triggered a new range of vision applications in the fields of surveillance, motion analyses, robotics, and microscopic dynamic observations. One key DVS feature is contrast sensitivity, which has so far been reported to be in the 10-15% range. In this paper, a novel pixel photo sensing and transimpedance pre-amplification stage makes it possible to improve by one order of magnitude contrast sensitivity (down to 1.5%) and power (down to 4 mW), reduce the best reported FPN (Fixed Pattern Noise) by a factor of 2 (down to 0.9%), while maintaining the shortest reported latency (3 μs) and good Dynamic Range (120 dB), and further reducing overall area (down to 30 × 31 μm per pixel). The only penalty is the limitation of intrascene Dynamic Range to 3 decades. A 128 × 128 DVS test prototype has been fabricated in standard 0.35 μm CMOS and extensive experimental characterization results are provided.) <|cite_end|> viewing MNIST examples moving on a computer monitor. However, this approach is also problematic because motion on a monitor is discontinuous, consisting of discrete jumps in position at each monitor update. These discontinuities are clearly visible in the data as shown later in Fig.~\ref{fig:FourierBernabe}. Furthermore, the MNIST-DVS dataset only converted a 10 000 sample subset of the 70 000 samples in MNIST, preventing Neuromorphic researchers from directly comparing their algorithms to Computer Vision approaches using the same test and training splits. The MNIST-DVS examples have also been upscaled to 3 different scales, resulting in larger examples which are more computationally intensive to process than the smaller recordings we present.
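To make the monitor-update artifact concrete, the sketch below bins event timestamps and inspects the spectrum of the event rate; for data recorded from a display refreshing at 60 Hz one would expect peaks at 60 Hz and its harmonics. The synthetic burst timestamps and the 1 ms bin size are illustrative assumptions, not the actual MNIST-DVS recordings.
\begin{verbatim}
import numpy as np

def event_rate_spectrum(timestamps, bin_size=1e-3):
    """Spectrum of the binned event rate; refresh-locked recordings show peaks
    at the monitor update rate and its harmonics."""
    counts, _ = np.histogram(timestamps, bins=int(timestamps.max() / bin_size))
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(len(counts), d=bin_size)
    return freqs, spectrum

# Synthetic refresh-locked timestamps: a burst of events at every 60 Hz update.
updates = np.arange(0.0, 1.0, 1.0 / 60.0)
ts = np.sort(np.concatenate([u + 1e-4 * np.random.rand(50) for u in updates]))
freqs, spec = event_rate_spectrum(ts)
energy_at_60hz = spec[np.argmin(np.abs(freqs - 60.0))]      # pronounced peak
\end{verbatim}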
Our approach to converting images keeps the images static on a computer monitor and instead moves the sensor itself, as described in Section~\ref{sec:conversion_process}. This approach bears a resemblance to the retinal movements observed in primate and human experiments <|cite_start|> (Reference: Microsaccades: A microcosm for research on oculomotor control, attention, and visual perception.: ) <|cite_end|>. These movements are subconscious: they are present even when the subject tries to fixate on a point, and they are thought to play an important role in recognition in the primate visual system.
In the rest of this paper, we first describe our image conversion process in Section~\ref{sec:conversion_process} and use it to convert the MNIST and Caltech101 datasets. In Section~\ref{sec:Properties} we show examples of recordings and describe some of the properties of the recorded datasets. In Section~\ref{sec:Recognition} we briefly present recognition accuracies on the datasets using previously published algorithms, before concluding with a discussion in Section~\ref{sec:Discussion}. <|paper_end|>
<|paper_start|> Title: Gradient-based Adaptive Markov Chain Monte Carlo
Abstract: Gradient-based Adaptive Markov Chain Monte Carlo: We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets. We define a maximum entropy regularised objective function, referred to as generalised speed measure, which can be robustly optimised over the parameters of the proposal distribution by applying stochastic gradient optimisation. An advantage of our method compared to traditional adaptive MCMC methods is that the adaptation occurs even when candidate state values are rejected. This is a highly desirable property of any adaptation strategy because the adaptation starts in early iterations even if the initial proposal distribution is far from optimum. We apply the framework for learning multivariate random walk Metropolis and Metropolis-adjusted Langevin proposals with full covariance matrices, and provide empirical evidence that our method can outperform other MCMC algorithms, including Hamiltonian Monte Carlo schemes.
Introduction
\label{sec:introduction}
Markov chain Monte Carlo (MCMC) is a family of algorithms that provide a mechanism for generating dependent draws from arbitrarily complex distributions. The basic setup of an MCMC algorithm in any probabilistic (e.g.\ Bayesian) inference problem, with an intractable target
density $\pi(x)$,
is as follows. A discrete time Markov chain $\{X_t\}_{t=0}^\infty$ with transition kernel $P_\theta$, appropriately chosen from a collection of $\pi$-invariant kernels
$\{P_\theta(\cdot,\cdot)\}_{\theta \in \Theta}$, is generated and the ergodic averages
$\mu_t(F) = t^{-1} \sum_{i=0}^{t-1} F(X_i)$
are used as approximations to $E_\pi(F)$
for any real-valued function $F$.
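As a toy illustration of this setup, the following sketch runs a random walk Metropolis kernel (one concrete choice of $P_\theta$, with $\theta$ a scalar step size) and computes the ergodic average $\mu_t(F)$; the Gaussian target and the test function $F(x)=\sum_i x_i^2$ are illustrative assumptions rather than examples taken from this paper.
\begin{verbatim}
import numpy as np

def rwm_chain(log_pi, x0, theta, n_steps, rng):
    """Random walk Metropolis: propose y ~ N(x, theta^2 I), accept with prob alpha."""
    x = np.array(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        y = x + theta * rng.standard_normal(x.size)
        log_alpha = min(0.0, log_pi(y) - log_pi(x))   # symmetric proposal
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain[t] = x
    return chain

rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * np.sum(x ** 2)              # standard Gaussian target
chain = rwm_chain(log_pi, x0=np.zeros(2), theta=1.0, n_steps=5000, rng=rng)

# ergodic average mu_t(F) = t^{-1} sum_{i<t} F(X_i), here approximating E_pi[F] = 2
mu_t = np.cumsum(np.sum(chain ** 2, axis=1)) / np.arange(1, len(chain) + 1)
print(mu_t[-1])
\end{verbatim}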
Although in principle this sampling setup is simple, the actual implementation of any MCMC algorithm requires careful choice of $P_\theta$ because the properties of $\mu_t$ depend on $\theta$. In common kernels that lead to random walk Metropolis (RWM), Metropolis-adjusted Langevin (MALA) or Hamiltonian Monte Carlo (HMC) algorithms, the kernels $P_\theta$ are specified through an accept-reject mechanism in which the chain moves from time $t$ to time $t+1$ by first proposing candidate values $y$ from a density $q_\theta(y|x)$ and accepting them with some probability $\alpha(x_t,y)$ and setting $x_{t+1}=y$, or rejecting them and setting $x_{t+1}=x_t$. Since $\theta$ directly affects this acceptance probability, it is clear that one should choose $\theta$ so that the chain does not move too slowly or reject too many proposed values $y$, because in both these cases convergence to the stationary distribution will be slow. This has been recognised as early as in <|cite_start|> (Reference: Equation of State Calculations by Fast Computing Machines: ) <|cite_end|> and has initiated exciting research that has produced optimum average acceptance probabilities
for a series of algorithms; see <|cite_start|> (Reference: Weak convergence and optimal scaling of random walk Metropolis algorithms: This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension of a sequence of target densities, n, converges to `. When the proposal variance is appropriately scaled according to n, the sequence of stochastic processes formed by the first component of each Markov chain converges to the appropriate limiting Langevin diffusion process. The limiting diffusion approximation admits a straightforward efficiency maximization problem, and the resulting asymptotically optimal policy is related to the asymptotic acceptance rate of proposed moves for the algorithm. The asymptotically optimal acceptance rate is 0.234 under quite general conditions. The main result is proved in the case where the target density has a symmetric product form. Extensions of the result are discussed.) <|cite_end|> <|cite_start|> (Reference: Optimal scaling of discrete approximations to Langevin diffusions: We consider the optimal scaling problem for proposal distributions in Hastings–Metropolis algorithms derived from Langevin diffusions. We prove an asymptotic diffusion limit theorem and show that the relative efficiency of the algorithm can be characterized by its overall acceptance rate, independently of the target distribution. The asymptotically optimal acceptance rate is 0.574. We show that, as a function of dimension n, the complexity of the algorithm is O(n1/3), which compares favourably with the O(n) complexity of random walk Metropolis algorithms. We illustrate this comparison with some example simulations.) <|cite_end|> <|cite_start|> (Reference: Optimal scaling for various Metropolis-Hastings algorithms: ) <|cite_end|> <|cite_start|> (Reference: Componentwise adaptation for high dimensional MCMC: ) <|cite_end|> <|cite_start|> (Reference: Weak convergence of Metropolis algorithms for non-i.i.d. target distributions: In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method for determining the appropriate form for the scaling of the proposal distribution as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when there does not exist any component with a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.) <|cite_end|> <|cite_start|> (Reference: Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234: ) <|cite_end|> <|cite_start|> (Reference: Examples of adaptive {MCMC: We investigate the use of adaptive MCMC algorithms to automatically tune the Markov chain parameters during a run. Examples include the Adaptive Metropolis (AM) multivariate algorithm of Haario, Saksman, and Tamminen (2001), Metropolis-within-Gibbs algorithms for nonconjugate hierarchical models, regionally adjusted Metropolis algorithms, and logarithmic scalings. Computer simulations indicate that the algorithms perform very well compared to nonadaptive algorithms, even in high dimension.) 
<|cite_end|> <|cite_start|> (Reference: Efficient sampling using metropolis algorithms: Applications of optimal scaling results: We recently considered the optimal scaling problem of Metropolis algorithms for multidimensional target distributions with non-IID components. The results that were proven have wide applications and the aim of this article is to show how practitioners can take advantage of them. In particular, we use several examples to illustrate the casewhere the asymptotically optimal acceptance rate is the usual 0.234, and also the latest developments where smaller acceptance rates should be adopted for optimal sampling from the target distributions involved. We study the impact of the proposal scaling on the performance of the algorithm, and finally perform simulation studies exploring the efficiency of the algorithm when sampling from some popular statistical models.) <|cite_end|> <|cite_start|> (Reference: Optimal proposal distributions and adaptive MCMC: We review recent work concerning optimal proposal scalings for Metropolis-Hastings MCMC algorithms, and adaptive MCMC algorithms for trying to improve the algorithm on the fly.) <|cite_end|> <|cite_start|> (Reference: Optimal tuning of the hybrid Monte Carlo algorithm: We investigate the properties of the Hybrid Monte Carlo algorithm (HMC) in high dimensions.
HMC develops a Markov chain reversible w.r.t. a given target distribution π by using separable Hamiltonian dynamics with potential −log π. The additional momentum variables are chosen at random from the Boltzmann distribution and the continuous-time Hamiltonian dynamics are then discretised using the leapfrog scheme. The induced bias is removed via a Metropolis-Hastings accept/reject rule. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the leapfrog step-size h should be scaled as h = l × d^{-1/4}. Therefore, in high dimensions, HMC requires O(d^{1/4}) steps to traverse the state space. We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places). This is the choice which optimally balances the cost of generating a proposal, which decreases as l increases (because fewer steps are required to reach the desired final integration time), against the cost related to the average number of proposals required to obtain acceptance, which increases as l increases) <|cite_end|>.
Such optimal average acceptance probabilities provide basic guidelines for adapting single
step size parameters to achieve certain average acceptance rates.
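A common way of exploiting these guidelines in practice is a Robbins-Monro style update of a single log step-size towards a target acceptance rate (e.g.\ 0.234 for RWM); the sketch below is a generic illustration of this idea under assumed settings (target rate, decay exponent, toy Gaussian target), not a scheme taken from this paper.
\begin{verbatim}
import numpy as np

def rwm_tune_stepsize(log_pi, x0, n_steps, target_rate=0.234, rng=None):
    """Adapt the log step-size so the acceptance rate approaches target_rate."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    log_sigma = 0.0
    for t in range(1, n_steps + 1):
        y = x + np.exp(log_sigma) * rng.standard_normal(x.size)
        alpha = np.exp(min(0.0, log_pi(y) - log_pi(x)))
        if rng.uniform() < alpha:
            x = y
        # Robbins-Monro step: grow sigma when accepting too often, shrink it otherwise
        log_sigma += (alpha - target_rate) / t ** 0.6
    return np.exp(log_sigma)

sigma = rwm_tune_stepsize(lambda z: -0.5 * np.sum(z ** 2), np.zeros(10), 20000)
\end{verbatim}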
More sophisticated adaptive MCMC algorithms that can learn a full set of parameters $\theta$, such as a covariance matrix,
borrow information from the history of the chain to optimise
some criterion reflecting the performance of the Markov chain <|cite_start|> (Reference: An {{Adaptive Metropolis Algorithm: A proper choice of a proposal distribution for Markov chain Monte Carlo methods, for example for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the algorithm. In this paper we introduce an adaptive Metropolis (AM) algorithm, where the Gaussian proposal distribution is updated along the process using the full information cumulated so far. Due to the adaptive nature of the process, the AM algorithm is non-Markovian, but we establish here that it has the correct ergodic properties. We also include the results of our numerical tests, which indicate that the AM algorithm competes well with traditional Metropolis-Hastings algorithms, and demonstrate that the AM algorithm is easy to use in practical computation.) <|cite_end|> <|cite_start|> (Reference: On adaptive Markov chain Monte Carlo algorithms: We look at adaptive Markov chain Monte Carlo algorithms that generate stochastic processes based on sequences of transition kernels, where each transition kernel is allowed to depend on the history of the process. We show under certain conditions that the stochastic process generated is ergodic, with appropriate stationary distribution. We use this result to analyse an adaptive version of the random walk Metropolis algorithm where the scale parameter o is sequentially adapted using a Robbins Monro type algorithm in order to find the optimal scale parameter aopt. We close with a simulation example.) <|cite_end|> <|cite_start|> (Reference: {Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms: We consider basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions. We prove convergence in distribution and a weak law of large numbers. We also give counterexamples to demonstrate that the assumptions we make are not redundant.) <|cite_end|> <|cite_start|> (Reference: Adaptive independent metropolis-hastings by fast estimation of mixtures of normals: Adaptive Metropolis–Hastings samplers use information obtained from previous draws to tune the proposal distribution automatically and repeatedly. Adaptation needs to be done carefully to ensure convergence to the correct target distribution because the resulting chain is not Markovian. We construct an adaptive independent Metropolis–Hastings sampler that uses a mixture of normals as a proposal distribution. To take full advantage of the potential of adaptive sampling our algorithm updates the mixture of normals frequently, starting early in the chain. The algorithm is built for speed and reliability and its sampling performance is evaluated with real and simulated examples. Our article outlines conditions for adaptive sampling to hold. An online supplement to the article gives a proof of convergence and Gauss code to implement the algorithms.) <|cite_end|> <|cite_start|> (Reference: On the ergodicity properties of some adaptive MCMC algorithms: In this paper we study the ergodicity properties of some adaptive Markov chain Monte Carlo algorithms (MCMC) that have been recently proposed in the literature. We prove that under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem. 
We prove that the conditions required are satisfied for the independent Metropolis–Hastings algorithm and the random walk Metropolis algorithm with symmetric increments. Finally, we propose an application of these results to the case where the proposal distribution of the Metropolis–Hastings update is a mixture of distributions from a curved exponential family.) <|cite_end|> <|cite_start|> (Reference: On the efficiency of adaptive MCMC algorithms: We study a class of adaptive Markov Chain Monte Carlo (MCMC) processes which aim at behaving as an "optimal" target process via a learning procedure. We show, under appropriate conditions, that the adaptive process and "optimal" (nonadaptive) MCMC algorithm share identical asymptotic properties. The special case of adaptive MCMC algorithms governed by stochastic approximation is considered in details and we apply our results to the adaptive Metropolis algorithm of [1]. We also propose a new class of adaptive MCMC algorithms, called quasi-perfect adaptive MCMC which possesses appealing theoretical and practical properties, as demonstrated through numerical simulations.) <|cite_end|> <|cite_start|> (Reference: Bayesian Time Series Models: Adaptive Markov chain Monte Carlo: theory and methods: In general, the transition probability P of the Markov chain depends on some tuning parameter θ defined on some space Θ which can be either finite dimensional or infinite dimensional. The success of the MCMC procedure depends crucially upon a proper choice of θ. To illustrate, consider the standard Metropolis-Hastings (MH) algorithm. For simplicity, we assume that π has a density also denoted by π with respect to the Lebesgue measure on X = R endowed with its Borel σ-field X . Given that the chain is at x, a candidate y is sampled from a proposal transition density q(x, ·) and is accepted with probability α(x, y) defined as) <|cite_end|>. Such methods are typically non gradient-based and the basic strategy they use
is to sequentially fit the proposal $q_\theta(y|x)$ to the history of states $x_{t-1}, x_t,\ldots$, while ignoring the rejected state values. This can result in very slow adaptation because the initial Markov chain simulations
are based on poor initial $\theta$ and the generated states, from which $\theta$ is learnt,
are highly correlated and far from the target. The authors in <|cite_start|> (Reference: Examples of adaptive {MCMC: We investigate the use of adaptive MCMC algorithms to automatically tune the Markov chain parameters during a run. Examples include the Adaptive Metropolis (AM) multivariate algorithm of Haario, Saksman, and Tamminen (2001), Metropolis-within-Gibbs algorithms for nonconjugate hierarchical models, regionally adjusted Metropolis algorithms, and logarithmic scalings. Computer simulations indicate that the algorithms perform very well compared to nonadaptive algorithms, even in high dimension.) <|cite_end|> call such adaptive strategies `greedy' in the sense that they try to adapt too closely to initial information from the output and take considerable time to recover from misleading initial information.
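For concreteness, a minimal version of such a history-based scheme, in the spirit of the adaptive Metropolis algorithm cited above, is sketched below: the proposal covariance is re-estimated from the past states of the chain only, and rejected proposals contribute nothing to the adaptation, which is precisely the limitation discussed here. The burn-in length, jitter and scaling constant are standard but assumed choices.
\begin{verbatim}
import numpy as np

def adaptive_metropolis(log_pi, x0, n_steps, adapt_start=500, eps=1e-6, rng=None):
    """RWM whose proposal covariance is the empirical covariance of the chain history."""
    rng = rng or np.random.default_rng(0)
    d = len(x0)
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    cov = np.eye(d)
    scale = 2.38 ** 2 / d                       # classical RWM scaling factor
    for t in range(n_steps):
        if t >= adapt_start:                    # fit the proposal to past states only
            cov = np.cov(np.array(history).T) + eps * np.eye(d)
        y = rng.multivariate_normal(x, scale * cov)
        if np.log(rng.uniform()) < min(0.0, log_pi(y) - log_pi(x)):
            x = y
        history.append(x.copy())                # a rejected y is discarded entirely
    return np.array(history)
\end{verbatim}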
In this paper, we develop faster and more robust gradient-based adaptive MCMC algorithms that make use of the gradient of the
target, $\nabla \log \pi(x)$,
and learn from both the actual states of the chain and the proposed (and possibly rejected) states.
The key idea is to define and maximise w.r.t.\ $\theta$ an entropy regularised objective function that promotes
high acceptance rates and high values for the entropy of the proposal distribution. This objective function, referred to as generalised
speed measure, is inspired by the speed measure of the infinite-dimensional limiting diffusion process that
captures the speed at which a Markov chain converges to its stationary distribution <|cite_start|> (Reference: Optimal scaling for various Metropolis-Hastings algorithms: ) <|cite_end|>.
We maximise this objective function by applying stochastic gradient variational inference techniques such as those
based on the reparametrisation trick <|cite_start|> (Reference: International Conference on Learning Representations (ICLR): ) <|cite_end|> <|cite_start|> (Reference: Proceedings of The 32nd International Conference on Machine Learning: ) <|cite_end|> <|cite_start|> (Reference: Proceedings of The 32nd International Conference on Machine Learning: ) <|cite_end|>.
An advantage of our algorithm compared to traditional adaptive MCMC methods is that the adaptation occurs even when candidate state values are rejected. In fact, the adaptation can be faster when candidate values $y$ are rejected since then we always make full use of the gradient $\nabla \log \pi(y)$ evaluated at the rejected $y$. This allows
the adaptation to start in early iterations even if the initial proposal distribution is far from optimum
and the chain is not moving. We apply the method for learning multivariate RWM and MALA
proposals where we adapt full covariance matrices parametrised efficiently using Cholesky factors.
In the experiments we apply our algorithms to multivariate Gaussian targets and Bayesian logistic regression and empirically show that they outperform several other baselines, including advanced HMC schemes.
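To illustrate the kind of update this leads to, the sketch below adapts the Cholesky factor of a Gaussian RWM proposal by stochastic gradient ascent on a single-sample estimate of an entropy-regularised acceptance objective, using the reparametrisation $y = x + L\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$. It is a simplified illustration consistent with the description above rather than the exact algorithm of the paper; the regularisation weight beta, the learning rate, the initialisation and the handling of the Cholesky diagonal are assumptions.
\begin{verbatim}
import numpy as np

def adapt_rwm_cholesky(log_pi, grad_log_pi, x0, n_steps, beta=1.0, lr=1e-3, rng=None):
    """Adapt the Cholesky factor L of a Gaussian RWM proposal y = x + L @ eps by
    stochastic gradient ascent on E[log alpha] + beta * H[q_L] (reparametrised)."""
    rng = rng or np.random.default_rng(0)
    d = len(x0)
    x = np.array(x0, dtype=float)
    L = 0.1 * np.eye(d)                        # deliberately poor initial proposal
    for t in range(n_steps):
        eps = rng.standard_normal(d)
        y = x + L @ eps                        # reparametrised proposal
        log_alpha = min(0.0, log_pi(y) - log_pi(x))

        g = np.zeros_like(L)
        if log_alpha < 0.0:                    # gradient of log alpha w.r.t. L
            g += np.tril(np.outer(grad_log_pi(y), eps))
        # gradient of the Gaussian entropy term sum_i log L_ii
        g[np.diag_indices(d)] += beta / np.diag(L)

        L += lr * g                            # stochastic gradient ascent step
        if np.log(rng.uniform()) < log_alpha:  # the chain itself still uses M-H
            x = y
    return x, L

# usage with a toy Gaussian target (grad log pi(z) = -z)
x_fin, L_fin = adapt_rwm_cholesky(lambda z: -0.5 * np.sum(z ** 2),
                                  lambda z: -z, np.zeros(5), 5000)
\end{verbatim}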
Related Work
\label{sec:related}
The connection of our method with traditional adaptive MCMC methods has been discussed in Section \ref{sec:introduction}.
Here, we analyse additional related works that make use of gradient-based optimisation
and specialised objective functions or algorithms to train MCMC proposal distributions.
The work in <|cite_start|> (Reference: Generalizing Hamiltonian Monte Carlo with Neural Networks: We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.) <|cite_end|> proposed a criterion to tune MCMC proposals
based on maximising a modified version of the expected squared jumped distance,
$\int q_{\theta}(y | x_t) ||y - x_t||^2 \alpha(x_t, y; \theta) d y$, previously considered in <|cite_start|> (Reference: Adaptively Scaling the Metropolis Algorithm Using Expected Squared Jumped Distance: Using existing theory on efficient jumping rules and on adaptive MCMC, we construct and demonstrate the effectiveness of a workable scheme for improving the efficiency of Metropolis algorithms. A good choice of the proposal distribution is crucial for the rapid convergence of the Metropolis algorithm. In this paper, given a family of parametric Markovian kernels, we develop an algorithm for optimizing the kernel by maximizing the expected squared jumped distance, an objective function that characterizes the Markov chain under its d-dimensional stationary distribution. The algorithm uses the information accumulated by a single path and adapts the choice of the parametric kernel in the direction of the local maximum of the objective function using multiple importance sampling techniques. We follow a two-stage approach: a series of adaptive optimization steps followed by an MCMC run with fixed kernel. It is not necessary for the adaptation itself to converge. Using several examples, we demonstrate the effectiveness of our method, even for cases in which the Metropolis transition kernel is initialized at very poor values.) <|cite_end|>.
Specifically, the authors in <|cite_start|> (Reference: Generalizing Hamiltonian Monte Carlo with Neural Networks: We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.) <|cite_end|> firstly observe that the expected squared jumped distance
may not encourage mixing across all dimensions of $x$\footnote{Because
the additive form of $||y - x_t||^2 = \sum_i (y_i - x_{t i})^2$ implies that even when some dimensions
might not be moving at all (the corresponding distance terms are zero),
the overall sum can still be large.} and then try to resolve this
by including a reciprocal term (see Section 4.2 in their paper). The
generalised speed measure proposed in this paper is rather different from such criteria
since it encourages joint exploration of all dimensions of $x$ by applying maximum entropy
regularisation, which by construction penalises ``dimensions that do not move'' since the entropy becomes minus infinity in such cases.
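For reference, the expected squared jumped distance above can be approximated by simple Monte Carlo over proposals drawn at a fixed state; the snippet below is a generic illustration (the symmetric Gaussian proposal and the sample size are assumptions) and also shows the behaviour noted in the footnote: a proposal that barely moves one coordinate can still score a high value because the large jumps in the other coordinate dominate the sum.
\begin{verbatim}
import numpy as np

def esjd(log_pi, x, propose, n_samples=1000, rng=None):
    """Monte Carlo estimate of E_q[ ||y - x||^2 * alpha(x, y) ] at a fixed state x."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        y = propose(x, rng)
        alpha = np.exp(min(0.0, log_pi(y) - log_pi(x)))   # symmetric proposal assumed
        total += np.sum((y - x) ** 2) * alpha
    return total / n_samples

# the second coordinate barely moves, yet the criterion remains large
propose = lambda x, rng: x + np.array([2.0, 1e-3]) * rng.standard_normal(2)
print(esjd(lambda z: -0.5 * np.sum(z ** 2), np.zeros(2), propose))
\end{verbatim}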
Another important difference is that in our method the optimisation
is performed in the log space by propagating gradients through the logarithm of the
M-H acceptance probability, i.e.\ through $\log \alpha(x_t, y; \theta)$ and not through $\alpha(x_t, y; \theta)$. This is exactly
analogous to other numerically stable objectives such as variational lower bounds and log-likelihoods, and, like those, our method leads to numerically stable optimisation for arbitrarily large dimensionality of $x$
and complex targets $\pi(x)$.
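Concretely, working in log space means accumulating log-density differences instead of exponentiated ratios; a hedged sketch of the quantity that gradients would be propagated through, written here for a MALA-style proposal for clarity and not taken verbatim from the paper, is:
\begin{verbatim}
import numpy as np

def log_alpha_mala(log_pi, grad_log_pi, x, y, step):
    """log M-H acceptance probability for the MALA proposal
    q(y | x) = N(y; x + 0.5 * step * grad_log_pi(x), step * I), computed entirely
    in log space; normalising constants cancel in the ratio and are omitted."""
    def log_q(b, a):                           # log q(b | a) up to an additive constant
        mean = a + 0.5 * step * grad_log_pi(a)
        return -0.5 * np.sum((b - mean) ** 2) / step
    return min(0.0, log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x))
\end{verbatim}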
In another related work, the authors in <|cite_start|> (Reference: Metropolis-Hastings view on variational inference and adversarial training: A significant part of MCMC methods can be considered as the Metropolis-Hastings (MH) algorithm with different proposal distributions. From this point of view, the problem of constructing a sampler can be reduced to the question - how to choose a proposal for the MH algorithm? To address this question, we propose to learn an independent sampler that maximizes the acceptance rate of the MH algorithm, which, as we demonstrate, is highly related to the conventional variational inference. For Bayesian inference, the proposed method compares favorably against alternatives to sample from the posterior distribution. Under the same approach, we step beyond the scope of classical MCMC methods and deduce the Generative Adversarial Networks (GANs) framework from scratch, treating the generator as the proposal and the discriminator as the acceptance test. On real-world datasets, we improve Frechet Inception Distance and Inception Score, using different GANs as a proposal distribution for the MH algorithm. In particular, we demonstrate improvements of recently proposed BigGAN model on ImageNet.) <|cite_end|> considered minimising the
KL divergence $\text{KL}[\pi(x_t) q_{\theta}(y_t | x_t) || \pi(y_t) q_{\theta} (x_t | y_t) ]$.
However, this loss for standard proposal schemes, such as RWM and MALA,
leads to degenerate deterministic solutions where $q_{\theta}(y_t | x_t)$ collapses to a delta function.
Therefore, <|cite_start|> (Reference: Metropolis-Hastings view on variational inference and adversarial training: A significant part of MCMC methods can be considered as the Metropolis-Hastings (MH) algorithm with different proposal distributions. From this point of view, the problem of constructing a sampler can be reduced to the question - how to choose a proposal for the MH algorithm? To address this question, we propose to learn an independent sampler that maximizes the acceptance rate of the MH algorithm, which, as we demonstrate, is highly related to the conventional variational inference. For Bayesian inference, the proposed method compares favorably against alternatives to sample from the posterior distribution. Under the same approach, we step beyond the scope of classical MCMC methods and deduce the Generative Adversarial Networks (GANs) framework from scratch, treating the generator as the proposal and the discriminator as the acceptance test. On real-world datasets, we improve Frechet Inception Distance and Inception Score, using different GANs as a proposal distribution for the MH algorithm. In particular, we demonstrate improvements of recently proposed BigGAN model on ImageNet.) <|cite_end|> maximised this objective for the independent M-H sampler
where the collapsing problem does not occur. The entropy regularised objective we introduced is different
and it can adapt arbitrary MCMC proposal distributions, and not just the independent M-H sampler.
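For completeness, the KL objective above can be estimated by Monte Carlo when (approximate) draws from $\pi$ are available, for instance from a long pilot chain; the estimator below is a generic sketch and the helper names are hypothetical.
\begin{verbatim}
import numpy as np

def kl_proposal_objective(log_pi, log_q, sample_q, x_samples, rng=None):
    """Monte Carlo estimate of KL[ pi(x) q(y|x) || pi(y) q(x|y) ] from samples x ~ pi.
    An unnormalised log_pi is fine, since the normalising constant cancels."""
    rng = rng or np.random.default_rng(0)
    vals = []
    for x in x_samples:
        y = sample_q(x, rng)
        vals.append(log_pi(x) + log_q(y, x) - log_pi(y) - log_q(x, y))
    return float(np.mean(vals))
\end{verbatim}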
There has also been work on learning flexible MCMC proposals using neural
networks <|cite_start|> (Reference: A-NICE-MC: Adversarial Training for MCMC: Existing Markov Chain Monte Carlo (MCMC) methods are either based on general-purpose and domain-agnostic schemes which can lead to slow convergence, or hand-crafting of problem-specific proposals by an expert. We propose A-NICE-MC, a novel method to train flexible parametric Markov chain kernels to produce samples with desired properties. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. A-NICE-MC provides the first framework to automatically design efficient domain-specific MCMC proposals. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.) <|cite_end|> <|cite_start|> (Reference: Generalizing Hamiltonian Monte Carlo with Neural Networks: We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.) <|cite_end|> <|cite_start|> (Reference: Auxiliary Variational MCMC: ) <|cite_end|> <|cite_start|> (Reference: Proceedings of The 32nd International Conference on Machine Learning: ) <|cite_end|>.
For instance, <|cite_start|> (Reference: A-NICE-MC: Adversarial Training for MCMC: Existing Markov Chain Monte Carlo (MCMC) methods are either based on general-purpose and domain-agnostic schemes which can lead to slow convergence, or hand-crafting of problem-specific proposals by an expert. We propose A-NICE-MC, a novel method to train flexible parametric Markov chain kernels to produce samples with desired properties. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. A-NICE-MC provides the first framework to automatically design efficient domain-specific MCMC proposals. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.) <|cite_end|> use volume preserving flows and
an adversarial objective, <|cite_start|> (Reference: Generalizing Hamiltonian Monte Carlo with Neural Networks: We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.) <|cite_end|> use the modified
expected jumped distance, discussed earlier, to learn neural network-based
extensions of HMC, while <|cite_start|> (Reference: Auxiliary Variational MCMC: ) <|cite_end|> <|cite_start|> (Reference: Proceedings of The 32nd International Conference on Machine Learning: ) <|cite_end|> use auxiliary variational inference.
The need to train neural networks can add a significant computational cost,
and, from the end-user's point of view, these neural adaptive samplers might be hard to tune, especially in high dimensions.
Notice that
the generalised speed measure we proposed in this paper could possibly
be used to train neural adaptive samplers as well. However, to really
obtain practical algorithms we need to ensure that training has a small cost that does not
overwhelm the possible benefits in terms of effective sample size.
Finally, the generalised speed measure that is based on
entropy regularisation shares similarities with other objectives used for learning
probability distributions, such as in variational Bayesian inference, where the variational lower bound
includes an entropy term <|cite_start|> (Reference: Bayesian structure learning in graphical models: ) <|cite_end|> <|cite_start|> (Reference: Pattern Recognition and Machine Learning (Information Science and Statistics): ) <|cite_end|> and reinforcement learning (RL) where
maximum-entropy regularised policy gradients are able to
estimate more explorative policies <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> <|cite_start|> (Reference: Asynchronous Methods for Deep Reinforcement Learning: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.) <|cite_end|>. Further discussion on
the resemblance between our algorithm and RL is given in the Supplement.
\vspace{-2mm} <|paper_end|>
<|paper_start|> Title: GTA: Guided Transfer of Spatial Attention from Object-Centric Representations
Abstract: GTA: Guided Transfer of Spatial Attention from Object-Centric Representations: Utilizing well-trained representations in transfer learning often results in superior performance and faster convergence compared to training from scratch. However, even if such good representations are transferred, a model can easily overfit the limited training dataset and lose the valuable properties of the transferred representations. This phenomenon is more severe in ViT due to its low inductive bias. Through experimental analysis using attention maps in ViT, we observe that the rich representations deteriorate when trained on a small dataset. Motivated by this finding, we propose a novel and simple regularization method for ViT called Guided Transfer of spatial Attention (GTA). Our proposed method regularizes the self-attention maps between the source and target models. A target model can fully exploit the knowledge related to object localization properties through this explicit regularization. Our experimental results show that the proposed GTA consistently improves the accuracy across five benchmark datasets, especially when the amount of training data is small.
Introduction
\label{sec:intro}
The Vision Transformer (ViT) has demonstrated impressive performance in a variety of computer vision tasks such as image classification <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|> <|cite_start|> (Reference: Going deeper with Image Transformers: Transformers have been recently adapted for large scale image classification, achieving high scores shaking up the long supremacy of convolutional neural networks. However the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformers architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth, for instance we obtain 86.5% top-1 accuracy on Imagenet when training with no external data, we thus attain the current SOTA with less FLOPs and parameters. Moreover, our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data. We share our code and models.) <|cite_end|> <|cite_start|> (Reference: DeiT III: Revenge of the ViT: A Vision Transformer (ViT) is a simple neural architecture amenable to serve several computer vision tasks. It has limited built-in architectural priors, in contrast to more recent architectures that incorporate priors either about the input data or of specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BerT-like pre-training like BeiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new simple data-augmentation procedure with only 3 augmentations, closer to the practice in self-supervised learning. Our evaluations on Image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning and semantic segmentation show that our procedure outperforms by a large margin previous fully supervised training recipes for ViT. It also reveals that the performance of our ViT trained with supervision is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.) 
<|cite_end|> <|cite_start|> (Reference: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{https://github.com/microsoft/Swin-Transformer}.) <|cite_end|> <|cite_start|> (Reference: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet: Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness for fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and tokens length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformer motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 3.0\% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. 
For example, T2T-ViT with comparable size to ResNet50 (21.5M parameters) can achieve 83.3\% top1 accuracy in image resolution 384$\times$384 on ImageNet. (Code: https://github.com/yitu-opensource/T2T-ViT)) <|cite_end|> <|cite_start|> (Reference: Swin Transformer V2: Scaling Up Capacity and Resolution: Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536$\times$1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time. Code is available at \url{https://github.com/microsoft/Swin-Transformer}.) <|cite_end|>, segmentation <|cite_start|> (Reference: DeiT III: Revenge of the ViT: A Vision Transformer (ViT) is a simple neural architecture amenable to serve several computer vision tasks. It has limited built-in architectural priors, in contrast to more recent architectures that incorporate priors either about the input data or of specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BerT-like pre-training like BeiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new simple data-augmentation procedure with only 3 augmentations, closer to the practice in self-supervised learning. Our evaluations on Image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning and semantic segmentation show that our procedure outperforms by a large margin previous fully supervised training recipes for ViT. It also reveals that the performance of our ViT trained with supervision is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.) <|cite_end|> <|cite_start|> (Reference: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. 
Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{https://github.com/microsoft/Swin-Transformer}.) <|cite_end|> <|cite_start|> (Reference: Swin Transformer V2: Scaling Up Capacity and Resolution: Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536$\times$1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time. Code is available at \url{https://github.com/microsoft/Swin-Transformer}.) <|cite_end|> <|cite_start|> (Reference: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet: Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. 
However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness for fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and tokens length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformer motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 3.0\% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. For example, T2T-ViT with comparable size to ResNet50 (21.5M parameters) can achieve 83.3\% top1 accuracy in image resolution 384$\times$384 on ImageNet. (Code: https://github.com/yitu-opensource/T2T-ViT)) <|cite_end|>, object detection <|cite_start|> (Reference: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{https://github.com/microsoft/Swin-Transformer}.) <|cite_end|> <|cite_start|> (Reference: Swin Transformer V2: Scaling Up Capacity and Resolution: Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. 
This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536$\times$1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time. Code is available at \url{https://github.com/microsoft/Swin-Transformer}.) <|cite_end|> <|cite_start|> (Reference: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet: Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness for fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and tokens length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformer motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 3.0\% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. For example, T2T-ViT with comparable size to ResNet50 (21.5M parameters) can achieve 83.3\% top1 accuracy in image resolution 384$\times$384 on ImageNet. (Code: https://github.com/yitu-opensource/T2T-ViT)) <|cite_end|>, and image generation <|cite_start|> (Reference: {Generative Pretraining From Pixels: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. 
We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. An even larger model trained on a mix-ture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features.) <|cite_end|> <|cite_start|> (Reference: Image Transformer: Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.) <|cite_end|> <|cite_start|> (Reference: StyleSwin: Transformer-based GAN for High-resolution Image Generation: Despite the tantalizing success in a broad of vision tasks, transformers have not yet demonstrated on-par ability as ConvNets in high-resolution image generative modeling. In this paper, we seek to explore using pure transformers to build a generative adversarial network for high-resolution image synthesis. To this end, we believe that local attention is crucial to strike the balance between computational efficiency and modeling capacity. Hence, the proposed generator adopts Swin transformer in a style-based architecture. To achieve a larger receptive field, we propose double attention which simultaneously leverages the context of the local and the shifted windows, leading to improved generation quality. Moreover, we show that offering the knowledge of the absolute position that has been lost in window-based transformers greatly benefits the generation quality. The proposed StyleSwin is scalable to high resolutions, with both the coarse geometry and fine structures benefit from the strong expressivity of transformers. However, blocking artifacts occur during high-resolution synthesis because performing the local attention in a block-wise manner may break the spatial coherency. To solve this, we empirically investigate various solutions, among which we find that employing a wavelet discriminator to examine the spectral discrepancy effectively suppresses the artifacts. 
Extensive experiments show the superiority over prior transformer-based GANs, especially on high resolutions, e.g., 1024x1024. The StyleSwin, without complex training strategies, excels over StyleGAN on CelebA-HQ 1024, and achieves on-par performance on FFHQ-1024, proving the promise of using transformers for high-resolution image generation. The code and models will be available at https://github.com/microsoft/StyleSwin.) <|cite_end|>, surpassing traditional convolutional neural networks (CNNs). Unlike CNNs that rely entirely on convolution operations which are designed to capture locality, neighborhood structure, and translation equivariance, only the multi-layer perceptron (MLP) component in ViT is responsible for learning those characteristics. The main difference between ViT and CNNs is the self-attention mechanism in the multi-head self-attention (MSA) layer, which globally aggregates spatial features from input tokens with normalized importance <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|>. ViT is known to have a lower inductive bias compared to CNNs, meaning that it requires more training data to obtain a well-performing model. As a result, when the available training data is limited, ViT generally shows lower performance than CNNs <|cite_start|> (Reference: Vision Transformer for Small-Size Datasets: Recently, the Vision Transformer (ViT), which applied the transformer structure to the image classification task, has outperformed convolutional neural networks. However, the high performance of the ViT results from pre-training using a large-size dataset such as JFT-300M, and its dependence on a large dataset is interpreted as due to low locality inductive bias. This paper proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA), which effectively solve the lack of locality inductive bias and enable it to learn from scratch even on small-size datasets. Moreover, SPT and LSA are generic and effective add-on modules that are easily applicable to various ViTs. Experimental results show that when both SPT and LSA were applied to the ViTs, the performance improved by an average of 2.96% in Tiny-ImageNet, which is a representative small-size dataset. Especially, Swin Transformer achieved an overwhelming performance improvement of 4.08% thanks to the proposed SPT and LSA.) <|cite_end|>. In a recent study <|cite_start|> (Reference: How Do Vision Transformers Work?: The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. 
We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at https://github.com/xxxnell/how-do-vits-work.) <|cite_end|>, the authors argued that MSA has both advantages and disadvantages. The advantage is its ability to flatten the loss landscape, which can improve accuracy and robustness in large data regimes. On the other hand, the disadvantage is that MSA allows the negative Hessian eigenvalues when trained on limited training data. These negative Hessian eigenvalues can lead to a non-convex loss landscape, which can disturb model training. The study also demonstrated that self-attention can be interpreted as a \textit{large-sized} and \textit{data-specific} spatial kernel <|cite_start|> (Reference: How Do Vision Transformers Work?: The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at https://github.com/xxxnell/how-do-vits-work.) <|cite_end|>.
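For completeness, the standard self-attention operation in ViT can be written, in common notation rather than that of any particular cited work, as
\[
\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,\qquad Q=XW_Q,\;\; K=XW_K,\;\; V=XW_V,
\]
where $X$ denotes the input tokens of an MSA layer and $d_k$ is the per-head dimension. The pre-softmax similarity matrix $QK^{\top}/\sqrt{d_k}$ is commonly referred to as the self-attention logits, and each row of the softmax output gives the normalized importance with which a token aggregates features from all tokens.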
\begin{figure}[t!]
\centering
\includegraphics[width=8.3cm]{images/intro_fig/intro_figure.png}
\caption{\textbf{Comparison of self-attention maps from pre-trained, na\"{\i}vely fine-tuned, and GTA-trained models.} The self-attention maps of the multiple heads are aggregated by taking their maximum values and are visualized in red. Each column shows the attention maps from the models that are pre-trained, fine-tuned, and fine-tuned with GTA on 15\% and 100\% of the training data, respectively. GTA shows that it is capable of fully leveraging the well-trained representations learned by the upstream task.}
\vspace{-10pt}
\label{fig1_intro}
\end{figure}
When training data is scarce, transfer learning (TL) has been considered as the de-facto paradigm in practice. Pre-trained models, which have been trained with large-scale datasets, have enabled faster training and high generalization performance in TL scenarios. Various TL techniques have been proposed to effectively learn target tasks by utilizing well-trained representations transferred from pre-trained models <|cite_start|> (Reference: Explicit Inductive Bias for Transfer Learning with Convolutional Networks: In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We show the benefit of having an explicit inductive bias towards the initial model, and we eventually recommend a simple $L^2$ penalty with the pre-trained model being a reference as the baseline of penalty for transfer learning tasks.) <|cite_end|> <|cite_start|> (Reference: Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning: Before sufficient training data is available, fine-tuning neural networks pre-trained on large-scale datasets substantially outperforms training from random initialization. However, fine-tuning methods suffer from two dilemmas, catastrophic forgetting and negative transfer. While several methods with explicit attempts to overcome catastrophic forgetting have been proposed, negative transfer is rarely delved into. In this paper, we launch an in-depth empirical investigation into negative transfer in fine-tuning and find that, for the weight parameters and feature representations, transferability of their spectral components is diverse. For safe transfer learning, we present Batch Spectral Shrinkage (BSS), a novel regularization approach to penalizing smaller singular values so that untransferable spectral components are suppressed. BSS is orthogonal to existing fine-tuning methods and is readily pluggable to them. Experimental results show that BSS can significantly enhance the performance of representative methods, especially with limited training data.) <|cite_end|> <|cite_start|> (Reference: Co-tuning for transfer learning: Fine-tuning pre-trained deep neural networks (DNNs) to a target dataset, also known as transfer learning, is widely used in computer vision and NLP. Because task-specific layers mainly contain categorical information and categories vary with datasets, practitioners only partially transfer pre-trained models by discarding task-specific layers and fine-tuning bottom layers. However, it is a reckless loss to simply discard task-specific parameters which take up as many as 20% of the total parameters in pre-trained models. 
To fully transfer pre-trained models, we propose a two-step framework named Co-Tuning : (i) learn the relationship between source categories and target categories from the pre-trained model with calibrated predictions; (ii) target labels (one-hot labels), as well as source labels (probabilistic labels) translated by the category relationship, collaboratively supervise the fine-tuning process. A simple instantiation of the framework shows strong empirical results in four visual classification tasks and one NLP classification task, bringing up to 20% relative improvement. While state-of-the-art fine-tuning techniques mainly focus on how to impose regularization when data are not abundant, Co-Tuning works not only in medium-scale datasets (100 samples per class) but also in large-scale datasets (1000 samples per class) where regularization-based methods bring no gains over the vanilla fine-tuning. Co-Tuning relies on a typically valid assumption that the pre-trained dataset is diverse enough, implying its broad application areas.) <|cite_end|> <|cite_start|> (Reference: Three things everyone should know about Vision Transformers: After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy to implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting the accuracy. (2) Fine-tuning the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces the peak memory consumption at fine-tuning time, and allows sharing the majority of weights across tasks. (3) Adding MLP-based patch pre-processing layers improves Bert-like self-supervised training based on patch masking. We evaluate the impact of these design choices using the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.) <|cite_end|>. Recently, self-supervised learning (SSL) has emerged as a promising approach for learning visual representations without using class labels. SSL allows to obtain domain-specific representations by training an unlabeled large-scale dataset related to the target domain of interest, e.g., SSL on large-scale medical images <|cite_start|> (Reference: Big Self-Supervised Models Advance Medical Image Classification: Self-supervised pretraining followed by supervised fine-tuning has seen success in image recognition, especially when labeled examples are scarce, but has received limited attention in medical image analysis. This paper studies the effectiveness of self-supervised learning as a pretraining strategy for medical image classification. We conduct experiments on two distinct tasks: dermatology skin condition classification from digital camera images and multi-label chest X-ray classification, and demonstrate that self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled domain-specific medical images significantly improves the accuracy of medical image classifiers. 
We introduce a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning. Combining our contributions, we achieve an improvement of 6.7% in top-1 accuracy and an improvement of 1.1% in mean AUC on dermatology and chest X-ray classification respectively, outperforming strong supervised baselines pretrained on ImageNet. In addition, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images.) <|cite_end|>. With this advantage, SSL can serve as a powerful alternative to supervised learning (SL) to address the domain discrepancies in various TL scenarios. The ViT architecture has recently proven advantageous for SSL due to its ability to fully leverage large-scale datasets. In particular, some studies have shown high TL performance by utilizing accurate object-centric representation features, which can also be helpful for semantic segmentation <|cite_start|> (Reference: Emerging Properties in Self-Supervised Vision Transformers: In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.) <|cite_end|> <|cite_start|> (Reference: Image BERT Pre-training with Online Tokenizer: ) <|cite_end|>.
When applying commonly used TL techniques to ViT, the object-centric representations from well-trained models may deteriorate. We experimentally confirmed that the quality of well-trained features deteriorates after fine-tuning based on the visualization of self-attention maps from na\"{\i}vely fine-tuned ViT models, and assessed the influence of the amount of training data (see Figure~\ref{fig1_intro}). Through the self-attention maps, we can visually see which image tokens are particularly attended to perform the target task. As shown in Figure~\ref{fig1_intro}, the visualization results indicate that ViT trained with basic fine-tuning tends to learn shortcuts, e.g., the features corresponding to the background (i.e., non-object area). Such shortcut learning is an undesirable behavior due to the correlation between objects and background in few-shot settings, which hinders generalization <|cite_start|> (Reference: Rectify ViT Shortcut Learning by Visual Saliency: Shortcut learning is common but harmful to deep learning models, leading to degenerated feature representations and consequently jeopardizing the model's generalizability and interpretability. However, shortcut learning in the widely used Vision Transformer framework is largely unknown. Meanwhile, introducing domain-specific knowledge is a major approach to rectifying the shortcuts, which are predominated by background related factors. For example, in the medical imaging field, eye-gaze data from radiologists is an effective human visual prior knowledge that has the great potential to guide the deep learning models to focus on meaningful foreground regions of interest. However, obtaining eye-gaze data is time-consuming, labor-intensive and sometimes even not practical. In this work, we propose a novel and effective saliency-guided vision transformer (SGT) model to rectify shortcut learning in ViT with the absence of eye-gaze data. Specifically, a computational visual saliency model is adopted to predict saliency maps for input image samples. Then, the saliency maps are used to distil the most informative image patches. In the proposed SGT, the self-attention among image patches focus only on the distilled informative ones. Considering this distill operation may lead to global information lost, we further introduce, in the last encoder layer, a residual connection that captures the self-attention across all the image patches. The experiment results on four independent public datasets show that our SGT framework can effectively learn and leverage human prior knowledge without eye gaze data and achieves much better performance than baselines. Meanwhile, it successfully rectifies the harmful shortcut learning and significantly improves the interpretability of the ViT model, demonstrating the promise of transferring human prior knowledge derived visual saliency in rectifying shortcut learning) <|cite_end|> <|cite_start|> (Reference: Rectifying the Shortcut Learning of Background for Few-Shot Learning: The category gap between training and evaluation has been characterised as one of the main obstacles to the success of Few-Shot Learning (FSL). In this paper, we for the first time empirically identify image background, common in realistic images, as a shortcut knowledge helpful for in-class classification but ungeneralizable beyond training categories in FSL. A novel framework, COSOC, is designed to tackle this problem by extracting foreground objects in images at both training and evaluation without any extra supervision. 
Extensive experiments carried on inductive FSL tasks demonstrate the effectiveness of our approaches.) <|cite_end|>. Even with a relatively sufficient amount of training data, ViT still focuses on non-object regions due to its low inductive bias. Motivated by this observation, we hypothesize that TL performance can be improved by preventing the attention quality of pre-trained SSL models from degrading during fine-tuning.
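As a concrete illustration of how such maps can be obtained, the following minimal PyTorch-style sketch aggregates the multi-head attention of the class token into a single spatial map; taking the maximum over heads follows the aggregation used for Figure~\ref{fig1_intro}, whereas the min-max normalization and the choice of layer are illustrative assumptions made purely for display.
\begin{verbatim}
import torch

def cls_attention_map(attn, grid_h, grid_w):
    # attn: (heads, tokens, tokens) softmax attention of one MSA layer,
    # where token 0 is the [CLS] token and the remaining are patch tokens.
    cls_to_patches = attn[:, 0, 1:]             # (heads, num_patches)
    agg = cls_to_patches.max(dim=0).values      # aggregate heads with max
    agg = (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)
    return agg.reshape(grid_h, grid_w)          # spatial map for visualization
\end{verbatim}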
In this paper, to address this issue, we propose the Guided Transfer of spatial Attention (GTA) method, which effectively leverages pre-trained knowledge containing discriminative attention to enhance the TL performance of ViT, even with the limited size of the training dataset. Specifically, we explicitly regularize the self-attention logits of a downstream network (i.e., a target network) through a simple squared $L_2$ distance. Using various benchmark datasets, we compare our proposed GTA with existing TL methods including a method specifically designed for ViT <|cite_start|> (Reference: Three things everyone should know about Vision Transformers: After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy to implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting the accuracy. (2) Fine-tuning the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces the peak memory consumption at fine-tuning time, and allows sharing the majority of weights across tasks. (3) Adding MLP-based patch pre-processing layers improves Bert-like self-supervised training based on patch masking. We evaluate the impact of these design choices using the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.) <|cite_end|>to demonstrate its superiority over comparison targets.
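To make this guidance term concrete, a minimal PyTorch-style sketch is given below; the selection of layers to guide, the use of a mean-squared penalty, and the loss weight are illustrative assumptions rather than a full description of our implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def attention_logits(x, w_q, w_k, num_heads):
    # Pre-softmax self-attention logits of one MSA layer.
    # x: (batch, tokens, dim); w_q, w_k: (dim, dim) projection weights.
    b, n, d = x.shape
    hd = d // num_heads
    q = (x @ w_q).reshape(b, n, num_heads, hd).transpose(1, 2)
    k = (x @ w_k).reshape(b, n, num_heads, hd).transpose(1, 2)
    return q @ k.transpose(-2, -1) / hd ** 0.5  # (batch, heads, tokens, tokens)

def gta_loss(task_logits, labels, target_attn, source_attn, lam=1.0):
    # Task loss plus squared L2 (here mean-squared) guidance that keeps the
    # target model's attention logits close to those of the frozen source.
    task = F.cross_entropy(task_logits, labels)
    guide = sum(F.mse_loss(t, s.detach())
                for t, s in zip(target_attn, source_attn))
    return task + lam * guide
\end{verbatim}
In practice, the two lists of attention-logit tensors would be collected from the trainable target network and from the frozen pre-trained source network for the same input image.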
To evaluate the effectiveness and importance of guiding self-attention, we compare the performance of guiding other output features from ViT, e.g., outputs of MSA layers or transformer blocks. In addition, we experimentally evaluate whether we can expect a performance boost when GTA is used in conjunction with TransMix <|cite_start|> (Reference: TransMix: Attend to Mix for Vision Transformers: Mixup-based augmentation has been found to be effective for generalizing models during training, especially for Vision Transformers (ViTs) since they can easily overfit. However, previous mixup-based methods have an underlying prior knowledge that the linearly interpolated ratio of targets should be kept the same as the ratio proposed in input interpolation. This may lead to a strange phenomenon that sometimes there is no valid object in the mixed image due to the random process in augmentation but there is still response in the label space. To bridge such gap between the input and label spaces, we propose TransMix, which mixes labels based on the attention maps of Vision Transformers. The confidence of the label will be larger if the corresponding input image is weighted higher by the attention map. TransMix is embarrassingly simple and can be implemented in just a few lines of code without introducing any extra parameters and FLOPs to ViT-based models. Experimental results show that our method can consistently improve various ViT-based models at scales on ImageNet classification. After pre-trained with TransMix on ImageNet, the ViT-based models also demonstrate better transferability to semantic segmentation, object detection and instance segmentation. TransMix also exhibits to be more robust when evaluating on 4 different benchmarks. Code will be made publicly available at https://github.com/Beckschen/TransMix.) <|cite_end|>, a label-mixing augmentation method specifically designed for ViT based on attention scores. It differs from Mixup <|cite_start|> (Reference: mixup: Beyond Empirical Risk Minimization: Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.) <|cite_end|>and CutMix <|cite_start|> (Reference: CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features: Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. 
Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch .) <|cite_end|>which determine augmented labels based on randomly sampled mixing coefficients between two images. Finally, we evaluate the factors that may affect the performance of GTA including the use of SL as a guide model.
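For reference, and using notation of our own rather than that of the cited works, Mixup constructs a training pair as
\[
\tilde{x}=\lambda x_A+(1-\lambda)x_B,\qquad \tilde{y}=\lambda y_A+(1-\lambda)y_B,\qquad \lambda\sim\mathrm{Beta}(\alpha,\alpha),
\]
CutMix instead pastes a rectangular region of one image onto the other and sets $\lambda$ to the corresponding area ratio, while TransMix, roughly speaking, keeps the CutMix input mixing but recomputes the label coefficient from the attention that the network assigns to the pasted region.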
Our main contributions can be summarized as follows:
\begin{itemize}
\item We propose a simple yet effective TL technique for ViT named GTA. GTA improves performance by explicitly guiding the self-attention logits, a core component of the MSA layer.
\item We demonstrate that as the amount of training data decreases, the likelihood of self-attention deviating from the pre-trained model and concentrating on non-object regions increases. Our experimental results show the critical importance of guiding self-attention during ViT training in TL settings, especially when the amount of training data is limited.
\end{itemize}
Related Work
\begin{figure*}[h!]
\centering
\includegraphics[width=14.0cm]{images/main_method_fig/method_figure.png}
\caption{\textbf{The overall pipeline of the proposed GTA.} An image is first fed into both the frozen source model and the trainable target model. By minimizing the $L_2$ distance between the attention logits of the two models, the target model is optimized for the current task while being guided by the source model to focus on the image tokens that require attention.}
\vspace{-10pt}
\label{fig_method}
\end{figure*}
\paragraph{Transfer learning.}
TL is the most common and popular method in deep learning that can be applied to various downstream tasks <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|>. It not only improves performance but also ensures fast convergence of training by utilizing pre-trained models <|cite_start|> (Reference: Rethinking ImageNet Pre-training: We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data---a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks and we expect these discoveries will encourage people to rethink the current de facto paradigm of `pre-training and fine-tuning' in computer vision.) <|cite_end|>. Some studies have proposed methods to exploit the pre-trained knowledge and improve performance by regularizing features <|cite_start|> (Reference: DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks: Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. 
To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in an supervised learning manner. We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP. The experiment results show that our proposed method outperforms these baselines with higher accuracy for new tasks.) <|cite_end|> <|cite_start|> (Reference: Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning: Before sufficient training data is available, fine-tuning neural networks pre-trained on large-scale datasets substantially outperforms training from random initialization. However, fine-tuning methods suffer from two dilemmas, catastrophic forgetting and negative transfer. While several methods with explicit attempts to overcome catastrophic forgetting have been proposed, negative transfer is rarely delved into. In this paper, we launch an in-depth empirical investigation into negative transfer in fine-tuning and find that, for the weight parameters and feature representations, transferability of their spectral components is diverse. For safe transfer learning, we present Batch Spectral Shrinkage (BSS), a novel regularization approach to penalizing smaller singular values so that untransferable spectral components are suppressed. BSS is orthogonal to existing fine-tuning methods and is readily pluggable to them. Experimental results show that BSS can significantly enhance the performance of representative methods, especially with limited training data.) <|cite_end|>. DELTA measures the importance of feature channels in the CNN model and regularizes the channels far from the pre-trained activations to leverage the transferred knowledge <|cite_start|> (Reference: DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks: Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in an supervised learning manner. We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP. 
The experiment results show that our proposed method outperforms these baselines with higher accuracy for new tasks.) <|cite_end|>. BSS shows that small eigenvalues of transfer features cause negative transfer, and penalizing small eigenvalues during TL to suppress untransferable spectral components can improve performance <|cite_start|> (Reference: Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning: Before sufficient training data is available, fine-tuning neural networks pre-trained on large-scale datasets substantially outperforms training from random initialization. However, fine-tuning methods suffer from two dilemmas, catastrophic forgetting and negative transfer. While several methods with explicit attempts to overcome catastrophic forgetting have been proposed, negative transfer is rarely delved into. In this paper, we launch an in-depth empirical investigation into negative transfer in fine-tuning and find that, for the weight parameters and feature representations, transferability of their spectral components is diverse. For safe transfer learning, we present Batch Spectral Shrinkage (BSS), a novel regularization approach to penalizing smaller singular values so that untransferable spectral components are suppressed. BSS is orthogonal to existing fine-tuning methods and is readily pluggable to them. Experimental results show that BSS can significantly enhance the performance of representative methods, especially with limited training data.) <|cite_end|>. Another method of exploiting prior knowledge is weight-based regularization, which controls the weight changes during downstream training <|cite_start|> (Reference: Explicit Inductive Bias for Transfer Learning with Convolutional Networks: In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We show the benefit of having an explicit inductive bias towards the initial model, and we eventually recommend a simple $L^2$ penalty with the pre-trained model being a reference as the baseline of penalty for transfer learning tasks.) <|cite_end|>. $L_2$ regularization penalizes changes in model weights, and $L_2$-SP utilizes $L_2$ constraints on the weights by using the pre-trained model as the starting point to leverage the learned inductive bias <|cite_start|> (Reference: Explicit Inductive Bias for Transfer Learning with Convolutional Networks: In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. 
However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We show the benefit of having an explicit inductive bias towards the initial model, and we eventually recommend a simple $L^2$ penalty with the pre-trained model being a reference as the baseline of penalty for transfer learning tasks.) <|cite_end|>. Co-tuning <|cite_start|> (Reference: Co-tuning for transfer learning: Fine-tuning pre-trained deep neural networks (DNNs) to a target dataset, also known as transfer learning, is widely used in computer vision and NLP. Because task-specific layers mainly contain categorical information and categories vary with datasets, practitioners only partially transfer pre-trained models by discarding task-specific layers and fine-tuning bottom layers. However, it is a reckless loss to simply discard task-specific parameters which take up as many as 20% of the total parameters in pre-trained models. To fully transfer pre-trained models, we propose a two-step framework named Co-Tuning : (i) learn the relationship between source categories and target categories from the pre-trained model with calibrated predictions; (ii) target labels (one-hot labels), as well as source labels (probabilistic labels) translated by the category relationship, collaboratively supervise the fine-tuning process. A simple instantiation of the framework shows strong empirical results in four visual classification tasks and one NLP classification task, bringing up to 20% relative improvement. While state-of-the-art fine-tuning techniques mainly focus on how to impose regularization when data are not abundant, Co-Tuning works not only in medium-scale datasets (100 samples per class) but also in large-scale datasets (1000 samples per class) where regularization-based methods bring no gains over the vanilla fine-tuning. Co-Tuning relies on a typically valid assumption that the pre-trained dataset is diverse enough, implying its broad application areas.) <|cite_end|>has shown impressive performance improvements by exploiting the label relationship between the upstream and downstream tasks. However, in this work, to ensure ease of implementation and scalability, we only focus on methods that do not require additional data <|cite_start|> (Reference: Co-tuning for transfer learning: Fine-tuning pre-trained deep neural networks (DNNs) to a target dataset, also known as transfer learning, is widely used in computer vision and NLP. Because task-specific layers mainly contain categorical information and categories vary with datasets, practitioners only partially transfer pre-trained models by discarding task-specific layers and fine-tuning bottom layers. However, it is a reckless loss to simply discard task-specific parameters which take up as many as 20% of the total parameters in pre-trained models. To fully transfer pre-trained models, we propose a two-step framework named Co-Tuning : (i) learn the relationship between source categories and target categories from the pre-trained model with calibrated predictions; (ii) target labels (one-hot labels), as well as source labels (probabilistic labels) translated by the category relationship, collaboratively supervise the fine-tuning process. 
A simple instantiation of the framework shows strong empirical results in four visual classification tasks and one NLP classification task, bringing up to 20% relative improvement. While state-of-the-art fine-tuning techniques mainly focus on how to impose regularization when data are not abundant, Co-Tuning works not only in medium-scale datasets (100 samples per class) but also in large-scale datasets (1000 samples per class) where regularization-based methods bring no gains over the vanilla fine-tuning. Co-Tuning relies on a typically valid assumption that the pre-trained dataset is diverse enough, implying its broad application areas.) <|cite_end|>or pre-processing steps for training <|cite_start|> (Reference: DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks: Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in an supervised learning manner. We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP. The experiment results show that our proposed method outperforms these baselines with higher accuracy for new tasks.) <|cite_end|>. While many studies on TL have focused on CNNs, it is shown that fine-tuning only the MSA layers can improve performance compared to full fine-tuning <|cite_start|> (Reference: Three things everyone should know about Vision Transformers: After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy to implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting the accuracy. (2) Fine-tuning the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces the peak memory consumption at fine-tuning time, and allows sharing the majority of weights across tasks. (3) Adding MLP-based patch pre-processing layers improves Bert-like self-supervised training based on patch masking. We evaluate the impact of these design choices using the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.) <|cite_end|>.
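To make the weight-based regularizers above concrete, $L_2$-SP can be written, roughly and in our own notation, as
\[
\Omega(w)=\frac{\alpha}{2}\left\| w_{\mathcal{S}}-w_{\mathcal{S}}^{0}\right\|_2^2+\frac{\beta}{2}\left\| w_{\bar{\mathcal{S}}}\right\|_2^2,
\]
where $w_{\mathcal{S}}$ are the parameters shared with the pre-trained network, $w_{\mathcal{S}}^{0}$ are their pre-trained values used as the starting point, $w_{\bar{\mathcal{S}}}$ are the newly added task-specific parameters, and $\alpha,\beta$ are weighting hyper-parameters; plain $L_2$ regularization corresponds to replacing the reference point $w_{\mathcal{S}}^{0}$ with zero.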
\paragraph{Self-supervised learning.}
SSL has received considerable attention due to its ability to learn meaningful representations without requiring human annotations <|cite_start|> (Reference: Momentum Contrast for Unsupervised Visual Representation Learning: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.) <|cite_end|> <|cite_start|> (Reference: A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.) <|cite_end|> <|cite_start|> (Reference: An Empirical Study of Training Self-Supervised Vision Transformers: This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). While the training recipes for standard convolutional networks have been highly mature and robust, the recipes for ViT are yet to be built, especially in the self-supervised scenarios where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and it can be hidden by apparently good results. We reveal that these results are indeed partial failure, and they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. 
We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.) <|cite_end|> <|cite_start|> (Reference: Bootstrap Your Own Latent-a New Approach to Self-Supervised Learning: We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches $74.3\%$ top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and $79.6\%$ with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.) <|cite_end|> <|cite_start|> (Reference: Emerging Properties in Self-Supervised Vision Transformers: In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.) <|cite_end|> <|cite_start|> (Reference: Image BERT Pre-training with Online Tokenizer: ) <|cite_end|> <|cite_start|> (Reference: Mugs: A Multi-Granular Self-Supervised Learning Framework: In self-supervised learning, multi-granular features are heavily desired though rarely investigated, as different downstream tasks (e.g., general and fine-grained classification) often require different or multi-granular features, e.g.~fine- or coarse-grained one or their mixture. In this work, for the first time, we propose an effective MUlti-Granular Self-supervised learning (Mugs) framework to explicitly learn multi-granular visual features. Mugs has three complementary granular supervisions: 1) an instance discrimination supervision (IDS), 2) a novel local-group discrimination supervision (LGDS), and 3) a group discrimination supervision (GDS). IDS distinguishes different instances to learn instance-level fine-grained features. LGDS aggregates features of an image and its neighbors into a local-group feature, and pulls local-group features from different crops of the same image together and push them away for others. 
It provides complementary instance supervision to IDS via an extra alignment on local neighbors, and scatters different local-groups separately to increase discriminability. Accordingly, it helps learn high-level fine-grained features at a local-group level. Finally, to prevent similar local-groups from being scattered randomly or far away, GDS brings similar samples close and thus pulls similar local-groups together, capturing coarse-grained features at a (semantic) group level. Consequently, Mugs can capture three granular features that often enjoy higher generality on diverse downstream tasks over single-granular features, e.g.~instance-level fine-grained features in contrastive learning. By only pretraining on ImageNet-1K, Mugs sets new SoTA linear probing accuracy 82.1$\%$ on ImageNet-1K and improves previous SoTA by $1.1\%$. It also surpasses SoTAs on other tasks, e.g. transfer learning, detection and segmentation.) <|cite_end|> <|cite_start|> (Reference: Masked Autoencoders Are Scalable Vision Learners: This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.) <|cite_end|> <|cite_start|> (Reference: Masked Siamese Networks for Label-Efficient Learning: We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. Our code is publicly available.) <|cite_end|> <|cite_start|> (Reference: How Well Do Self-Supervised Models Transfer?: Self-supervised visual representation learning has seen huge progress recently, but no large scale evaluation has compared the many models now available. 
We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction. We compare their performance to a supervised baseline and show that on most tasks the best self-supervised models outperform supervision, confirming the recently observed trend in the literature. We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition, but increasingly less so for few-shot, object detection and dense prediction. No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved. Our analysis of features suggests that top self-supervised learners fail to preserve colour information as well as supervised alternatives, but tend to induce better classifier calibration, and less attentive overfitting than supervised learners.) <|cite_end|>. This is accomplished by engaging in self-imposed pretext tasks such as contrastive learning | [
"<|reference_start|> Image Transformer: Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art. <|reference_end|>",
"<|reference_start|> Rectifying the Shortcut Learning of Background for Few-Shot Learning: The category gap between training and evaluation has been characterised as one of the main obstacles to the success of Few-Shot Learning (FSL). In this paper, we for the first time empirically identify image background, common in realistic images, as a shortcut knowledge helpful for in-class classification but ungeneralizable beyond training categories in FSL. A novel framework, COSOC, is designed to tackle this problem by extracting foreground objects in images at both training and evaluation without any extra supervision. Extensive experiments carried on inductive FSL tasks demonstrate the effectiveness of our approaches. <|reference_end|>",
"<|reference_start|> Three things everyone should know about Vision Transformers: After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy to implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting the accuracy. (2) Fine-tuning the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces the peak memory consumption at fine-tuning time, and allows sharing the majority of weights across tasks. (3) Adding MLP-based patch pre-processing layers improves Bert-like self-supervised training based on patch masking. We evaluate the impact of these design choices using the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets. <|reference_end|>",
"<|reference_start|> Rethinking ImageNet Pre-training: We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data---a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks and we expect these discoveries will encourage people to rethink the current de facto paradigm of `pre-training and fine-tuning' in computer vision. <|reference_end|>"
] | [
14,
28,
29,
34
] | {"<|multi_cite_1_1|>": "arxiv-298443", "<|multi_cite_1_2|>": "arxiv-331363", "<|multi_cite_1_4|>": "arxiv-413217", "<|multi_cite_1_5|>": "arxiv-329937", "<|multi_cite_1_6|>": "arxiv-317712", "<|multi_cite_1_7|>": "arxiv-381947", "<|multi_cite_2_1|>": "arxiv-413217", "<|multi_cite_2_2|>": "arxiv-329937", "<|multi_cite_2_3|>": "arxiv-381947", "<|multi_cite_2_4|>": "arxiv-317712", "<|multi_cite_3_1|>": "arxiv-329937", "<|multi_cite_3_2|>": "arxiv-381947", "<|multi_cite_3_3|>": "arxiv-317712", "<|multi_cite_4_1|>": "ss-738929", "<|multi_cite_4_2|>": "arxiv-148539", "<|multi_cite_4_3|>": "arxiv-388770", "<|cite_5|>": "arxiv-298443", "<|cite_6|>": "arxiv-389769", "<|cite_7|>": "arxiv-399002", "<|cite_8|>": "arxiv-399002", "<|multi_cite_9_2|>": "arxiv-147296", "<|multi_cite_9_3|>": "ss-1290322", "<|multi_cite_9_4|>": "ss-688193", "<|multi_cite_9_5|>": "arxiv-406571", "<|cite_10|>": "arxiv-314963", "<|multi_cite_11_1|>": "arxiv-337689", "<|multi_cite_11_2|>": "ss-678770", "<|multi_cite_12_1|>": "arxiv-427789", "<|multi_cite_12_2|>": "arxiv-355467", "<|cite_13|>": "arxiv-406571", "<|cite_14|>": "arxiv-381930", "<|cite_15|>": "arxiv-138178", "<|cite_16|>": "arxiv-203876", "<|multi_cite_17_2|>": "arxiv-52559", "<|cite_18|>": "arxiv-181375", "<|multi_cite_19_1|>": "arxiv-188988", "<|multi_cite_19_2|>": "ss-1290322", "<|cite_20|>": "arxiv-188988", "<|cite_21|>": "ss-1290322", "<|multi_cite_22_2|>": "arxiv-147296", "<|cite_24|>": "arxiv-147296", "<|cite_25|>": "ss-688193", "<|cite_26|>": "ss-688193", "<|cite_27|>": "arxiv-188988", "<|cite_28|>": "arxiv-406571", "<|multi_cite_29_1|>": "arxiv-234041", "<|multi_cite_29_2|>": "arxiv-248169", "<|multi_cite_29_3|>": "arxiv-332309", "<|multi_cite_29_4|>": "ss-842786", "<|multi_cite_29_5|>": "arxiv-337689", "<|multi_cite_29_6|>": "ss-678770", "<|multi_cite_29_7|>": "arxiv-408697", "<|multi_cite_29_8|>": "arxiv-380525", "<|multi_cite_29_9|>": "arxiv-413231", "<|multi_cite_29_10|>": "arxiv-306053", "<|multi_cite_30_1|>": "arxiv-248169", "<|multi_cite_30_2|>": "arxiv-234041", "<|multi_cite_31_1|>": "arxiv-337689", "<|multi_cite_31_2|>": "ss-842786", "<|cite_32|>": "arxiv-380525", "<|multi_cite_33_1|>": "ss-678770", "<|multi_cite_33_2|>": "arxiv-408697", "<|multi_cite_33_3|>": "arxiv-413231", "<|cite_34|>": "ss-678770", "<|cite_35|>": "arxiv-74282"} |
2312.03787 | <|paper_start|> Title: Detection and Mitigation of Position Spoofing Attacks on Cooperative UAV Swarm Formations
Abstract: Detection and Mitigation of Position Spoofing Attacks on Cooperative UAV Swarm Formations: Detecting spoofing attacks on the positions of unmanned aerial vehicles (UAVs) within a swarm is challenging. Traditional methods relying solely on individually reported positions and pairwise distance measurements are ineffective in identifying the misbehavior of malicious UAVs. This paper presents a novel systematic structure designed to detect and mitigate spoofing attacks in UAV swarms. We formulate the problem of detecting malicious UAVs as a localization feasibility problem, leveraging the reported positions and distance measurements. To address this problem, we develop a semidefinite relaxation (SDR) approach, which reformulates the non-convex localization problem into a convex and tractable semidefinite program (SDP). Additionally, we propose two innovative algorithms that leverage the proximity of neighboring UAVs to identify malicious UAVs effectively. Simulations demonstrate the superior performance of our proposed approaches compared to existing benchmarks. Our methods exhibit robustness across various swarm networks, showcasing their effectiveness in detecting and mitigating spoofing attacks. {\blue Specifically, the detection success rate is improved by up to 65\%, 55\%, and 51\% against distributed, collusion, and mixed attacks, respectively, compared to the benchmarks.}
Introduction
Recently, there has been a widespread utilization of unmanned aerial vehicles (UAVs) <|cite_start|> (Reference: Joint Optimization of Trajectory, Propulsion, and Thrust Powers for Covert UAV-on-UAV Video Tracking and Surveillance: Autonomous tracking of suspicious unmanned aerial vehicles (UAVs) by legitimate monitoring UAVs (or monitors) can be crucial to public safety and security. It is non-trivial to optimize the trajectory of a monitor while conceiving its monitoring intention, due to typically non-convex propulsion and thrust power functions. This article presents a novel framework to jointly optimize the propulsion and thrust powers, as well as the 3D trajectory of a solar-powered monitor which conducts covert, video-based, UAV-on-UAV tracking and surveillance. A multi-objective problem is formulated to minimize the energy consumption of the monitor and maximize a weighted sum of distance keeping and altitude changing, which measures the disguising of the monitor. Based on the practical power models of the UAV propulsion, thrust and hovering, and the model of the harvested solar power, the problem is non-convex and intangible for existing solvers. We convexify the propulsion power by variable substitution, and linearize the solar power. With successive convex approximation, the resultant problem is then transformed with tightened constraints and efficiently solved by the proximal difference-of-convex algorithm with extrapolation in polynomial time. The proposed scheme can be also applied online. Extensive simulations corroborate the merits of the scheme, as compared to baseline schemes with partial or no disguising.) <|cite_end|>, including parcel delivery <|cite_start|> (Reference: Decentralized, Privacy-Preserving Routing of Cellular-Connected Unmanned Aerial Vehicles for Joint Goods Delivery and Sensing: Unmanned aerial vehicles (UAVs) have been extensively applied to goods delivery and in-situ sensing. It becomes increasingly probable that multiple UAVs are delivering goods and carrying out sensing tasks at the same time. The destinations of the UAVs are usually required to jointly design their trajectories and sensing selections, leading to privacy concerns for the UAVs. This paper presents a new game-theoretic routing framework for joint goods delivery and sensing of multiple cellular-connected UAVs, where the UAVs minimize their energy consumption and connectivity outage, maximize their sensing reward, and ensure timely goods delivery and trajectory privacy by optimizing their trajectories and sensing task selections in a decentralized manner. The key idea is that we unify routing and sensing in a single task selection process, which is further transformed into routing on a task-time graph. Another important aspect is that we design a non-cooperative potential game for the routing on the task-time graph. A distributed strategy is developed, where each UAV only reports its sensing task selections and withholds its destination information and its best response produced by the Bellman-Ford algorithm. By this means, the destination and trajectory privacy of the UAVs are protected. Simulations show that the new game-theoretic approach can ensure timely delivery and achieve close-to-optimal solutions with significantly lower complexity compared to a centralized brute-force approach.) 
<|cite_end|>, {\blue radio surveillance <|cite_start|> (Reference: Deep reinforcement learning-driven reconfigurable intelligent surface-assisted radio surveillance with a fixed-wing uav: Unmanned aerial vehicles (UAVs) play a critical role in radio surveillance to decipher malicious messages, thanks to their flexibility, mobility, and likely line-of-sight (LoS) to ground targets. Reconfigurable intelligent surfaces (RISs) can potentially create radio surveillance channels towards the UAVs by passively configuring the radio environments without raising suspicion. This paper presents a new deep reinforcement learning (DRL)-driven framework for radio surveillance, where a fixed-wing UAV is employed to acquire the radio fingerprint of a suspicious transmitter (Tx) with the aid of a benign RIS. A new Twin Delayed Deep Deterministic policy gradient (TD3) model is designed to allow the UAV to learn its trajectory and the RIS configuration based on its observed transmit rate of the suspicious Tx, eliminating the need for channel state information to and from the RIS. The novel contributions include the consideration of the fixed-wing UAV, and the action and reward designed to capture the mobility constraint of the UAV. Simulations demonstrate that the new approach offers the UAV monitor an exceptional and reliable radio surveillance capability, while keeping a desired distance from the UAV to the Tx. The use of the RIS allows for significant improvements of over 37% and 59% in the eavesdropping success probability and average eavesdropping rate, respectively.) <|cite_end|> <|cite_start|> (Reference: Effective UAV Navigation for Cellular-Assisted Radio Sensing, Imaging, and Tracking: The paper develops a new cellular-assisted radio surveillance and tracking technique with an Unmanned Aerial Vehicle (UAV) being the mobile receiver and a static cellular ground base station (BS) being the illuminating source. Under the proposed framework, the resolution of the radio surveillance and imaging depends critically on the relative positions (or geometry) between the UAV, BS and target, as well as the instantaneous motions of the UAV and target. A novel UAV navigation law is developed to guarantee that after some time, the range resolution, the azimuth resolution and the distance between the UAV and the moving target will be below some given upper limits. Its mathematically rigorous analysis is presented. Simulations demonstrated the effectiveness of the developed navigation law.) <|cite_end|>}, and rescue missions <|cite_start|> (Reference: Energy Efficient Legitimate Wireless Surveillance of UAV Communications: Unmanned aerial vehicles (UAVs) enhance connectivity and accessibility for civilian and military applications. Criminals or terrorists can potentially use UAVs for committing crimes and terrorism, thus endangering public safety. In this paper, we consider that a legitimate UAV is employed to track flight of suspicious UAVs for preventing safety and security threats. To obtain flight information of the suspicious UAVs, the legitimate UAV intentionally jams the suspicious receiver so as to force the suspicious UAV to reduce its data rate, and hence increase the eavesdropping success. An energy-efficient jamming strategy is proposed for the legitimate UAV to maximize the amount of eavesdropped packets. 
Moreover, a tracking algorithm is developed for the legitimate UAV to track the suspicious flight by comprehensively utilizing eavesdropped packets, angle-of-arrival and received signal strength of the suspicious transmitter's signal. A new simulation framework is implemented to combine the complementary features of optimization toolbox with channel modeling (in MATLAB) and discrete event-driven mobility tracking (in NS3). Moreover, numerical results validate the proposed algorithms in terms of packet eavesdropping rate and tracking accuracy of the suspicious UAVs’ trajectory.) <|cite_end|>. This is due to the affordability and endurance of UAVs, and {\blue their flexibly adjustable positions conducive to line-of-sight (LOS) communications}, facilitated by rapid technological advancements <|cite_start|> (Reference: Trajectory Planning of Cellular-Connected UAV for Communication-Assisted Radar Sensing: Being a key technology for beyond fifth-generation wireless systems, joint communication and radar sensing (JCAS) utilizes the reflections of communication signals to detect foreign objects and deliver situational awareness. A cellular-connected unmanned aerial vehicle (UAV) is uniquely suited to form a mobile bistatic synthetic aperture radar (SAR) with its serving base station (BS) to sense over large areas with superb sensing resolutions at no additional requirement of spectrum. This paper designs this novel BS-UAV bistatic SAR platform, and optimizes the flight path of the UAV to minimize its propulsion energy and guarantee the required sensing resolutions on a series of interesting landmarks. A new trajectory planning algorithm is developed to convexify the propulsion energy and resolution requirements by using successive convex approximation and block coordinate descent. Effective trajectories are obtained with a polynomial complexity. Extensive simulations reveal that the proposed trajectory planning algorithm outperforms significantly its alternative that minimizes the flight distance of cellular-aided sensing missions in terms of energy efficiency and effective consumption fluctuation. The energy saving offered by the proposed algorithm can be as significant as 55%.) <|cite_end|>.
{\blue The varieties of practical needs for UAV swarms further ignite and necessitate the protection of security for UAV swarms <|cite_start|> (Reference: Secrecy Performance of Terrestrial Radio Links Under Collaborative Aerial Eavesdropping: Motivated to understand the increasingly severe threat of unmanned aerial vehicles (UAVs) to the confidentiality of terrestrial radio links, this paper analyzes the ergodic and $\epsilon $ -outage secrecy capacities of the links in the presence of multiple cooperative aerial eavesdroppers flying autonomously in three-dimensional (3D) spaces and exploiting selection combining (SC) or maximal ratio combining (MRC). The “cut-off” density of the eavesdroppers under which the secrecy capacities vanish is identified. By decoupling the analysis of the random trajectories from the random channel fading, closed-form approximations with almost sure convergence to the secrecy capacities are devised. The analysis is extended to study the impact of the oscillator phase noises and finite memories of the aerial eavesdroppers on the secrecy performance of the ground link. Validated by simulations, the cut-off density only depends on the range of the link in the case of SC eavesdropping, while it depends on the flight region of the eavesdroppers in the case of MRC eavesdropping.) <|cite_end|>.
For instance, the reliability of information propagation has been analyzed in large-scale networks, including UAV swarms <|cite_start|> (Reference: Reliability analysis of large-scale adaptive weighted networks: Disconnecting impaired or suspicious nodes and rewiring to those reliable, adaptive networks have the potential to inhibit cascading failures, such as DDoS attack and computer virus. The weights of disconnected links, indicating the workload of the links, can be transferred or redistributed to newly connected links to maintain network operations. Distinctively different from existing studies focused on adaptive unweighted networks, this paper presents a new mean-field model to analyze the reliability of adaptive weighted networks against cascading failures. By taking mean-field approximation, we develop a new continuous-time Markov model to capture the propagations of cascading failures and the rewiring actions that individual nodes can take to bypass failed neighbors. We analyze the stability of the model to identify the critical conditions, under which the cascading failures can be eventually inhibited or would proliferate. The conditions are evaluated under different link weight distributions and rewiring strategies. Our model reveals that preferentially disconnecting suspicious peers with high weights can effectively inhibit virus and failures.) <|cite_end|>.
The connectivity of a UAV swarm has been studied in the presence of jamming attacks from the ground <|cite_start|> (Reference: Ensuring network connectivity of UAV’s performing video reconnaissance: Current and emerging military missions require image-feedback from camera-equipped assets. Motivated by mission scenarios and sensor restrictions, operations may require the collaboration of assets over an ad-hoc network. The development in this paper represents the first efforts to examine the problem of balancing tradeoffs between asset/sensor cone positioning to satisfy mission requirements and network requirements such as maintaining network connectivity. To address the trade-offs between asset positioning and network connectivity, a prioritized task-function based guidance law is developed for a simple scenario containing three assets. One developed task function maintains a communication network by ensuring the distance between the UAVpsilas does not exceed a critical threshold. Additional task-functions enable assets to keep targets of interest in the image cone by regulating image features derived from the camera view. Simulation results are provided to examine the behavior of the assets for different configurations of objects observed by the asset cameras.) <|cite_end|>.}
To enhance reliability and mitigate potential flight collisions, it is crucial to establish formation flight and coordination among UAVs <|cite_start|> (Reference: A Comprehensive Survey on Applications of AI Technologies to Failure Analysis of Industrial Systems: ) <|cite_end|>.
In the formation flight of a UAV swarm, individual UAVs rely on position reports from their peers and their pairwise distance measurements with neighboring UAVs to maintain inter-UAV distances and avoid collisions. Compromised or malicious UAVs can launch position spoofing attacks, potentially leading to catastrophic consequences for the UAV swarm <|cite_start|> (Reference: Poster: May the Swarm Be With You: Sensor Spoofing Attacks Against Drone Swarms: Swarm robotics, particularly drone swarms, are used in various safety-critical tasks. While a lot of attention has been paid to improving swarm control algorithms for improved intelligence, the security implications of various design choices in swarm control algorithms have not been studied. We highlight how an attacker can exploit the vulnerabilities in swarm control algorithms to disrupt drone swarms. Specifically, we show that the attacker can target one swarm member (target drone) through sensor spoofing attacks, and indirectly cause other swarm members (victim drones) to veer off from their course, and potentially resulting in a crash. Our attack cannot be prevented by traditional software security techniques, and it is stealthy in nature as it causes seemingly benign deviations in drone swarms. Our initial results show that spoofing the position of a target drone by 5m is sufficient to cause other drones to crash into a front obstacle. Overall, our attack achieves 76.67% and 93.33% success rate with 5m and 10m spoofing deviation respectively.) <|cite_end|>. A malicious UAV might transmit a deceptive position report, misleading other UAVs while simultaneously concealing its true location, as illustrated in Fig. \ref{F:schema}.
Such conditions can disrupt the control mechanism that maintains swarm formation, resulting in disorders <|cite_start|> (Reference: Detecting Attacks Against Robotic Vehicles: A Control Invariant Approach: Robotic vehicles (RVs), such as drones and ground rovers, are a type of cyber-physical systems that operate in the physical world under the control of computing components in the cyber world. Despite RVs' robustness against natural disturbances, cyber or physical attacks against RVs may lead to physical malfunction and subsequently disruption or failure of the vehicles' missions. To avoid or mitigate such consequences, it is essential to develop attack detection techniques for RVs. In this paper, we present a novel attack detection framework to identify external, physical attacks against RVs on the fly by deriving and monitoring Control Invariants (CI). More specifically, we propose a method to extract such invariants by jointly modeling a vehicle's physical properties, its control algorithm and the laws of physics. These invariants are represented in a state-space form, which can then be implemented and inserted into the vehicle's control program binary for runtime invariant check. We apply our CI framework to eleven RVs, including quadrotor, hexarotor, and ground rover, and show that the invariant check can detect three common types of physical attacks -- including sensor attack, actuation signal attack, and parameter attack -- with very low runtime overhead.) <|cite_end|>.
\begin{figure}[t]
\centering
\includegraphics[width=7.5cm]{schema_uav2.png}
\caption{An illustration of the attack model, where a malicious UAV falsifies its position and broadcasts the fake position to the other benign UAVs.}
\label{F:schema}
\end{figure}
Detecting and identifying malicious UAVs within a swarm is challenging.
This problem amounts to establishing whether there exists a feasible realization of the UAVs' positions that is consistent with all reported positions and pairwise distance measurements.
Such a feasibility problem is non-trivial and non-convex, and has not been studied or addressed in the existing literature.
As delineated in this paper, the problem can be transformed into a convex semidefinite program (SDP), allowing for efficient use of
convex optimization solvers in polynomial time.
However, solving the SDP problem alone does not enable the identification of individual malicious UAVs.
On the one hand, the number of malicious UAVs is typically unknown and needs to be detected.
On the other hand, the effectiveness of SDP can be penalized by the interdependence among the positions of neighboring UAVs.
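For concreteness, the following minimal sketch illustrates how such a feasibility check can be posed as a convex SDP using a Biswas--Ye-style semidefinite lifting of the distance constraints. The CVXPY-based function, its tolerance parameters, and the exact constraint form are illustrative assumptions on our part, not the formulation developed in Section III.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def swarm_positions_feasible(reported_pos, dist_meas, tol_pos=1.0, tol_dist=1.0):
    """SDR check: do positions exist that are consistent with the reported
    positions and the pairwise distance measurements?

    reported_pos : (3, N) array; column i is the reported position hat{x}_i
    dist_meas    : dict {(i, j): hat{r}_ij} for directly connected pairs
    tol_pos      : tolerated deviation from a reported position (metres)
    tol_dist     : tolerated deviation in a distance measurement (metres)
    """
    N = reported_pos.shape[1]
    # Lifted variable Z = [[I_3, X], [X^T, Y]] >= 0, with Y standing in
    # for X^T X (Schur-complement / Biswas-Ye lifting).
    Z = cp.Variable((3 + N, 3 + N), PSD=True)
    X, Y = Z[:3, 3:], Z[3:, 3:]
    cons = [Z[:3, :3] == np.eye(3)]
    for i in range(N):
        xi = reported_pos[:, i]
        # ||x_i - hat{x}_i||^2 = Y_ii - 2 hat{x}_i^T x_i + ||hat{x}_i||^2
        cons.append(Y[i, i] - 2 * xi @ X[:, i] + xi @ xi <= tol_pos ** 2)
    for (i, j), r in dist_meas.items():
        # ||x_i - x_j||^2 = Y_ii - 2 Y_ij + Y_jj
        dij2 = Y[i, i] - 2 * Y[i, j] + Y[j, j]
        cons += [dij2 <= (r + tol_dist) ** 2,
                 dij2 >= max(r - tol_dist, 0.0) ** 2]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
\end{verbatim}
If the relaxed program is infeasible, no position realization is consistent with the reports, i.e., at least one UAV is misreporting; feasibility alone, however, does not reveal which UAV is malicious, which motivates the CDI and E-CDI algorithms below.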
To address these challenges and precisely identify malicious UAVs, this paper proposes two new algorithms: the Cooperative Detection and Identification (CDI) algorithm and the Enhanced CDI (E-CDI) algorithm.
The CDI algorithm initiates its process by creating sets of potentially malicious and benign UAVs. Subsequently, it merges each selected potentially malicious UAV with the benign set, establishing a connected sub-network for the SDP-based position feasibility check. If all the neighbors of a selected UAV are themselves malicious, the CDI algorithm may misjudge that UAV as malicious, since localizing a sub-network whose entire neighborhood is malicious is inherently infeasible.
{\blue In contrast} to the CDI algorithm, the E-CDI algorithm conducts an additional localization feasibility check on each individual UAV in the neighborhood of the selected UAV.
By this means, collusion attacks launched by multiple closely located, malicious UAVs can be detected and mitigated.
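As a rough illustration of how the two algorithms iterate around the feasibility check, the following schematic sketch organizes a CDI-style loop with an optional E-CDI-style per-neighbor re-check. The initialization of the suspect set, the sub-network construction, and the function names are our own simplifying assumptions, not the exact procedures specified later in the paper.
\begin{verbatim}
def identify_malicious(uavs, neighbors, suspects, feasible, enhanced=False):
    """Schematic CDI / E-CDI loop around an SDP feasibility oracle.

    uavs      : iterable of UAV indices
    neighbors : dict mapping each UAV to the set of its one-hop neighbors
    suspects  : initial set of potentially malicious UAVs
    feasible  : callable(subset) -> bool, the SDP-based localization
                feasibility check on a connected sub-network of UAVs
    enhanced  : if True, also vet each neighbor individually (E-CDI style)
    """
    benign = set(uavs) - set(suspects)
    malicious = set()
    for k in sorted(suspects):
        if k in benign or k in malicious:
            continue                      # already classified earlier
        # Merge the suspect with the current benign set to form a
        # connected sub-network and test its localization feasibility.
        if feasible(benign | {k}):
            benign.add(k)
            continue
        if not enhanced:
            malicious.add(k)              # CDI: declare the suspect malicious
            continue
        # E-CDI: vet each unclassified neighbor of k individually, so that
        # a benign UAV surrounded by colluding malicious neighbors is not
        # misjudged and the colluding neighbors themselves are exposed.
        for j in sorted(set(neighbors[k]) - benign - malicious):
            if feasible(benign | {j}):
                benign.add(j)
            else:
                malicious.add(j)
        # Re-test the suspect against the (possibly enlarged) benign set.
        if feasible(benign | {k}):
            benign.add(k)
        else:
            malicious.add(k)
    return benign, malicious
\end{verbatim}
In practice, the feasible oracle would be instantiated with an SDR-based check along the lines of the earlier sketch, applied to the corresponding connected sub-network.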
{\blue Compared to the existing relevant works, e.g., <|cite_start|> (Reference: Poster: May the Swarm Be With You: Sensor Spoofing Attacks Against Drone Swarms: Swarm robotics, particularly drone swarms, are used in various safety-critical tasks. While a lot of attention has been paid to improving swarm control algorithms for improved intelligence, the security implications of various design choices in swarm control algorithms have not been studied. We highlight how an attacker can exploit the vulnerabilities in swarm control algorithms to disrupt drone swarms. Specifically, we show that the attacker can target one swarm member (target drone) through sensor spoofing attacks, and indirectly cause other swarm members (victim drones) to veer off from their course, and potentially resulting in a crash. Our attack cannot be prevented by traditional software security techniques, and it is stealthy in nature as it causes seemingly benign deviations in drone swarms. Our initial results show that spoofing the position of a target drone by 5m is sufficient to cause other drones to crash into a front obstacle. Overall, our attack achieves 76.67% and 93.33% success rate with 5m and 10m spoofing deviation respectively.) <|cite_end|> <|cite_start|> (Reference: Smart {GPS: 본 논문에서는 L1주파수대역(1.575GHz)을 사용하는 보안등제어기 내장형 무선 모듈에 적용 가능한 PCB(Printed Circuit Board) 인쇄형 GPS안테나를 제안하였다. 제안된 안테나는 삽입형 급전(insertion feeding)을 가지는 마이크로스트립 패치안테나를 기본구조로 하였다. 특히 임피던스 매칭을 위해 윗면에 좌/우 2개의 비대 칭형 슬롯을 삽입하였으며, 아랫면에는 대역폭 증가와 주파수설정을 위하여 한 쪽 끝이 개방된 슬롯과 단락점(shorting point)을 적용하였다. 측정 결과 설계 목표로 한 GPS주파수 L1대역에서 90%의 방사효율과 4.8dBi 이상의 이득을 얻을 수 있었다.) <|cite_end|> <|cite_start|> (Reference: Distributed 3-D Bearing-Only Orientation Localization: We propose a method to recover the orientation of a set of agents with respect to a global reference frame using local bearing measurements alone. Our method is distributed, does not require prior rotation information, and considers the full 3-D version of the problem. We identify sufficient localizability conditions on the directed graph of measurements, propose an algorithm based on distributed Riemannian gradient descent to recover a localization, and verify our theoretical results with simulations.) <|cite_end|> <|cite_start|> (Reference: Detecting Attacks Against Robotic Vehicles: A Control Invariant Approach: Robotic vehicles (RVs), such as drones and ground rovers, are a type of cyber-physical systems that operate in the physical world under the control of computing components in the cyber world. Despite RVs' robustness against natural disturbances, cyber or physical attacks against RVs may lead to physical malfunction and subsequently disruption or failure of the vehicles' missions. To avoid or mitigate such consequences, it is essential to develop attack detection techniques for RVs. In this paper, we present a novel attack detection framework to identify external, physical attacks against RVs on the fly by deriving and monitoring Control Invariants (CI). More specifically, we propose a method to extract such invariants by jointly modeling a vehicle's physical properties, its control algorithm and the laws of physics. These invariants are represented in a state-space form, which can then be implemented and inserted into the vehicle's control program binary for runtime invariant check. 
We apply our CI framework to eleven RVs, including quadrotor, hexarotor, and ground rover, and show that the invariant check can detect three common types of physical attacks -- including sensor attack, actuation signal attack, and parameter attack -- with very low runtime overhead.) <|cite_end|>, the new contributions of this paper include:
\begin{enumerate}
\item To detect position spoofing attacks, we propose a novel mechanism for malicious UAV detection and identification, where we cast the challenging malicious UAV detection problem as a localization feasibility problem.
\item A semidefinite relaxation (SDR) approach is put forth to transform the non-convex feasibility problem into a convex problem. The presence of malicious UAVs can then be efficiently ascertained by evaluating the feasibility of the convex problem.
\item We develop two iterative algorithms, i.e., CDI and E-CDI, to identify malicious UAVs by leveraging the proximity of neighboring UAVs.
\begin{itemize}
\item
The CDI algorithm dynamically merges selected potentially malicious UAVs into the benign set to form a connected positioning sub-network. This sub-network is used to determine whether the selected UAV is malicious.
\item
The E-CDI algorithm enhances identification efficiency by further assessing each UAV in the neighborhood of a potentially malicious UAV. As a result, collusion attacks launched by multiple closely located, malicious UAVs can be detected.
\end{itemize}
Both algorithms are designed to conclude within a finite number of iterations and exhibit robust performance across various network configurations of UAV swarms.
\end{enumerate}
Extensive simulations demonstrate that the proposed CDI and E-CDI algorithms outperform the benchmark techniques in detecting and identifying malicious UAVs.}
{\blue Under the proposed algorithms, the detection success rate can be improved by up to 65\%, 55\%, and 51\% against distributed, collusion, and mixed attacks, respectively, compared to their benchmarks.}
The rest of this paper is organized as follows. Section II reviews the related works. Section III formulates and convexifies the malicious UAV’s misbehavior detection problem. In Section IV, two efficient iterative algorithms are proposed to identify malicious UAVs. Section V provides numerical results to evaluate the proposed algorithm, followed by conclusions in Section VI.
{\color{black}
\textit{Notation:}
Upper- and lower-case boldface symbols denote matrices and vectors, respectively;
$|\cdot|$ takes the absolute value if a scalar is concerned or the cardinality if a set is concerned; $\left\|\cdot \right\| $ denotes $\ell_2$-norm; $\hat{(\cdot)}$ indicates a reported, noise-corrupted version of $(\cdot)$.
The notation used is collated in Tab.~\ref{table_symbols}.
}
\begin{table}
\centering
\caption{\color{black} Notation and definition.}
\label{table_symbols}
\begin{tabular}{p{0.75cm} p{6.9cm}}
\hline
\textbf{Notation} & \textbf{Definition} \\ \hline
${\cal X}$ & The set of the 3D coordinates
of all the UAVs \\
$N$ & The total number of UAVs \\
$\boldsymbol{x}_{i}$ & The actual position of the $i$-th UAV \\
$\hat{\boldsymbol{x}}_{i}$ & The reported position of the $i$-th UAV \\
$r_{ij}$ & The actual distance between UAVs~$i$ and $j$, $i\neq j$ \\
$\hat{r}_{ij}$ & The reported distance between UAVs~$i$ and $j$, $i\neq j$ \\
$\hat{\alpha}_{ij}$ & An auxiliary variable \\
$\boldsymbol{w}_{i}$ & The noise vector for position measurement of UAV $i$ \\
$w_{ij}$ & The noise in the reported distance measurement between UAVs~$i$ and~$j$, $i\neq j$ \\
$\boldsymbol{I}_{3}$ & The $3\times 3$ identity matrix \\
$\boldsymbol{X}$ & The $3 \times N$ matrix with its $i$-th column being $\boldsymbol{x}_{i}$ \\
$d$ & The communication range for distance measurement \\
$\epsilon$ & A small constant, e.g., $1\times10^{-6}$ \\
$\boldsymbol{e}_{i}$ & The vector whose $i$-th element is one and
the rest are zeros. \\
$\rho_{ij}$ & The indicator of whether UAVs $i$ and $j$ are directly connected. \\
$\boldsymbol{E}_{n}$ & The matrix of the measured and reported Euclidean distances between directly connected UAVs \\
$\boldsymbol{E}_{r}$ & The matrix of the Euclidean distances between directly connected UAVs generated based on the reported positions of the UAVs \\
$\mathbb{N}$ & The set of all $N$ UAVs. \\
$\mathbb{M}$ & The set of malicious
UAVs \\
$\mathbb{B}$ & The set of benign UAVs \\
$\mathbb{N}_{k}$ & The set of the one-hop neighbors of UAV $k$ \\
$R_M$ & The malicious ratio, i.e., the ratio of the number of malicious UAVs to the total number of UAVs in a UAV swarm \\ \hline
\end{tabular}
\end{table}
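To make the notation concrete, the following sketch shows one plausible way to assemble the connectivity indicators $\rho_{ij}$ and the distance matrices $\boldsymbol{E}_{n}$ and $\boldsymbol{E}_{r}$ from the reported positions and measurements; the array layout and the range test are assumptions for illustration.
\begin{verbatim}
import numpy as np

def build_distance_matrices(reported_pos, measured_dist, d):
    """Assemble rho_ij, E_n and E_r for a swarm of N UAVs.

    reported_pos  : (3, N) array; column i is the reported position hat{x}_i
    measured_dist : (N, N) array of pairwise distance measurements hat{r}_ij
                    (entries for out-of-range pairs may be arbitrary)
    d             : communication range for distance measurement
    """
    N = reported_pos.shape[1]
    rho = np.zeros((N, N), dtype=int)  # 1 if UAVs i and j are directly connected
    E_n = np.zeros((N, N))             # measured and reported distances
    E_r = np.zeros((N, N))             # distances implied by reported positions
    for i in range(N):
        for j in range(i + 1, N):
            if measured_dist[i, j] <= d:      # within measurement range
                rho[i, j] = rho[j, i] = 1
                E_n[i, j] = E_n[j, i] = measured_dist[i, j]
                E_r[i, j] = E_r[j, i] = np.linalg.norm(
                    reported_pos[:, i] - reported_pos[:, j])
    return rho, E_n, E_r
\end{verbatim}
A large discrepancy between corresponding entries of $\boldsymbol{E}_{n}$ and $\boldsymbol{E}_{r}$ hints at spoofing on that link, but, as discussed in the Introduction, pairwise comparison alone cannot pinpoint which endpoint is misreporting.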
Related Work
\subsection{Spoofing Attacks on UAVs}
Spoofing attacks on UAVs have been extensively investigated in the recent literature.
However, most works have focused on the direct hijack of the Global Positioning System (GPS) of a specific single UAV. Aiming at identifying fake GPS coordinates due to the hijack of the GPS communication software, the authors of <|cite_start|> (Reference: DeepPOSE: Detecting GPS spoofing attack via deep recurrent neural network: The Global Positioning System (GPS) has become a foundation for most location-based services and navigation systems, such as autonomous vehicles, drones, ships, and wearable devices. However, it is a challenge to verify if the reported geographic locations are valid due to various GPS spoo fi ng tools. Pervasive tools, such as Fake GPS, Lockito, and software-de fi ned radio, enable ordinary users to hijack and report fake GPS coordinates and cheat the monitoring server without being detected. Furthermore, it is also a challenge to get accurate sensor readings on mobile devices because of the high noise level introduced by commercial motion sensors. To this end, we propose DeepPOSE, a deep learning model, to address the noise introduced in sensor readings and detect GPS spoo fi ng attacks on mobile platforms. Our design uses a convolutional and recurrent neural network to reduce the noise, to recover a vehicle's real-time trajectory from multiple sensor inputs. We further propose a novel scheme to map the constructed trajectory from sensor readings onto the Google map, to smartly eliminate the accumulation of errors on the trajectory estimation. The reconstructed trajectory from sensors is then used to detect the GPS spoo fi ng attack. Compared with the existing method, the proposed approach demonstrates a signi fi cantly higher degree of accuracy for detecting GPS spoo fi ng attacks.) <|cite_end|> proposed a convolutional neural network (CNN) integrated with a recurrent neural network (RNN) to predict a vehicle's real-time trajectory based on the data from multiple sensors.
With a similar purpose of handling GPS spoofing attacks, the authors of <|cite_start|> (Reference: GPS: 현재 진행되고 있는 지능형 위치 기반 서비스 연구는 대부분 GPS를 사용하여 개인의 위치를 관리하는 것에 초점이 맞추어져있다. 본 논문에서는 특정 장소로 이동하기 위한 개인적인 경로의 효율적인 학습과 예측 기법이 제안된다. 먼저 이미지 처리 기술을 사용하여 개인적인 경로를 학습하기 위한 기법을 연구하였다. 시간 정보와 공간 정보를 분리하여 처리하는 본 기법은 경로 모델의 구축 정확도와 속도 측면에서 좋은 성능을 보였다. 두 번째로 경로 모델과 사용자가 현재 이동하는 궤적을 표현하는 GPS 좌표배열을 기반으로 사용자의 이동 경로를 예측하는 기법을 제안한다. 본 논문에서 제안하는 기법을 스마트폰상에서 구현하여 평가한 결과, 경로 예측에 대한 정확도는 94%를 보였다.) <|cite_end|> proposed a two-step approach based on data sensed and fused from distributed radar ground stations equipped with a local tracker. The approach consists of spoofing detection and mitigation. In the spoofing detection step, a track-to-track association approach was adopted to detect spoofing attacks with fused data from UAVs and a local tracker. In the mitigation step, the fused data was input to a controller to mitigate the spoofing attack detected. The proposed two-step approach was reported to achieve almost the same accuracy as GPS efficiently.
To enhance the reliability of flight controllers under GPS spoofing attacks, other researchers utilized an extended Kalman filter (EKF)-based approach.
They investigated the impact of GPS spoofing on the EKF estimation and the UAV itself under different levels of attack strength. It was reported that the classic EKF-based approach can tolerate small errors from spoofing attacks, but can be inefficient when the attack intensifies.
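The cited EKF-based defense is not reproduced here; as a generic illustration of the underlying idea, the following sketch applies a standard Kalman-filter innovation (chi-square) test to gate implausible GPS fixes. The state model, matrices, and threshold are illustrative assumptions.
\begin{verbatim}
import numpy as np

def gps_fix_plausible(z, x_pred, P_pred, H, R, gate=7.81):
    """Kalman-filter innovation (chi-square) test for a received GPS fix.

    z      : (3,) received GPS position fix
    x_pred : (n,) predicted state (e.g., position and velocity)
    P_pred : (n, n) predicted state covariance
    H      : (3, n) measurement matrix extracting position from the state
    R      : (3, 3) GPS measurement noise covariance
    gate   : chi-square threshold (7.81 ~ 95% quantile for 3 DoF)
    Returns (accept, nis); accept is False if the fix looks implausible.
    """
    nu = z - H @ x_pred                         # innovation (residual)
    S = H @ P_pred @ H.T + R                    # innovation covariance
    nis = float(nu @ np.linalg.solve(S, nu))    # normalized innovation squared
    return nis <= gate, nis
\end{verbatim}
Small spoofing offsets pass such a gate, which is consistent with the reported observation that EKF-based defenses tolerate small errors but degrade as the attack intensifies.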
Similar works on GPS-related spoofing attacks on a specific single UAV can be found in <|cite_start|> (Reference: Smart {GPS: 본 논문에서는 L1주파수대역(1.575GHz)을 사용하는 보안등제어기 내장형 무선 모듈에 적용 가능한 PCB(Printed Circuit Board) 인쇄형 GPS안테나를 제안하였다. 제안된 안테나는 삽입형 급전(insertion feeding)을 가지는 마이크로스트립 패치안테나를 기본구조로 하였다. 특히 임피던스 매칭을 위해 윗면에 좌/우 2개의 비대 칭형 슬롯을 삽입하였으며, 아랫면에는 대역폭 증가와 주파수설정을 위하여 한 쪽 끝이 개방된 슬롯과 단락점(shorting point)을 적용하였다. 측정 결과 설계 목표로 한 GPS주파수 L1대역에서 90%의 방사효율과 4.8dBi 이상의 이득을 얻을 수 있었다.) <|cite_end|>, and spoofing attacks related to the time-of-arrival (TOA) or time difference-of-arrival (TDOA) can be found in <|cite_start|> (Reference: {GNSS: This paper inspected about the application of transportation and the character of GPS&Galileo using the basis of GNSS. In the paper, searched the present condition of study which applied foreign GNSS in the railway and drew the applicable field which is corresponded with the railway environment. It also described about the embodiment and prospection of system and presented the direction of study on the basis of it.) <|cite_end|>.
The security issue of UAV swarms has attracted increasing attention. In order to mitigate the navigation spoofing attacks on aerial formations, the authors of <|cite_start|> (Reference: Robust Localization for Secure Navigation of UAV Formations Under GNSS Spoofing Attack: Nowadays, aerial formations are frequently employed in outdoor scenarios to cooperatively explore and monitor wide areas of interest. In these applications, the vehicles are often exposed to relevant security vulnerabilities, as, for instance, the alteration of navigation signals from an attacker with map counterfeiting (if not even hijacking) purposes. In this work, we focus on an Unmanned Aerial Vehicle (UAV) formation that monitors an area, wherein navigation spoofing attacks may occur. Letting the UAVs cooperate and exploiting the redundancy in the available sensing information, a distributed procedure is proposed to $i$ ) detect spoofing attacks, and $ii$ ) support the navigation in adverse conditions. The validity of the designed approach is confirmed by numerical results. Aerial vehicles for outdoor operation are generally endowed with inertial measurements, relative ranging, and GNSS sensing capability. In this work, two cascaded estimation algorithms for concurrent GNSS spoofing detection and localization in a multi-UAV scenario is proposed, to attain robust navigation in areas subject to GNSS spoofing attacks. The attack detection leverages on information theoretic tools to provide a practical threshold test by checking the multimodal measurement consistency. The localization procedures exploit a decision logic relying on measurement reliability to combine information sources that are different in nature, for UAV self-localization in both safe and under-attack conditions. Note to Practitioners—Aerial vehicles for outdoor operation are generally endowed with inertial measurements, relative ranging, and GNSS sensing capability. In this work, two cascaded estimation algorithms for concurrent GNSS spoofing detection and localization in a multi-UAV scenario is proposed, to attain robust navigation in areas subject to GNSS spoofing attacks. The attack detection leverages on information theoretic tools to provide a practical threshold test by checking the multimodal measurement consistency. The localization procedures exploit a decision logic relying on measurement reliability to combine information sources that are different in nature, for UAV self-localization in both safe and under-attack conditions.) <|cite_end|> proposed a cascaded estimation algorithm used for concurrent GPS spoofing detection and localization. An attack detection module was based on the consistency of multimodal measurement to realize threshold tests. A localization module was then used for a decision based on remarkable differences between safe and under-attack conditions of UAV self-localization. The cascaded approach can achieve a safe self-localization for a UAV swarm under a spoofing attack. Aiming at solving the GPS spoofing attack in a UAV swarm, the authors of <|cite_start|> (Reference: {SAMA: The purpose of this paper is to consider SAMA(Saudi Arabia Monetary Agency; Saudi central bank) focusing on its charter and outline history. SAMA was...) <|cite_end|> proposed a security-aware monitoring method to monitor the potential malicious UAVs and protect the benign ones from attacks. The method was implemented by the received-signal-strength-indicator (RSSI)-based triangulation.
\subsection{Cooperative Network Localization}
Position-related spoofing attacks undermine the localization of UAVs in a UAV swarm, since a UAV swarm can be considered a cooperative localization network.
SDP, an efficient convex optimization approach, has been extensively applied to cooperative network localization. Employing the SDP, the authors of <|cite_start|> (Reference: Three-Dimensional Cooperative Positioning for Internet of Things Provenance: A large number of Internet of Things (IoT) devices have been interconnected for information collection and exchange. The data are only meaningful if it is captured at the expected location (i.e., the IoT devices or sensors are not removed accidentally or intentionally). This article presents a new algorithm, which cooperatively locates multiple IoT devices deployed in a 3-D space based on pairwise Euclidean distance measurements. When the distance measurement noises are negligible, a new feasibility problem of rank-3 variables is formulated. We solve the problem using the difference-of-convex (DC) programming to preserve the rank-3 constraints, rather than relaxing the constraints, using semidefinite relaxation (SDR). When the distance measurements are corrupted by additive noises and nonlight-of-sight (NLOS) propagation, a maximum-likelihood estimation (MLE) problem is formulated and transformed to a DC program solved with the rank-3 constraints preserved. Simulation results indicate that the proposed approach can achieve satisfactory accuracy results with a low complexity and strong robustness to the irregular topology, poor connectivity, and measurement errors, as compared to existing SDR-based alternatives.) <|cite_end|> proposed a novel difference-of-convex (DC)-based algorithm to achieve accurate cooperative localization.
The authors of <|cite_start|> (Reference: Bearing-based Relative Localization for Robotic Swarm with Partially Mutual Observations: Mutual localization provides a consensus of reference frame as an essential basis for cooperation in multirobot systems. Previous works have developed certifiable and robust solvers for relative transformation estimation between each pair of robots. However, recovering relative poses for robotic swarm with partially mutual observations is still an unexploited problem. In this paper, we present a complete algorithm for it with optimality, scalability and robustness. Firstly, we fuse all odometry and bearing measurements in a unified minimization problem among the Stiefel manifold. Furthermore, we relax the original non-convex problem into a semi-definite programming (SDP) problem with a strict tightness guarantee. Then, to hold the exactness in noised cases, we add a convex (linear) rank cost and apply a convex iteration algorithm. We compare our approach with local optimization methods on extensive simulations with different robot amounts under various noise levels to show our global optimality and scalability advantage. Finally, we conduct real-world experiments to show the practicality and robustness.) <|cite_end|> proposed an SDP-based method to estimate the relative transformation of a robot in a cooperative robotic swarm. The SDP-based method could achieve global optimality and scalability. The authors of <|cite_start|> (Reference: Efficient Scheduling in Space-Air-Ground Integrated Localization Networks: High accuracy and seamless position information formulates the basis of many modern wireless applications, such as the Internet of Things (IoT) and intelligent transportation systems (ITSs). In this article, aiming at the ground user equipment (UE) those in the “blind spots,” where only limited navigation signals are provided, the temporary aerial-aided “anchors” such as the unmanned aerial vehicles (UAVs) are introduced as alternating solutions. We first give the general fundamental limits of the three-dimensional space–air–ground-integrated localization networks (SAGILNs) using both time and angle measurements. Unlike most existing investigations, we treat aerial nodes as “agents” whose positions are not known beforehand. We then try to formulate an efficient scheduling strategy, where proper network behaviors, including the resource optimization and UAV deployment, are provided. We find that the proposed scheduling problems could be formulated as standard semidefinite programming (SDP) problems and solved by off-the-shelf solvers. Numerical results are provided to validate our analysis. The proposed methods and analyses provide meaningful insights for performance benchmarks for the implementation of SAGILN.) <|cite_end|> developed an efficient SDP-based scheduling strategy to optimize UAV deployment in intelligent transportation cooperative networks. More SDP-based cooperative localization techniques can be found in <|cite_start|> (Reference: Distributed 3-D Bearing-Only Orientation Localization: We propose a method to recover the orientation of a set of agents with respect to a global reference frame using local bearing measurements alone. Our method is distributed, does not require prior rotation information, and considers the full 3-D version of the problem. 
We identify sufficient localizability conditions on the directed graph of measurements, propose an algorithm based on distributed Riemannian gradient descent to recover a localization, and verify our theoretical results with simulations.) <|cite_end|> <|cite_start|> (Reference: 3-{D: In grinding, it is required to clarify the processing phenomenon and the process to be made intelligent. The acquisition of detailed tool information is necessary for the purposes. Consequently, not a small number of measurements for wheel working face has been reported. This measurement is expected to be high-speed and performed on the machine. However, there is no measuring system, which is suitable for these purposes. Therefore, this paper describes a high-speed 3-D measuring system for wheel working face on the machine. In this system, a sensing head similar to a laser displacement meter is used. The sensing head is attached to grinding head by two automatic stages. The wheel rotating at 30m/s peripheral speed is scanned by these stages. The scanning resolution is 3μm for X, Y, and Z respectively. This system essentially does not have any limitation for the measurement area. The results of measurements are studied experimentally. As a result, the data obtained by this system is in good agreement with that by a laser microscope.) <|cite_end|> <|cite_start|> (Reference: Highly Energy-Efficient Resource Allocation in UAV Networks: The payload of drones in UAV networks is limited, making effective resource allocation essential for the networks. This paper explores UAV networks from the perspective of energy efficiency and formulates a multi-objective optimization problem to balance energy efficiency. We employ the Weighted Chebyshev method and Deep Deterministic Policy Gradients (DDPG) method to address this. Furthermore, we compare and evaluate these two methods, taking into account their anti-interference effects. Simulation results demonstrate the outstanding performance of the proposed algorithms.) <|cite_end|>.
Unlike existing works that rely heavily on extensive training with historical data, we put forth an SDP-based UAV misbehavior detection mechanism that requires no historical data. The proposed mechanism detects and identifies malicious UAVs that misreport their positions by leveraging the proximity of neighboring UAVs. It is applicable regardless of which type of localization signal is hijacked and spoofed, including GPS, TOA, or TDOA. <|paper_end|>
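The SDP formulation itself is not reproduced above, so the following is only a minimal NumPy sketch of the underlying proximity idea: a UAV that misreports its position inflates the range residuals on every link incident to it, so scoring each UAV by its mean link residual singles out the misbehaving node. The function name, noise level, and detection threshold are illustrative assumptions, not the authors' algorithm.

\begin{verbatim}
import numpy as np

def flag_misreporting_uav(reported_pos, measured_range, threshold=3.0):
    """Return the index of the UAV whose reported position is least
    consistent with the measured inter-UAV ranges, or None if the
    largest residual stays below the (illustrative) threshold."""
    # Ranges implied by the self-reported positions.
    diff = reported_pos[:, None, :] - reported_pos[None, :, :]
    implied_range = np.linalg.norm(diff, axis=-1)
    # Per-link inconsistency between reported and measured geometry.
    residual = np.abs(implied_range - measured_range)
    np.fill_diagonal(residual, 0.0)
    # A spoofed report inflates every link of the offending UAV.
    score = residual.mean(axis=1)
    suspect = int(np.argmax(score))
    return suspect if score[suspect] > threshold else None

# Toy scenario: UAV 2 reports a position 30 m away from its true one.
rng = np.random.default_rng(0)
true_pos = rng.uniform(0.0, 100.0, size=(5, 3))
ranges = np.linalg.norm(true_pos[:, None, :] - true_pos[None, :, :], axis=-1)
ranges += rng.normal(0.0, 0.5, size=ranges.shape)   # measurement noise
ranges = (ranges + ranges.T) / 2.0                   # keep symmetric
reported = true_pos.copy()
reported[2] += np.array([30.0, 0.0, 0.0])            # spoofed self-report
print(flag_misreporting_uav(reported, ranges))       # expected to flag UAV 2
\end{verbatim}

The sketch only conveys why position misreports are detectable from neighbor proximity; the mechanism proposed above instead formulates the detection as a semidefinite program.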
"<|reference_start|> Energy Efficient Legitimate Wireless Surveillance of UAV Communications: Unmanned aerial vehicles (UAVs) enhance connectivity and accessibility for civilian and military applications. Criminals or terrorists can potentially use UAVs for committing crimes and terrorism, thus endangering public safety. In this paper, we consider that a legitimate UAV is employed to track flight of suspicious UAVs for preventing safety and security threats. To obtain flight information of the suspicious UAVs, the legitimate UAV intentionally jams the suspicious receiver so as to force the suspicious UAV to reduce its data rate, and hence increase the eavesdropping success. An energy-efficient jamming strategy is proposed for the legitimate UAV to maximize the amount of eavesdropped packets. Moreover, a tracking algorithm is developed for the legitimate UAV to track the suspicious flight by comprehensively utilizing eavesdropped packets, angle-of-arrival and received signal strength of the suspicious transmitter's signal. A new simulation framework is implemented to combine the complementary features of optimization toolbox with channel modeling (in MATLAB) and discrete event-driven mobility tracking (in NS3). Moreover, numerical results validate the proposed algorithms in terms of packet eavesdropping rate and tracking accuracy of the suspicious UAVs’ trajectory. <|reference_end|>",
"<|reference_start|> Poster: May the Swarm Be With You: Sensor Spoofing Attacks Against Drone Swarms: Swarm robotics, particularly drone swarms, are used in various safety-critical tasks. While a lot of attention has been paid to improving swarm control algorithms for improved intelligence, the security implications of various design choices in swarm control algorithms have not been studied. We highlight how an attacker can exploit the vulnerabilities in swarm control algorithms to disrupt drone swarms. Specifically, we show that the attacker can target one swarm member (target drone) through sensor spoofing attacks, and indirectly cause other swarm members (victim drones) to veer off from their course, and potentially resulting in a crash. Our attack cannot be prevented by traditional software security techniques, and it is stealthy in nature as it causes seemingly benign deviations in drone swarms. Our initial results show that spoofing the position of a target drone by 5m is sufficient to cause other drones to crash into a front obstacle. Overall, our attack achieves 76.67% and 93.33% success rate with 5m and 10m spoofing deviation respectively. <|reference_end|>",
"<|reference_start|> Detecting Attacks Against Robotic Vehicles: A Control Invariant Approach: Robotic vehicles (RVs), such as drones and ground rovers, are a type of cyber-physical systems that operate in the physical world under the control of computing components in the cyber world. Despite RVs' robustness against natural disturbances, cyber or physical attacks against RVs may lead to physical malfunction and subsequently disruption or failure of the vehicles' missions. To avoid or mitigate such consequences, it is essential to develop attack detection techniques for RVs. In this paper, we present a novel attack detection framework to identify external, physical attacks against RVs on the fly by deriving and monitoring Control Invariants (CI). More specifically, we propose a method to extract such invariants by jointly modeling a vehicle's physical properties, its control algorithm and the laws of physics. These invariants are represented in a state-space form, which can then be implemented and inserted into the vehicle's control program binary for runtime invariant check. We apply our CI framework to eleven RVs, including quadrotor, hexarotor, and ground rover, and show that the invariant check can detect three common types of physical attacks -- including sensor attack, actuation signal attack, and parameter attack -- with very low runtime overhead. <|reference_end|>",
"<|reference_start|> Robust Localization for Secure Navigation of UAV Formations Under GNSS Spoofing Attack: Nowadays, aerial formations are frequently employed in outdoor scenarios to cooperatively explore and monitor wide areas of interest. In these applications, the vehicles are often exposed to relevant security vulnerabilities, as, for instance, the alteration of navigation signals from an attacker with map counterfeiting (if not even hijacking) purposes. In this work, we focus on an Unmanned Aerial Vehicle (UAV) formation that monitors an area, wherein navigation spoofing attacks may occur. Letting the UAVs cooperate and exploiting the redundancy in the available sensing information, a distributed procedure is proposed to $i$ ) detect spoofing attacks, and $ii$ ) support the navigation in adverse conditions. The validity of the designed approach is confirmed by numerical results. Aerial vehicles for outdoor operation are generally endowed with inertial measurements, relative ranging, and GNSS sensing capability. In this work, two cascaded estimation algorithms for concurrent GNSS spoofing detection and localization in a multi-UAV scenario is proposed, to attain robust navigation in areas subject to GNSS spoofing attacks. The attack detection leverages on information theoretic tools to provide a practical threshold test by checking the multimodal measurement consistency. The localization procedures exploit a decision logic relying on measurement reliability to combine information sources that are different in nature, for UAV self-localization in both safe and under-attack conditions. Note to Practitioners—Aerial vehicles for outdoor operation are generally endowed with inertial measurements, relative ranging, and GNSS sensing capability. In this work, two cascaded estimation algorithms for concurrent GNSS spoofing detection and localization in a multi-UAV scenario is proposed, to attain robust navigation in areas subject to GNSS spoofing attacks. The attack detection leverages on information theoretic tools to provide a practical threshold test by checking the multimodal measurement consistency. The localization procedures exploit a decision logic relying on measurement reliability to combine information sources that are different in nature, for UAV self-localization in both safe and under-attack conditions. <|reference_end|>"
] | [
4,
10,
11,
20
] | {"<|cite_1|>": "ss-764523", "<|cite_2|>": "ss-2250575", "<|multi_cite_3_1|>": "ss-2249689", "<|multi_cite_3_2|>": "ss-2250576", "<|cite_4|>": "ss-2018355", "<|cite_5|>": "ss-2250577", "<|cite_6|>": "ss-764520", "<|cite_7|>": "ss-1324346", "<|cite_8|>": "ss-2250578", "<|cite_9|>": "ss-2250579", "<|cite_10|>": "ss-2250580", "<|cite_11|>": "ss-1257078", "<|multi_cite_12_1|>": "ss-2250580", "<|multi_cite_12_2|>": "ss-2250581", "<|multi_cite_12_3|>": "ss-979548", "<|multi_cite_12_4|>": "ss-1257078", "<|cite_13|>": "ss-2250583", "<|cite_14|>": "ss-1029237", "<|multi_cite_16_4|>": "ss-2250581", "<|cite_17|>": "ss-679624", "<|cite_18|>": "ss-2250584", "<|cite_19|>": "ss-2250585", "<|cite_20|>": "ss-2250586", "<|cite_21|>": "arxiv-454251", "<|cite_22|>": "ss-2250587", "<|multi_cite_23_1|>": "ss-979548", "<|multi_cite_23_2|>": "ss-1516876", "<|multi_cite_23_3|>": "ss-1247191"} |
2206.06072 | <|paper_start|> Title: Rank Diminishing in Deep Neural Networks
Abstract: Rank Diminishing in Deep Neural Networks: The rank of neural networks measures information flowing across layers. It is an instance of a key structural condition that applies across broad domains of machine learning. In particular, the assumption of low-rank feature representations leads to algorithmic developments in many architectures. For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear. To fill this gap, we perform a rigorous study on the behavior of network rank, focusing particularly on the notion of rank deficiency. We theoretically establish a universal monotonic decreasing property of network rank from the basic rules of differential and algebraic composition, and uncover rank deficiency of network blocks and deep function coupling. By virtue of our numerical tools, we provide the first empirical analysis of the per-layer behavior of network rank in practical settings, i.e., ResNets, deep MLPs, and Transformers on ImageNet. These empirical results are in direct accord with our theory. Furthermore, we reveal a novel phenomenon of independence deficit caused by the rank deficiency of deep networks, where classification confidence of a given category can be linearly decided by the confidence of a handful of other categories. The theoretical results of this work, together with the empirical findings, may advance understanding of the inherent principles of deep neural networks.
Introduction
\label{sec:intro}
In mathematics, the rank of a smooth function measures the volume of independent information captured by the function <|cite_start|> (Reference: DIFFERENTIAL TOPOLOGY: ) <|cite_end|>. Deep neural networks are highly smooth functions, thus the rank of a network has long been an essential concept in machine learning that underlies many tasks such as information compression <|cite_start|> (Reference: PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning: Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors applied on model differences. Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit. We prove that our method requires no additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.) <|cite_end|> <|cite_start|> (Reference: From Rank Estimation to Rank Approximation: Rank Residual Constraint for Image Restoration: In this paper, we propose a novel approach to the rank minimization problem, termed rank residual constraint (RRC) model. Different from existing low-rank based approaches, such as the well-known nuclear norm minimization (NNM) and the weighted nuclear norm minimization (WNNM), which estimate the underlying low-rank matrix directly from the corrupted observations, we progressively approximate the underlying low-rank matrix via minimizing the rank residual. Through integrating the image nonlocal self-similarity (NSS) prior with the proposed RRC model, we apply it to image restoration tasks, including image denoising and image compression artifacts reduction. Towards this end, we first obtain a good reference of the original image groups by using the image NSS prior, and then the rank residual of the image groups between this reference and the degraded image is minimized to achieve a better estimate to the desired image. In this manner, both the reference and the estimated image are updated gradually and jointly in each iteration. Based on the group-based sparse representation model, we further provide a theoretical analysis on the feasibility of the proposed RRC model. Experimental results demonstrate that the proposed RRC model outperforms many state-of-the-art schemes in both the objective and perceptual quality.) <|cite_end|> <|cite_start|> (Reference: Graph-based non-convex low-rank regularization for image compression artifact reduction: Block transform coded images usually suffer from annoying artifacts at low bit-rates, because of the independent quantization of DCT coefficients. Image prior models play an important role in compressed image reconstruction. Natural image patches in a small neighborhood of the high-dimensional image space usually exhibit an underlying sub-manifold structure. To model the distribution of signal, we extract sub-manifold structure as prior knowledge. 
We utilize graph Laplacian regularization to characterize the sub-manifold structure at patch level. And similar patches are exploited as samples to estimate distribution of a particular patch. Instead of using Euclidean distance as similarity metric, we propose to use graph-domain distance to measure the patch similarity. Then we perform low-rank regularization on the similar-patch group, and incorporate a non-convex $l_{p}$ penalty to surrogate matrix rank. Finally, an alternatively minimizing strategy is employed to solve the non-convex problem. Experimental results show that our proposed method is capable of achieving more accurate reconstruction than the state-of-the-art methods in both objective and perceptual qualities.) <|cite_end|> <|cite_start|> (Reference: Generalized Low Rank Approximations of Matrices: ) <|cite_end|> <|cite_start|> (Reference: Rank-R approximation of tensors using image-as-matrix representation: We present a novel multilinear algebra based approach for reduced dimensionality representation of image ensembles. We treat an image as a matrix, instead of a vector as in traditional dimensionality reduction techniques like PCA, and higher-dimensional data as a tensor. This helps exploit spatio-temporal redundancies with less information loss than image-as-vector methods. The challenges lie in the computational and memory requirements for large ensembles. Currently, there exists a rank-R approximation algorithm which, although applicable to any number of dimensions, is efficient for only low-rank approximations. For larger dimensionality reductions, the memory and time costs of this algorithm become prohibitive. We propose a novel algorithm, for rank-R approximations of third-order tensors, which is efficient for arbitrary R but for the important special case of 2D image ensembles, e.g. video. Both of these algorithms reduce redundancies present in all dimensions. Rank-R tensor approximation yields the most compact data representation among all known image-as-matrix methods. We evaluated the performance of our algorithm vs. other approaches on a number of datasets with the following two main results. First, for a fixed compression ratio, the proposed algorithm yields the best representation of image ensembles visually as well as in the least squares sense. Second, proposed representation gives the best performance for object classification.) <|cite_end|>, network pruning <|cite_start|> (Reference: HRank: Filter Pruning using High-Rank Feature Map: Neural network pruning offers a promising prospect to facilitate deploying deep neural networks on resource-limited devices. However, existing methods are still challenged by the training inefficiency and labor cost in pruning designs, due to missing theoretical guidance of non-salient network components. In this paper, we propose a novel filter pruning method by exploring the High Rank of feature maps (HRank). Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced. 
Besides, we experimentally show that weights with high-rank feature maps contain more important information, such that even when a portion is not updated, very little damage would be done to the model performance. Without introducing any additional constraints, HRank leads to significant improvements over the state-of-the-arts in terms of FLOPs and parameters reduction, with similar accuracies. For example, with ResNet-110, we achieve a 58.2%-FLOPs reduction by removing 59.2% of the parameters, with only a small loss of 0.14% in top-1 accuracy on CIFAR-10. With Res-50, we achieve a 43.8%-FLOPs reduction by removing 36.7% of the parameters, with only a loss of 1.17% in the top-1 accuracy on ImageNet. The codes can be available at https://github.com/lmbxmu/HRank.) <|cite_end|> <|cite_start|> (Reference: On compressing deep models by low rank and sparse decomposition: Deep compression refers to removing the redundancy of parameters and feature maps for deep learning models. Low-rank approximation and pruning for sparse structures play a vital role in many compression works. However, weight filters tend to be both low-rank and sparse. Neglecting either part of these structure information in previous methods results in iteratively retraining, compromising accuracy, and low compression rates. Here we propose a unified framework integrating the low-rank and sparse decomposition of weight matrices with the feature map reconstructions. Our model includes methods like pruning connections as special cases, and is optimized by a fast SVD-free algorithm. It has been theoretically proven that, with a small sample, due to its generalizability, our model can well reconstruct the feature maps on both training and test data, which results in less compromising accuracy prior to the subsequent retraining. With such a warm start to retrain, the compression method always possesses several merits: (a) higher compression rates, (b) little loss of accuracy, and (c) fewer rounds to compress deep models. The experimental results on several popular models such as AlexNet, VGG-16, and GoogLeNet show that our model can significantly reduce the parameters for both convolutional and fully-connected layers. As a result, our model reduces the size of VGG-16 by 15×, better than other recent compression methods that use a single strategy.) <|cite_end|> <|cite_start|> (Reference: Is Pruning Compression?: Investigating Pruning Via Network Layer Similarity: Unstructured neural network pruning is an effective technique that can significantly reduce theoretical model size, computation demand and energy consumption of large neural networks without compromising accuracy. However, a number of fundamental questions about pruning are not answered yet. For example, do the pruned neural networks contain the same representations as the original network? Is pruning a compression or evolution process? Does pruning only work on trained neural networks? What is the role and value of the uncovered sparsity structure? In this paper, we strive to answer these questions by analyzing three unstructured pruning methods (magnitude based pruning, post-pruning re-initialization, and random sparse initialization). We conduct extensive experiments using the Singular Vector Canonical Correlation Analysis (SVCCA) tool to study and contrast layer representations of pruned and original ResNet, VGG, and ConvNet models. 
We have several interesting observations: 1) Pruned neural network models evolve to substantially different representations while still maintaining similar accuracy. 2) Initialized sparse models can achieve reasonably good accuracy compared to well-engineered pruning methods. 3) Sparsity structures discovered by pruning models are not inherently important or useful.) <|cite_end|> <|cite_start|> (Reference: Language model compression with weighted low-rank factorization: Factorizing a large matrix into small matrices is a popular strategy for model compression. Singular value decomposition (SVD) plays a vital role in this compression strategy, approximating a learned matrix with fewer parameters. However, SVD minimizes the squared error toward reconstructing the original matrix without gauging the importance of the parameters, potentially giving a larger reconstruction error for those who affect the task accuracy more. In other words, the optimization objective of SVD is not aligned with the trained model's task accuracy. We analyze this previously unexplored problem, make observations, and address it by introducing Fisher information to weigh the importance of parameters affecting the model prediction. This idea leads to our method: Fisher-Weighted SVD (FWSVD). Although the factorized matrices from our approach do not result in smaller reconstruction errors, we find that our resulting task accuracy is much closer to the original model's performance. We perform analysis with the transformer-based language models, showing our weighted SVD largely alleviates the mismatched optimization objectives and can maintain model performance with a higher compression rate. Our method can directly compress a task-specific model while achieving better performance than other compact model strategies requiring expensive model pre-training. Moreover, the evaluation of compressing an already compact model shows our method can further reduce 9% to 30% parameters with an insignificant impact on task accuracy.) <|cite_end|> <|cite_start|> (Reference: DRONE: Data-aware Low-rank Compression for Large NLP Models: The representations learned by large-scale NLP models such as BERT have been widely used in various tasks. However, the increasing model size of the pre-trained models also brings efficiency challenges, including inference speed and model size when deploying models on mobile devices. Specifically, most operations in BERT consist of matrix multiplications. These matrices are not low-rank and thus canonical matrix decompositions do not lead to efficient approximations. In this paper, we observe that the learned representation of each layer lies in a low-dimensional space. Based on this observation, we propose DRONE ( d ata-awa r e l o w-ra n k compr e ssion), a provably optimal low-rank decomposition of weight matrices, which has a simple closed form solution that can be efficiently computed. DRONE can be applied to both fully-connected and self-attention layers appearing in the BERT model. In addition to compressing standard models, our method can also be used on distilled BERT models to further improve the compression rate. Experimental results show that DRONE is able to improve both model size and inference speed with limited loss in accuracy. Specifically, DRONE alone achieves 1.92x speedup on the MRPC task with only 1.5 % loss in accuracy, and when DRONE is combined with distillation, it further achieves over 12.3x speedup on various natural language inference tasks.) 
<|cite_end|>, data mining <|cite_start|> (Reference: On the equivalent of low-rank linear regressions and linear discriminant analysis based regressions: The low-rank regression model has been studied and applied to capture the underlying classes/tasks correlation patterns, such that the regression/classification results can be enhanced. In this paper, we will prove that the low-rank regression model is equivalent to doing linear regression in the linear discriminant analysis (LDA) subspace. Our new theory reveals the learning mechanism of low-rank regression, and shows that the low-rank structures exacted from classes/tasks are connected to the LDA projection results. Thus, the low-rank regression efficiently works for the high-dimensional data. Moreover, we will propose new discriminant low-rank ridge regression and sparse low-rank regression methods. Both of them are equivalent to doing regularized regression in the regularized LDA subspace. These new regularized objectives provide better data mining results than existing low-rank regression in both theoretical and empirical validations. We evaluate our discriminant low-rank regression methods by six benchmark datasets. In all empirical results, our discriminant low-rank models consistently show better results than the corresponding full-rank methods.) <|cite_end|> <|cite_start|> (Reference: Low Rank Modeling of Signed Networks: Trust networks, where people leave trust and distrust feedback, are becoming increasingly common. These networks may be regarded as signed graphs, where a positive edge weight captures the degree of trust while a negative edge weight captures the degree of distrust. Analysis of such signed networks has become an increasingly important research topic. One important analysis task is that of sign inference, i.e., infer unknown (or future) trust or distrust relationships given a partially observed signed network. Most state-of-the-art approaches consider the notion of structural balance in signed networks, building inference algorithms based on information about links, triads, and cycles in the network. In this paper, we first show that the notion of weak structural balance in signed networks naturally leads to a global low-rank model for the network. Under such a model, the sign inference problem can be formulated as a low-rank matrix completion problem. We show that we can perfectly recover missing relationships, under certain conditions, using state-of-the-art matrix completion algorithms. We also propose the use of a low-rank matrix factorization approach with generalized loss functions as a practical method for sign inference - this approach yields high accuracy while being scalable to large signed networks, for instance, we show that this analysis can be performed on a synthetic graph with 1.1 million nodes and 120 million edges in 10 minutes. We further show that the low-rank model can be used for other analysis tasks on signed networks, such as user segmentation through signed graph clustering, with theoretical guarantees. Experiments on synthetic as well as real data show that our low rank model substantially improves accuracy of sign inference as well as clustering. 
As an example, on the largest real dataset available to us (Epinions data with 130K nodes and 840K edges), our matrix factorization approach yields 94.6% accuracy on the sign inference task as compared to 90.8% accuracy using a state-of-the-art cycle-based method - moreover, our method runs in 40 seconds as compared to 10,000 seconds for the cycle-based method.) <|cite_end|> <|cite_start|> (Reference: LorSLIM: Low rank sparse linear methods for top-n recommendations: In this paper, we notice that sparse and low-rank structures arise in the context of many collaborative filtering applications where the underlying graphs have block-diagonal adjacency matrices. Therefore, we propose a novel Sparse and Low-Rank Linear Method (Lor SLIM) to capture such structures and apply this model to improve the accuracy of the Top-N recommendation. Precisely, a sparse and low-rank aggregation coefficient matrix W is learned from Lor SLIM by solving an l1-norm and nuclear norm regularized optimization problem. We also develop an efficient alternating augmented Lagrangian method (ADMM) to solve the optimization problem. A comprehensive set of experiments is conducted to evaluate the performance of Lor SLIM. The experimental results demonstrate the superior recommendation quality of the proposed algorithm in comparison with current state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Low-rank sparse feature selection for patient similarity learning: Comparing and identifying similar patients is a fundamental task in medical domains - an efficient technique can, for example, help doctors to track patient cohorts, compare the effectiveness of treatments, or predict medical outcomes. The goal of patient similarity learning is to derive a clinically meaningful measure to evaluate the similarity amongst patients represented by their key clinical indicators. However, it is challenging to learn such similarity, as medical data are usually high dimensional, heterogeneous, and complex. In addition, a desirable patient similarity is dependent on particular clinical settings, which implies supervised learning scheme is more useful in medical domains. To address these, in this paper we present a novel similarity learning approach formulated as the generalized Mahalanobis similarity function with pairwise constraints. Considering there always exists some features non-discriminative and contains redundant information, we encode a low-rank structure to our similarity function to perform feature selection. We evaluate the proposed model on both UCI benchmarks and a real clinical dataset for several medical tasks, including patient retrieval, classification, and cohort discovery. The results show that our similarity model significantly outperforms many state-of-the-art baselines, and is effective at removing noisy or redundant features.) <|cite_end|> <|cite_start|> (Reference: Robust low-rank tensor recovery: Models and algorithms: Robust tensor recovery plays an instrumental role in robustifying tensor decompositions for multilinear data analysis against outliers, gross corruptions, and missing values and has a diverse array of applications. In this paper, we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust principal component analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. 
These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number o...) <|cite_end|> <|cite_start|> (Reference: Deep Low-Rank Subspace Clustering: This paper is concerned with developing a novel approach to tackle the problem of subspace clustering. The approach introduces a convolutional autoencoder-based architecture to generate low-rank representations (LRR) of input data which are proven to be very suitable for subspace clustering. We propose to insert a fully-connected linear layer and its transpose between the encoder and decoder to implicitly impose a rank constraint on the learned representations. We train this architecture by minimizing a standard deep subspace clustering loss function and then recover underlying subspaces by applying a variant of spectral clustering technique. Extensive experiments on benchmark datasets demonstrate that the proposed model can not only achieve very competitive clustering results using a relatively small network architecture but also can maintain its high level of performance across a wide range of LRRs. This implies that the model can be appropriately combined with the state-of-the-art subspace clustering architectures to produce more accurate results.) <|cite_end|>, computer vision <|cite_start|> (Reference: Learning structured low-rank representations for image classification: An approach to learn a structured low-rank representation for image classification is presented. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is good for classification tasks even using a simple linear multi-classifier. Experimental results demonstrate the effectiveness of our approach.) <|cite_end|> <|cite_start|> (Reference: Low-Rank sparse coding for image classification: In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. 
Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.) <|cite_end|> <|cite_start|> (Reference: Low-rank Bilinear Pooling for Fine-Grained Classification: Pooling second-order local feature statistics to form a high-dimensional bilinear feature has been shown to achieve state-of-the-art performance on a variety of fine-grained classification tasks. To address the computational demands of high feature dimensionality, we propose to represent the covariance features as a matrix and apply a low-rank bilinear classifier. The resulting classifier can be evaluated without explicitly computing the bilinear feature map which allows for a large reduction in the compute time as well as decreasing the effective number of parameters to be learned. To further compress the model, we propose classifier co-decomposition that factorizes the collection of bilinear classifiers into a common factor and compact per-class terms. The co-decomposition idea can be deployed through two convolutional layers and trained in an end-to-end architecture. We suggest a simple yet effective initialization that avoids explicitly first training and factorizing the larger bilinear classifiers. Through extensive experiments, we show that our model achieves state-of-the-art performance on several public datasets for fine-grained classification trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model, and three orders smaller than the standard bilinear CNN model.) <|cite_end|> <|cite_start|> (Reference: Semi-Supervised Low-Rank Mapping Learning for Multi-Label Classification: Multi-label problems arise in various domains including automatic multimedia data categorization, and have generated significant interest in computer vision and machine learning community. However, existing methods do not adequately address two key challenges: exploiting correlations between labels and making up for the lack of labeled data or even missing labels. In this paper, we proposed a semi-supervised low-rank mapping (SLRM) model to handle these two challenges. SLRM model takes advantage of the nuclear norm regularization on mapping to effectively capture the label correlations. Meanwhile, it introduces manifold regularizer on mapping to capture the intrinsic structure among data, which provides a good way to reduce the required labeled data with improving the classification performance. Furthermore, we designed an efficient algorithm to solve SLRM model based on alternating direction method of multipliers and thus it can efficiently deal with large-scale datasets. Experiments on four real-world multimedia datasets demonstrate that the proposed method can exploit the label correlations and obtain promising and better label prediction results than state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Deep Low-Rank Subspace Clustering: This paper is concerned with developing a novel approach to tackle the problem of subspace clustering. The approach introduces a convolutional autoencoder-based architecture to generate low-rank representations (LRR) of input data which are proven to be very suitable for subspace clustering. 
We propose to insert a fully-connected linear layer and its transpose between the encoder and decoder to implicitly impose a rank constraint on the learned representations. We train this architecture by minimizing a standard deep subspace clustering loss function and then recover underlying subspaces by applying a variant of spectral clustering technique. Extensive experiments on benchmark datasets demonstrate that the proposed model can not only achieve very competitive clustering results using a relatively small network architecture but also can maintain its high level of performance across a wide range of LRRs. This implies that the model can be appropriately combined with the state-of-the-art subspace clustering architectures to produce more accurate results.) <|cite_end|> <|cite_start|> (Reference: Sparse Principal Component Analysis: Principal component analysis (PCA) is widely used in data processing and dimensionality reduction. However, PCA suffers from the fact that each principal component is a linear combination of all the original variables, thus it is often difficult to interpret the results. We introduce a new method called sparse principal component analysis (SPCA) using the lasso (elastic net) to produce modified principal components with sparse loadings. We first show that PCA can be formulated as a regression-type optimization problem; sparse loadings are then obtained by imposing the lasso (elastic net) constraint on the regression coefficients. Efficient algorithms are proposed to fit our SPCA models for both regular multivariate data and gene expression arrays. We also give a new formula to compute the total variance of modified principal components. As illustrations, SPCA is applied to real and simulated data with encouraging results.) <|cite_end|>, and natural language processing <|cite_start|> (Reference: GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking: Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses. As a case study, a state-of-the-art neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves state-of- the-art performance on the One-Billion-Word (OBW) dataset with around 800k vocabulary, and its word embedding and softmax matrices use more than 6GBytes space, and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, our method can achieve 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity.) 
<|cite_end|> <|cite_start|> (Reference: Compacter: Efficient Low-Rank Hypercomplex Adapter Layers: Adapting large-scale pretrained language models to downstream tasks via fine-tuning is the standard method for achieving state-of-the-art performance on NLP benchmarks. However, fine-tuning all weights of models with millions or billions of parameters is sample-inefficient, unstable in low-resource settings, and wasteful as it requires storing a separate copy of the model for each task. Recent work has developed parameter-efficient fine-tuning methods, but these approaches either still require a relatively large number of parameters or underperform standard fine-tuning. In this work, we propose Compacter, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work. Compacter accomplishes this by building on top of ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers. Specifically, Compacter inserts task-specific weight matrices into a pretrained model's weights, which are computed efficiently as a sum of Kronecker products between shared "slow" weights and "fast" rank-one matrices defined per Compacter layer. By only training 0.047% of a pretrained model's parameters, Compacter performs on par with standard fine-tuning on GLUE and outperforms standard fine-tuning on SuperGLUE and low-resource settings. Our code is publicly available at~\url{https://github.com/rabeehk/compacter}.) <|cite_end|> <|cite_start|> (Reference: Scatterbrain: Unifying Sparse and Low-rank Attention Approximation: Recent advances in efficient Transformers have exploited either the sparsity or low-rank properties of attention matrices to reduce the computational and memory bottlenecks of modeling long sequences. However, it is still challenging to balance the trade-off between model quality and efficiency to perform a one-size-fits-all approximation for different tasks. To better understand this trade-off, we observe that sparse and low-rank approximations excel in different regimes, determined by the softmax temperature in attention, and sparse + low-rank can outperform each individually. Inspired by the classical robust-PCA algorithm for sparse and low-rank decomposition, we propose Scatterbrain, a novel way to unify sparse (via locality sensitive hashing) and low-rank (via kernel feature map) attention for accurate and efficient approximation. The estimation is unbiased with provably low error. We empirically show that Scatterbrain can achieve 2.1x lower error than baselines when serving as a drop-in replacement in BigGAN image generation and pre-trained T2T-ViT. On a pre-trained T2T Vision transformer, even without fine-tuning, Scatterbrain can reduce 98% of attention memory at the cost of only 1% drop in accuracy. We demonstrate Scatterbrain for end-to-end training with up to 4 points better perplexity and 5 points better average accuracy than sparse or low-rank efficient transformers on language modeling and long-range-arena tasks.) <|cite_end|> <|cite_start|> (Reference: Low-Rank Constraints for Fast Inference in Structured Models: Structured distributions, i.e. distributions over combinatorial spaces, are commonly used to learn latent probabilistic representations from observed data. However, scaling these models is bottlenecked by the high computational and memory complexity with respect to the size of the latent representations. 
Common models such as Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) require time and space quadratic and cubic in the number of hidden states respectively. This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models. We show that by viewing the central inference step as a matrix-vector product and using a low-rank constraint, we can trade off model expressivity and speed via the rank. Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces while providing practical speedups.) <|cite_end|>. Numerous methods are either designed to utilize the mathematical property of network ranks, or are derived from an assumption that low-rank structures are to be preferred.
Yet a rigorous investigation into the behavior of the rank of general networks, combining both theoretical and empirical arguments, is still absent from current research, weakening our confidence in being able to predict their performance. To the best of our knowledge, only a few previous works discuss the rank behavior of specific network architectures, such as attention blocks <|cite_start|> (Reference: Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth: Attention-based architectures have become ubiquitous in machine learning, yet our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms, each involving the operation of a sequence of attention heads across layers. Using this decomposition, we prove that self-attention possesses a strong inductive bias towards "token uniformity". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. On the other hand, skip connections and MLPs stop the output from degeneration. Our experiments verify the identified convergence phenomena on different variants of standard transformer architectures.) <|cite_end|> and BatchNorms <|cite_start|> (Reference: Batch normalization provably avoids ranks collapse for randomly initialised deep networks: Randomly initialized neural networks are known to become harder to train with increasing depth, unless architectural enhancements like residual connections and batch normalization are used. We here investigate this phenomenon by revisiting the connection between random initialization in deep networks and spectral instabilities in products of random matrices. Given the rich literature on random matrices, it is not surprising to find that the rank of the intermediate representations in unnormalized networks collapses quickly with depth. In this work we highlight the fact that batch normalization is an effective strategy to avoid rank collapse for both linear and ReLU networks. Leveraging tools from Markov chain theory, we derive a meaningful lower rank bound in deep linear networks. Empirically, we also demonstrate that this rank robustness generalizes to ReLU nets. Finally, we conduct an extensive set of experiments on real-world data sets, which confirm that rank stability is indeed a crucial condition for training modern-day deep neural architectures.) <|cite_end|> <|cite_start|> (Reference: Understanding Batch Normalization: Batch normalization (BN) is a technique to normalize activations in intermediate layers of deep neural networks. Its tendency to improve accuracy and speed up training have established BN as a favorite technique in deep learning. Yet, despite its enormous success, there remains little consensus on the exact reason and mechanism behind these improvements. In this paper we take a step towards a better understanding of BN, following an empirical approach. We conduct several experiments, and show that BN primarily enables training with larger learning rates, which is the cause for faster convergence and better generalization. For networks without BN we demonstrate how large gradient updates can result in diverging loss and activations growing uncontrollably with network depth, which limits possible learning rates.
BN avoids this problem by constantly correcting activations to be zero-mean and of unit standard deviation, which enables larger gradient steps, yields faster convergence and may help bypass sharp local minima. We further show various ways in which gradients and activations of deep unnormalized networks are ill-behaved. We contrast our results against recent findings in random matrix theory, shedding new light on classical initialization schemes and their consequences.) <|cite_end|> in pure MLP structures. The empirical validation in those works is also limited to shallow networks, specific architectures, or merely the final layers of deep networks, leaving the global behavior of general deep neural networks mysterious due to the prohibitive space-time complexity of measuring it.
Rigorous work on network rank that combines both strong theoretical and empirical evidence would have significant implications.
In this paper, we make several contributions towards this challenging goal. We find that the two essential ingredients of deep learning, the chain rules of differential operators and matrix multiplications, are enough to establish a universal principle---that network rank decreases monotonically with the depth of networks. Two factors further accelerate this decrease: a) the explicit rank deficiency of many frequently used network modules, and b) an intrinsic potential for spectrum centralization enforced by the coupling of massive composite functions. To empirically validate our theory, we design numerical tools to efficiently and economically examine the rank behavior of deep neural networks. This is a non-trivial task, as rank is very sensitive to noise and perturbation, and computing the ranks of large networks is prohibitive in time and space. Finally, we uncover an interesting phenomenon of independence deficit in multi-class classification networks: many classes do not have their own unique representations in the classification network, and some highly irrelevant classes can determine the outputs of others. This independence deficit can significantly deteriorate the performance of networks in generalized data domains where each class demands a unique representation. In conclusion, the results of this work, together with the numerical tools we introduce, may advance understanding of the intrinsic properties of deep neural networks and provide foundations for a broad study of low-dimensional structures in machine learning.
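The paper's own numerical tools are not reproduced here; the snippet below is only a small PyTorch sketch of the kind of per-layer measurement involved, in which a batch is pushed through a randomly initialized MLP and the numerical rank of the feature matrix is read off from its singular values after every block, with a relative tolerance absorbing floating-point noise. The widths, depth, batch size, and tolerance are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

def numerical_rank(features: torch.Tensor, rtol: float = 1e-5) -> int:
    """Count singular values above rtol times the largest one, so that
    tiny values produced by floating-point noise are not counted."""
    s = torch.linalg.svdvals(features)
    return int((s > rtol * s.max()).sum())

torch.manual_seed(0)
widths = [512, 512, 512, 512, 512]        # illustrative layer widths
layers = []
for d_in, d_out in zip(widths[:-1], widths[1:]):
    layers += [nn.Linear(d_in, d_out), nn.ReLU()]
mlp = nn.Sequential(*layers)

x = torch.randn(1024, widths[0])          # batch larger than width
with torch.no_grad():
    h = x
    print("input rank:", numerical_rank(h))
    for i, layer in enumerate(mlp):
        h = layer(h)
        if isinstance(layer, nn.ReLU):    # report once per Linear+ReLU block
            print(f"block {i // 2 + 1} rank:", numerical_rank(h))
\end{verbatim}

Comparing such per-layer ranks across depth, and on trained networks rather than random ones, is the kind of evidence the per-layer empirical analysis described above relies on.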
Related Work
Previous studies of rank deficiency in deep neural networks follow two parallel threads. One is the study of rank behavior in specific neural network architectures. <|cite_start|> (Reference: Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth: Attention-based architectures have become ubiquitous in machine learning, yet our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms, each involving the operation of a sequence of attention heads across layers. Using this decomposition, we prove that self-attention possesses a strong inductive bias towards "token uniformity". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. On the other hand, skip connections and MLPs stop the output from degeneration. Our experiments verify the identified convergence phenomena on different variants of standard transformer architectures.) <|cite_end|> studies deep networks composed of pure self-attention layers, and proves that they converge doubly exponentially to a rank-1 matrix under the assumption of globally bounded weight matrices. <|cite_start|> (Reference: Batch normalization provably avoids ranks collapse for randomly initialised deep networks: Randomly initialized neural networks are known to become harder to train with increasing depth, unless architectural enhancements like residual connections and batch normalization are used. We here investigate this phenomenon by revisiting the connection between random initialization in deep networks and spectral instabilities in products of random matrices. Given the rich literature on random matrices, it is not surprising to find that the rank of the intermediate representations in unnormalized networks collapses quickly with depth. In this work we highlight the fact that batch normalization is an effective strategy to avoid rank collapse for both linear and ReLU networks. Leveraging tools from Markov chain theory, we derive a meaningful lower rank bound in deep linear networks. Empirically, we also demonstrate that this rank robustness generalizes to ReLU nets. Finally, we conduct an extensive set of experiments on real-world data sets, which confirm that rank stability is indeed a crucial condition for training modern-day deep neural architectures.) <|cite_end|> studies the effect of BatchNorm on MLPs and shows that BatchNorm can prevent drastic diminishing of network ranks in some small networks and datasets. Both of those works avoid directly validating the behavior of network ranks in intermediate layers due to the lack of efficient numerical tools. An independent thread is the study of implicit self-regularization, which finds that weight matrices tend to lose rank after training. <|cite_start|> (Reference: Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning: Random Matrix Theory (RMT) is applied to analyze weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of Self-Regularization.
The empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of explicit regularization. Building on relatively recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. These phases can be observed during the training process as well as in the final learned DNNs. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a "size scale" separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems. This results from correlations arising at all size scales, which arises implicitly due to the training process itself. This implicit Self-Regularization can depend strongly on the many knobs of the training process. By exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size. This demonstrates that---all else being equal---DNN optimization with larger batch sizes leads to less-well implicitly-regularized models, and it provides an explanation for the generalization gap phenomena.) <|cite_end|> studies this phenomenon in infinitely-wide, over-parametric neural networks with tools from random matrix theory. <|cite_start|> (Reference: Implicit Regularization in Deep Matrix Factorization: Efforts to understand the generalization mystery in deep learning have led to the belief that gradient-based optimization induces a form of implicit regularization, a bias towards models of low "complexity." We study the implicit regularization of gradient descent over deep linear neural networks for matrix completion and sensing, a model referred to as deep matrix factorization. Our first finding, supported by theory and experiments, is that adding depth to a matrix factorization enhances an implicit tendency towards low-rank solutions, oftentimes leading to more accurate recovery. Secondly, we present theoretical and empirical arguments questioning a nascent view by which implicit regularization in matrix factorization can be captured using simple mathematical norms. Our results point to the possibility that the language of standard regularizers may not be rich enough to fully encompass the implicit regularization brought forth by gradient-based optimization.) <|cite_end|> studies this phenomenon in deep matrix decomposition. Those works focus on the theoretical behavior of rank of weight matrices induced by the training instead of network ranks. <|paper_end|> | [
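As a toy illustration of the rank-collapse result attributed to pure self-attention above (and not the cited paper's construction, which analyses full attention heads with value and projection matrices along path decompositions), the sketch below repeatedly mixes token features with fresh row-stochastic attention matrices and tracks the effective rank; with softmax attention and no skip connections or MLPs, the effective rank is expected to shrink toward one as depth grows. The sizes and tolerance are illustrative assumptions.

\begin{verbatim}
import torch

def effective_rank(x: torch.Tensor, rtol: float = 1e-5) -> int:
    # Count singular values above rtol times the largest one.
    s = torch.linalg.svdvals(x)
    return int((s > rtol * s.max()).sum())

torch.manual_seed(0)
tokens, dim, depth = 32, 64, 30
x = torch.randn(tokens, dim)
print("layer 0: effective rank =", effective_rank(x))

for layer in range(1, depth + 1):
    # Fresh row-stochastic "attention" per layer; value/output projections,
    # skip connections, and MLPs are deliberately omitted.
    attn = torch.softmax(torch.randn(tokens, tokens), dim=-1)
    x = attn @ x
    if layer % 10 == 0:
        print(f"layer {layer}: effective rank =", effective_rank(x))
\end{verbatim}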
"<|reference_start|> DRONE: Data-aware Low-rank Compression for Large NLP Models: The representations learned by large-scale NLP models such as BERT have been widely used in various tasks. However, the increasing model size of the pre-trained models also brings efficiency challenges, including inference speed and model size when deploying models on mobile devices. Specifically, most operations in BERT consist of matrix multiplications. These matrices are not low-rank and thus canonical matrix decompositions do not lead to efficient approximations. In this paper, we observe that the learned representation of each layer lies in a low-dimensional space. Based on this observation, we propose DRONE ( d ata-awa r e l o w-ra n k compr e ssion), a provably optimal low-rank decomposition of weight matrices, which has a simple closed form solution that can be efficiently computed. DRONE can be applied to both fully-connected and self-attention layers appearing in the BERT model. In addition to compressing standard models, our method can also be used on distilled BERT models to further improve the compression rate. Experimental results show that DRONE is able to improve both model size and inference speed with limited loss in accuracy. Specifically, DRONE alone achieves 1.92x speedup on the MRPC task with only 1.5 % loss in accuracy, and when DRONE is combined with distillation, it further achieves over 12.3x speedup on various natural language inference tasks. <|reference_end|>",
"<|reference_start|> GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking: Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses. As a case study, a state-of-the-art neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves state-of- the-art performance on the One-Billion-Word (OBW) dataset with around 800k vocabulary, and its word embedding and softmax matrices use more than 6GBytes space, and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, our method can achieve 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity. <|reference_end|>",
"<|reference_start|> Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth: Attention-based architectures have become ubiquitous in machine learning, yet our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms, each involving the operation of a sequence of attention heads across layers. Using this decomposition, we prove that self-attention possesses a strong inductive bias towards \"token uniformity\". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. On the other hand, skip connections and MLPs stop the output from degeneration. Our experiments verify the identified convergence phenomena on different variants of standard transformer architectures. <|reference_end|>",
"<|reference_start|> Understanding Batch Normalization: Batch normalization (BN) is a technique to normalize activations in intermediate layers of deep neural networks. Its tendency to improve accuracy and speed up training have established BN as a favorite technique in deep learning. Yet, despite its enormous success, there remains little consensus on the exact reason and mechanism behind these improvements. In this paper we take a step towards a better understanding of BN, following an empirical approach. We conduct several experiments, and show that BN primarily enables training with larger learning rates, which is the cause for faster convergence and better generalization. For networks without BN we demonstrate how large gradient updates can result in diverging loss and activations growing uncontrollably with network depth, which limits possible learning rates. BN avoids this problem by constantly correcting activations to be zero-mean and of unit standard deviation, which enables larger gradient steps, yields faster convergence and may help bypass sharp local minima. We further show various ways in which gradients and activations of deep unnormalized networks are ill-behaved. We contrast our results against recent findings in random matrix theory, shedding new light on classical initialization schemes and their consequences. <|reference_end|>"
] | [
10,
23,
27,
29
] | {"<|cite_1|>": "ss-1978783", "<|multi_cite_2_1|>": "ss-1279695", "<|multi_cite_2_2|>": "arxiv-165028", "<|multi_cite_2_3|>": "ss-1175757", "<|multi_cite_2_4|>": "ss-1289999", "<|multi_cite_2_5|>": "ss-1687566", "<|multi_cite_3_1|>": "arxiv-250018", "<|multi_cite_3_2|>": "ss-1034343", "<|multi_cite_3_3|>": "ss-1307852", "<|multi_cite_3_4|>": "arxiv-430820", "<|multi_cite_3_5|>": "ss-840230", "<|multi_cite_4_1|>": "ss-2587564", "<|multi_cite_4_2|>": "ss-1306664", "<|multi_cite_4_3|>": "ss-1996545", "<|multi_cite_4_4|>": "ss-2587565", "<|multi_cite_4_5|>": "ss-1015222", "<|multi_cite_4_6|>": "ss-910788", "<|multi_cite_5_1|>": "ss-2587566", "<|multi_cite_5_2|>": "ss-1006196", "<|multi_cite_5_3|>": "arxiv-110210", "<|multi_cite_5_4|>": "ss-1009141", "<|multi_cite_5_5|>": "ss-910788", "<|multi_cite_5_6|>": "ss-1007868", "<|multi_cite_6_1|>": "arxiv-162936", "<|multi_cite_6_2|>": "arxiv-346844", "<|multi_cite_6_3|>": "ss-1360554", "<|multi_cite_6_4|>": "arxiv-391593", "<|cite_7|>": "arxiv-325412", "<|multi_cite_8_1|>": "ss-1347150", "<|multi_cite_8_2|>": "arxiv-161526", "<|cite_9|>": "arxiv-325412", "<|cite_10|>": "ss-1347150", "<|cite_11|>": "arxiv-174771", "<|cite_12|>": "arxiv-207237"} |
2306.04823 | <|paper_start|> Title: Data Augmentation for Improving Tail-traffic Robustness in Skill-routing for Dialogue Systems
Abstract: Data Augmentation for Improving Tail-traffic Robustness in Skill-routing for Dialogue Systems: Large-scale conversational systems typically rely on a skill-routing component to route a user request to an appropriate skill and interpretation to serve the request. In such a system, the agent is responsible for serving thousands of skills and interpretations which create a long-tail distribution due to the natural frequency of requests. For example, the samples related to playing music might be a thousand times more frequent than those asking for theatre show times. Moreover, inputs used for ML-based skill routing are often a heterogeneous mix of strings, embedding vectors, categorical, and scalar features, which makes employing augmentation-based long-tail learning approaches challenging. To improve the skill-routing robustness, we propose an augmentation of heterogeneous skill-routing data and training targeted for robust operation in long-tail data regimes. We explore a variety of conditional encoder-decoder generative frameworks to perturb original data fields and create synthetic training data. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments using real-world data from a commercial conversational system. Based on the experimental results, the proposed approach improves more than 80% (51 out of 63) of the intents with fewer than 10K traffic instances in the skill-routing replication task.
Introduction
Recent large-scale conversational systems such as Amazon Alexa, Apple Siri, Google Assistant, and Microsoft Cortana have shown great promise toward natural human-machine interactions <|cite_start|> (Reference: The technology behind personal digital assistants: an overview of the system architecture and key components: We have long envisioned that one day computers will understand natural language and anticipate what we need, when and where we need it, and proactively complete tasks on our behalf. As computers get smaller and more pervasive, how humans interact with them is becoming a crucial issue. Despite numerous attempts over the past 30 years to make language understanding (LU) an effective and robust natural user interface for computer interaction, success has been limited and scoped to applications that were not particularly central to everyday use. However, speech recognition and machine learning have continued to be refined, and structured data served by applications and content providers has emerged. These advances, along with increased computational power, have broadened the application of natural LU to a wide spectrum of everyday tasks that are central to a user's productivity. We believe that as computers become smaller and more ubiquitous [e.g., wearables and Internet of Things (IoT)], and the number of applications increases, both system-initiated and user-initiated task completion across various applications and web services will become indispensable for personal life management and work productivity. In this article, we give an overview of personal digital assistants (PDAs); describe the system architecture, key components, and technology behind them; and discuss their future potential to fully redefine human–computer interaction.) <|cite_end|>. Such systems often involve multiple ML-based components to fulfill user requests.
These components include Automated Speech Recognition (ASR) to transcribe the request and Natural Language Understanding (NLU) to assign a user's utterance to a set of potential interpretations, i.e., domains, intents, and parsed sentence entities.
Then, based on the NLU interpretations and other contextual signals (e.g., device type), a skill-routing component selects the best NLU interpretation and routes the request to an appropriate skill.
Self-learning based on customer satisfaction metrics is the state-of-the-art method for the skill routing problem. Typically, the skill routing problem is cast as a contextual bandit to optimize a reward signal generated by ML-based customer satisfaction estimators. While such a self-learning approach is promising in terms of scalability, in a commercial system with thousands of skills/intents creating a long-tail distribution, and given the disparities in the estimation quality of customer satisfaction signals across traffic segments, it is often challenging to maintain routing quality for the entire traffic by relying solely on the bandit learning objective. To address such issues, current self-learning methods rely on replication objectives to ensure policy robustness across off-policy bandit updates <|cite_start|> (Reference: Scalable and Robust Self-Learning for Skill Routing in Large-Scale Conversational AI Systems: Skill routing is an important component in large-scale conversational systems. In contrast to traditional rule-based skill routing, state-of-the-art systems use a model-based approach to enable natural conversations. To provide supervision signal required to train such models, ideas such as human annotation, replication of a rule-based system, relabeling based on user paraphrases, and bandit-based learning were suggested. However, these approaches: (a) do not scale in terms of the number of skills and skill on-boarding, (b) require a very costly expert annotation/rule-design, (c) introduce risks in the user experience with each model update. In this paper, we present a scalable self-learning approach to explore routing alternatives without causing abrupt policy changes that break the user experience, learn from the user interaction, and incrementally improve the routing via frequent model refreshes. To enable such robust frequent model updates, we suggest a simple and effective approach that ensures controlled policy updates for individual domains, followed by an off-policy evaluation for making deployment decisions without any need for lengthy A/B experimentation. We conduct various offline and online A/B experiments on a commercial large-scale conversational system to demonstrate the effectiveness of the proposed method in real-world production settings.) <|cite_end|> <|cite_start|> (Reference: Constrained Policy Optimization for Controlled Self-Learning in Conversational AI Systems: Recently, self-learning methods based on user satisfaction metrics and contextual bandits have shown promising results to enable consistent improvements in conversational AI systems. However, directly targeting such metrics by off-policy bandit learning objectives often increases the risk of making abrupt policy changes that break the current user experience. In this study, we introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints. For example, we may want to ensure fewer policy deviations in business-critical domains such as shopping, while allocating more exploration budget to domains such as music. Furthermore, we present a novel meta-gradient learning approach that is scalable and practical to address this problem. The proposed method adjusts constraint violation penalty terms adaptively through a meta objective that encourages balanced constraint satisfaction across domains. We conduct extensive experiments using data from a real-world conversational AI on a set of realistic constraint benchmarks.
Based on the experimental results, we demonstrate that the proposed approach is capable of achieving the best balance between the policy value and constraint satisfaction rate.) <|cite_end|>.
In this work, we attempt to enhance the robustness of skill routing systems by augmenting the low-appearance domain and intent data subsets.
Typically, for a given request, we are given a set of routing candidates represented as hypotheses, each composed of an ASR-transcribed text as well as other categorical data fields such as NLU interpretations, device type, device status, and the proposed skill.
A model-based skill routing system is trained by replicating the ideal skill-routing decisions represented through a (large) training set of such requests and their contextual signals coupled with the correct corresponding skill for routing.
However, in practice such datasets exhibit imbalanced traffic between common requests and tail requests, leading to low replication accuracy and robustness in the tail domains. Therefore, we are interested in augmenting training data for such low-count segments.
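To make the shape of the heterogeneous hypothesis structure described above concrete, the following minimal sketch (illustrative only; all field names and types are assumptions rather than the production schema) shows how one routing hypothesis might be represented:
\begin{verbatim}
# Hypothetical record for one skill-routing hypothesis (names are assumed).
from dataclasses import dataclass
from typing import List

@dataclass
class RoutingHypothesis:
    asr_text: str                      # ASR-transcribed utterance
    nlu_domain: str                    # NLU interpretation: domain
    nlu_intent: str                    # NLU interpretation: intent
    device_type: str                   # categorical context signal
    device_status: str                 # categorical context signal
    proposed_skill: str                # candidate skill for routing
    utterance_embedding: List[float]   # dense embedding feature
\end{verbatim}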
However augmenting such \emph{heterogeneous} data is non-trivial. Most natural language processing tasks with only text inputs focus on pure text augmentation by perturbing original texts at token spans <|cite_start|> (Reference: Simple Data Augmentation for Multilingual NLU in Task Oriented Dialogue Systems: Data augmentation has shown potential in alleviating data scarcity for Natural Language Understanding (e.g. slot filling and intent classification) in task-oriented dialogue systems. As prior work has been mostly experimented on English datasets, we focus on five different languages, and consider a setting where limited data are available. We investigate the effectiveness of non-gradient based augmentation methods, involving simple text span substitutions and syntactic manipulations. Our experiments show that (i) augmentation is effective in all cases, particularly for slot filling; and (ii) it is beneficial for a joint intent-slot model based on multilingual BERT, both for limited data settings and when full training data is used.) <|cite_end|> <|cite_start|> (Reference: Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning: We introduce EfficientCL, a memory-efficient continual pretraining method that applies contrastive learning with novel data augmentation and curriculum learning. For data augmentation, we stack two types of operation sequentially: cutoff and PCA jittering. While pretraining steps proceed, we apply curriculum learning by incrementing the augmentation degree for each difficulty step. After data augmentation is finished, contrastive learning is applied on projected embeddings of original and augmented examples. When finetuned on GLUE benchmark, our model outperforms baseline models, especially for sentence-level tasks. Additionally, this improvement is capable with only 70% of computational memory compared to the baseline model.) <|cite_end|> <|cite_start|> (Reference: Conditional BERT Contextual Augmentation: We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by language model. BERT demonstrates that a deep bidirectional language model is more powerful than either an unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\footnote{The term "conditional masked language model" appeared once in original BERT paper, which indicates context-conditional, is equivalent to term "masked language model". In our paper, "conditional masked language model" indicates we apply extra label-conditional constraint to the "masked language model".} task. The well trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six various different text classification tasks show that our method can be easily applied to both convolutional or recurrent neural networks classifier to obtain obvious improvement.) <|cite_end|> or entire sentences <|cite_start|> (Reference: Improving Robustness of Task Oriented Dialog Systems: Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). 
Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results.) <|cite_end|> <|cite_start|> (Reference: GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation: Practical dialogue systems require robust methods of detecting out-of-scope (OOS) utterances to avoid conversational breakdowns and related failure modes. Directly training a model with labeled OOS examples yields reasonable performance, but obtaining such data is a resource-intensive process. To tackle this limited-data problem, previous methods focus on better modeling the distribution of in-scope (INS) examples. We introduce GOLD as an orthogonal technique that augments existing data to train better OOS detectors operating in low-data regimes. GOLD generates pseudo-labeled candidates using samples from an auxiliary dataset and keeps only the most beneficial candidates for training through a novel filtering mechanism. In experiments across three target benchmarks, the top GOLD model outperforms all existing methods on all key metrics, achieving relative gains of 52.4%, 48.9% and 50.3% against median baseline performance. We also analyze the unique properties of OOS data to identify key factors for optimally applying our proposed method.) <|cite_end|>. Several works also leverage paraphrasing techniques <|cite_start|> (Reference: Improving Robustness of Task Oriented Dialog Systems: Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. 
Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results.) <|cite_end|> <|cite_start|> (Reference: Paraphrase generation for semi-supervised learning in nlu: Semi-supervised learning is an efficient way to improve performance for natural language processing systems. In this work, we propose Para-SSL, a scheme to generate candidate utterances using paraphrasing and methods from semi-supervised learning. In order to perform paraphrase generation in the context of a dialog system, we automatically extract paraphrase pairs to create a paraphrase corpus. Using this data, we build a paraphrase generation system and perform one-to-many generation, followed by a validation step to select only the utterances with good quality. The paraphrase-based semi-supervised learning is applied to five functionalities in a natural language understanding system. Our proposed method for semi-supervised learning using paraphrase generation does not require user utterances and can be applied prior to releasing a new functionality to a system. Experiments show that we can achieve up to 19% of relative slot error reduction without an access to user utterances, and up to 35% when leveraging live traffic utterances.) <|cite_end|> <|cite_start|> (Reference: Paraphrase Augmented Task-Oriented Dialog Generation: Neural generative models have achieved promising performance on dialog generation tasks if given a huge data set. However, the lack of high-quality dialog data and the expensive data annotation process greatly limit their application in real-world settings. We propose a paraphrase augmented response generation (PARG) framework that jointly trains a paraphrase model and a response generation model to improve the dialog generation performance. We also design a method to automatically construct paraphrase training data set based on dialog state and dialog act labels. PARG is applicable to various dialog generation models, such as TSCP (Lei et al., 2018) and DAMD (Zhang et al., 2019). Experimental results show that the proposed framework improves these state-of-the-art dialog models further on CamRest676 and MultiWOZ. PARG also significantly outperforms other data augmentation methods in dialog generation tasks, especially under low resource settings.) <|cite_end|> to enhance the robustness of task-oriented dialog systems.
However, preparing such paraphrasing datasets manually requires intensive labor, especially for tail requests.
Conditional generative approaches <|cite_start|> (Reference: Data Augmentation for Spoken Language Understanding via Joint Variational Generation: Data scarcity is one of the main obstacles of domain adaptation in spoken language understanding (SLU) due to the high cost of creating manually tagged SLU datasets. Recent works in neural text generative models, particularly latent variable models such as variational autoencoder (VAE), have shown promising results in regards to generating plausible and natural sentences. In this paper, we propose a novel generative architecture which leverages the generative power of latent variable models to jointly synthesize fully annotated utterances. Our experiments show that existing SLU models trained on the additional synthetic examples achieve performance gains. Our approach not only helps alleviate the data scarcity issue in the SLU task for many datasets but also indiscriminately improves language understanding performances for various SLU models, supported by extensive experiments and rigorous statistical testing.) <|cite_end|> <|cite_start|> (Reference: Data Augmentation for Spoken Language Understanding via Pretrained Language Models: The training of spoken language understanding (SLU) models often faces the problem of data scarcity. In this paper, we put forward a data augmentation method using pretrained language models to boost the variability and accuracy of generated utterances. Furthermore, we investigate and propose solutions to two previously overlooked semi-supervised learning scenarios of data scarcity in SLU: i) Rich-in-Ontology: ontology information with numerous valid dialogue acts is given; ii) Rich-in-Utterance: a large number of unlabelled utterances are available. Empirical results show that our method can produce synthetic training data that boosts the performance of language understanding models in various scenarios.) <|cite_end|> <|cite_start|> (Reference: Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders: Conditional Text Generation has drawn much attention as a topic of Natural Language Generation (NLG) which provides the possibility for humans to control the properties of generated contents. Current conditional generation models cannot handle emerging conditions due to their joint end-to-end learning fashion. When a new condition added, these techniques require full retraining. In this paper, we present a new framework named Pre-train and Plug-in Variational Auto-Encoder (PPVAE) towards flexible conditional text generation. PPVAE decouples the text generation module from the condition representation module to allow "one-to-many" conditional generation. When a fresh condition emerges, only a lightweight network needs to be trained and works as a plug-in for PPVAE, which is efficient and desirable for real-world applications. Extensive experiments demonstrate the superiority of PPVAE against the existing alternatives with better conditionality and diversity but less training effort.) <|cite_end|> <|cite_start|> (Reference: CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot Intent Detection: In this paper, we formulate a more realistic and difficult problem setup for the intent detection task in natural language understanding, namely Generalized Few-Shot Intent Detection (GFSID). GFSID aims to discriminate a joint label space consisting of both existing intents which have enough labeled data and novel intents which only have a few examples for each class. 
To approach this problem, we propose a novel model, Conditional Text Generation with BERT (CG-BERT). CG-BERT effectively leverages a large pre-trained language model to generate text conditioned on the intent label. By modeling the utterance distribution with variational inference, CG-BERT can generate diverse utterances for the novel intents even with only a few utterances available. Experimental results show that CG-BERT achieves state-of-the-art performance on the GFSID task with 1-shot and 5-shot settings on two real-world datasets.) <|cite_end|> <|cite_start|> (Reference: Controlled Text Generation for Data Augmentation in Intelligent Artificial Agents: Data availability is a bottleneck during early stages of development of new capabilities for intelligent artificial agents. We investigate the use of text generation techniques to augment the training data of a popular commercial artificial agent across categories of functionality, with the goal of faster development of new functionality. We explore a variety of encoder-decoder generative models for synthetic training data generation and propose using conditional variational auto-encoders. Our approach requires only direct optimization, works well with limited data and significantly outperforms the previous controlled text generation techniques. Further, the generated data are used as additional training samples in an extrinsic intent classification task, leading to improved performance by up to 5\% absolute f-score in low-resource cases, validating the usefulness of our approach.) <|cite_end|> <|cite_start|> (Reference: Structured Attention for Unsupervised Dialogue Structure Induction: Inducing a meaningful structural representation from one or a set of dialogues is a crucial but challenging task in computational linguistics. Advancement made in this area is critical for dialogue system design and discourse analysis. It can also be extended to solve grammatical inference. In this work, we propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion. Compared to a vanilla VRNN, structured attention enables a model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias. Experiments show that on two-party dialogue datasets, VRNN with structured attention learns semantic structures that are similar to templates used to generate this dialogue corpus. While on multi-party dialogue datasets, our model learns an interactive structure demonstrating its capability of distinguishing speakers or addresses, automatically disentangling dialogues without explicit human annotation.) <|cite_end|> instead provide flexible solutions of modeling text distribution that introduces variability yet preserves top-level semantics, which is ideal for labor-free data augmentation.
Nevertheless, modeling such distributions remains challenging and mostly unexplored in the research community, especially in the context of dialogue systems. In this paper, we explore the idea of generative data augmentation based on variational autoencoders (VAEs) and transformer architectures to generate samples from the conditional distribution of skill-routing hypotheses.
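As a minimal illustration of this idea (a sketch only, not the system's actual architecture; the feature dimensions, module names, and the choice of PyTorch are assumptions), a conditional VAE can encode a heterogeneous hypothesis -- a text embedding together with a categorical field -- conditioned on the intent label, so that sampling the latent variable yields perturbed synthetic hypotheses for tail intents:
\begin{verbatim}
# Illustrative conditional VAE over heterogeneous features (hypothetical sizes).
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, text_dim=128, n_device_types=16, n_intents=64,
                 latent_dim=32):
        super().__init__()
        self.device_emb = nn.Embedding(n_device_types, 16)  # categorical field
        self.intent_emb = nn.Embedding(n_intents, 16)       # conditioning label
        self.encoder = nn.Sequential(nn.Linear(text_dim + 32, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim + 16, 128), nn.ReLU())
        self.text_head = nn.Linear(128, text_dim)          # reconstruct embedding
        self.device_head = nn.Linear(128, n_device_types)  # reconstruct category

    def forward(self, text_vec, device_id, intent_id):
        cond = self.intent_emb(intent_id)
        x = torch.cat([text_vec, self.device_emb(device_id), cond], -1)
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        d = self.decoder(torch.cat([z, cond], -1))
        return self.text_head(d), self.device_head(d), mu, logvar

    @torch.no_grad()
    def sample(self, intent_id, n=1):
        cond = self.intent_emb(torch.tensor([intent_id])).expand(n, -1)
        z = torch.randn(n, self.mu.out_features)
        d = self.decoder(torch.cat([z, cond], -1))
        return self.text_head(d), self.device_head(d).argmax(-1)
\end{verbatim}
Training such a model would combine reconstruction losses on the text embedding and the categorical fields with the usual KL term; hypotheses sampled for low-traffic intents can then be mixed into the replication training set.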
The main contributions of this paper are as follows:
\vspace{-7pt}
\begin{enumerate}[leftmargin=*]
\setlength\itemsep{-0.5em}
\item Introducing a data augmentation framework for generating heterogeneous features available in conversational assistant systems.
\item Enriching the training set and enhancing the robustness of skill routing models by leveraging the data augmenters applied to perturbed samples.
\item Conducting extensive experiments using data from a real-world conversational system to demonstrate the impact of the proposed methods on various metrics for routing robustness and generation quality.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/base_model.png}
\caption{An overview of the base skill-routing system.
}
\label{fig:base_model}
\end{figure} <|paper_end|> | [
"<|reference_start|> Simple Data Augmentation for Multilingual NLU in Task Oriented Dialogue Systems: Data augmentation has shown potential in alleviating data scarcity for Natural Language Understanding (e.g. slot filling and intent classification) in task-oriented dialogue systems. As prior work has been mostly experimented on English datasets, we focus on five different languages, and consider a setting where limited data are available. We investigate the effectiveness of non-gradient based augmentation methods, involving simple text span substitutions and syntactic manipulations. Our experiments show that (i) augmentation is effective in all cases, particularly for slot filling; and (ii) it is beneficial for a joint intent-slot model based on multilingual BERT, both for limited data settings and when full training data is used. <|reference_end|>",
"<|reference_start|> Improving Robustness of Task Oriented Dialog Systems: Task oriented language understanding in dialog systems is often modeled using intents (task of a query) and slots (parameters for that task). Intent detection and slot tagging are, in turn, modeled using sentence classification and word tagging techniques respectively. Similar to adversarial attack problems with computer vision models discussed in existing literature, these intent-slot tagging models are often over-sensitive to small variations in input -- predicting different and often incorrect labels when small changes are made to a query, thus reducing their accuracy and reliability. However, evaluating a model's robustness to these changes is harder for language since words are discrete and an automated change (e.g. adding `noise') to a query sometimes changes the meaning and thus labels of a query. In this paper, we first describe how to create an adversarial test set to measure the robustness of these models. Furthermore, we introduce and adapt adversarial training methods as well as data augmentation using back-translation to mitigate these issues. Our experiments show that both techniques improve the robustness of the system substantially and can be combined to yield the best results. <|reference_end|>",
"<|reference_start|> GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation: Practical dialogue systems require robust methods of detecting out-of-scope (OOS) utterances to avoid conversational breakdowns and related failure modes. Directly training a model with labeled OOS examples yields reasonable performance, but obtaining such data is a resource-intensive process. To tackle this limited-data problem, previous methods focus on better modeling the distribution of in-scope (INS) examples. We introduce GOLD as an orthogonal technique that augments existing data to train better OOS detectors operating in low-data regimes. GOLD generates pseudo-labeled candidates using samples from an auxiliary dataset and keeps only the most beneficial candidates for training through a novel filtering mechanism. In experiments across three target benchmarks, the top GOLD model outperforms all existing methods on all key metrics, achieving relative gains of 52.4%, 48.9% and 50.3% against median baseline performance. We also analyze the unique properties of OOS data to identify key factors for optimally applying our proposed method. <|reference_end|>",
"<|reference_start|> Data Augmentation for Spoken Language Understanding via Joint Variational Generation: Data scarcity is one of the main obstacles of domain adaptation in spoken language understanding (SLU) due to the high cost of creating manually tagged SLU datasets. Recent works in neural text generative models, particularly latent variable models such as variational autoencoder (VAE), have shown promising results in regards to generating plausible and natural sentences. In this paper, we propose a novel generative architecture which leverages the generative power of latent variable models to jointly synthesize fully annotated utterances. Our experiments show that existing SLU models trained on the additional synthetic examples achieve performance gains. Our approach not only helps alleviate the data scarcity issue in the SLU task for many datasets but also indiscriminately improves language understanding performances for various SLU models, supported by extensive experiments and rigorous statistical testing. <|reference_end|>"
] | [
3,
6,
7,
11
] | {"<|cite_1|>": "ss-1374984", "<|multi_cite_2_1|>": "ss-2400213", "<|multi_cite_2_2|>": "arxiv-446971", "<|multi_cite_3_1|>": "ss-1343654", "<|multi_cite_3_2|>": "arxiv-366602", "<|multi_cite_3_3|>": "arxiv-184758", "<|multi_cite_4_1|>": "arxiv-233811", "<|multi_cite_4_2|>": "arxiv-365302", "<|multi_cite_5_1|>": "arxiv-233811", "<|multi_cite_5_2|>": "ss-1349464", "<|multi_cite_5_3|>": "arxiv-259631", "<|multi_cite_6_1|>": "arxiv-171696", "<|multi_cite_6_2|>": "arxiv-262237", "<|multi_cite_6_3|>": "arxiv-233349", "<|multi_cite_6_4|>": "arxiv-257407", "<|multi_cite_6_5|>": "arxiv-227744", "<|multi_cite_6_6|>": "arxiv-290573"} |
2310.13328 | <|paper_start|> Title: One-Phase Batch Update on Sparse Merkle Trees for Rollups
Abstract: One-Phase Batch Update on Sparse Merkle Trees for Rollups: A sparse Merkle tree is a Merkle tree with fixed height and indexed leaves given by a map from indices to leaf values. It allows for both efficient membership and non-membership proofs. It has been widely used as an authenticated data structure in various applications, such as layer-2 rollups for blockchains. zkSync Lite, a popular Ethereum layer-2 rollup solution, uses a sparse Merkle tree to represent the state of the layer-2 blockchain. The account information is recorded in the leaves of the tree. In this paper, we study the sparse Merkle tree algorithms presented in zkSync Lite, and propose an efficient batch update algorithm to calculate a new root hash given a list of account (leaf) operations. Using the construction in zkSync Lite as a benchmark, our algorithm 1) improves the account update time from $\mathcal{O}(\log n)$ to $\mathcal{O}(1)$ and 2) reduces the batch update cost by half using a one-pass traversal. Empirical analysis of real-world block data shows that our algorithm outperforms the benchmark by at most 14%.
Introduction
Recent advances in distributed ledger technology have introduced a new paradigm of applications called ``decentralised applications" (DApps) with new use cases in areas such as finance <|cite_start|> (Reference: Decentralising finance using decentralised blockchain oracles: The recent spread of COVID-19, stressed economies and government pumping money into the market has once again ignited the discussion on the need to have decentralised economies, the role of regulatory authorities and if bitcoin represents a true store of value. In this paper, we identify the need to alternate financial structure, discuss how blockchain and cryptocurrencies play a very important role in achieving it. Blockchain applications are heavily dependent on oracles for their interaction with outside world, we have discussed here the functioning of oracles and then finally presented a broad architecture that can be used to implement a majority of financial instruments on blockchain.) <|cite_end|> <|cite_start|> (Reference: Uniswap v3 Core: Uniswap v3 is a noncustodial automated market maker implemented for the Ethereum Virtual Machine. In comparison to earlier versions of the protocol, Uniswap v3 provides increased capital efficiency and fine-tuned control to liquidity providers, improves the accuracy and convenience of the price oracle, and has a more flexible fee structure.) <|cite_end|>, logistics <|cite_start|> (Reference: Blockchain technology implementation in logistics: This paper researches decentralized data storage represented by blockchain technology and the possibility of its development in sustainable logistics and supply chain management. Although the benefits of blockchain technology have been most widely researched in the financial sector, major challenges in logistics, such as order delay, damage to goods, errors, and multiple data entry can also be minimized by introducing blockchain technology. This paper presents a comprehensive review of the current and rising trends of blockchain technology usage in logistics and supply chain management.) <|cite_end|>, and Internet-of-Things <|cite_start|> (Reference: Blockchain and IoT Integration: A Systematic Survey: The Internet of Things (IoT) refers to the interconnection of smart devices to collect data and make intelligent decisions. However, a lack of intrinsic security measures makes IoT vulnerable to privacy and security threats. With its “security by design,” Blockchain (BC) can help in addressing major security requirements in IoT. BC capabilities like immutability, transparency, auditability, data encryption and operational resilience can help solve most architectural shortcomings of IoT. This article presents a comprehensive survey on BC and IoT integration. The objective of this paper is to analyze the current research trends on the usage of BC-related approaches and technologies in an IoT context. This paper presents the following novelties, with respect to related work: (i) it covers different application domains, organizing the available literature according to this categorization, (ii) it introduces two usage patterns, i.e., device manipulation and data management (open marketplace solution), and (iii) it reports on the development level of some of the presented solutions. We also analyze the main challenges faced by the research community in the smooth integration of BC and IoT, and point out the main open issues and future research directions. Last but not least, we also present a survey about novel uses of BC in the machine economy.)
<|cite_end|>. However, the increasing number of users and transactions on DApps has also exposed the key limitation of the scalability of their underlying public blockchain infrastructures <|cite_start|> (Reference: Systematic literature review of challenges in blockchain scalability: Blockchain technology is fast becoming the most transformative technology of recent times and has created hype and optimism, gaining much attention from the public and private sectors. It has been widely deployed in decentralized crypto currencies such as Bitcoin and Ethereum. Bitcoin is the success story of a public blockchain application that propelled intense research and development into blockchain technology. However, scalability remains a crucial challenge. Both Bitcoin and Ethereum are encountering low-efficiency issues with low throughput, high transaction latency, and huge energy consumption. The scalability issue in public Blockchains is hindering the provision of optimal solutions to businesses and industries. This paper presents a systematic literature review (SLR) on the public blockchain scalability issue and challenges. The scope of this SLR includes an in-depth investigation into the scalability problem of public blockchain, associated fundamental factors, and state-of-art solutions. This project managed to extract 121 primary papers from major scientific databases such as Scopus, IEEE explores, Science Direct, and Web of Science. The synthesis of these 121 articles revealed that scalability in public blockchain is not a singular term. A variety of factors are allied to it, with transaction throughput being the most discussed factor. In addition, other interdependent vita factors include storages, block size, number of nodes, energy consumption, latency, and cost. Generally, each term is somehow directly or indirectly reliant on the consensus model embraced by the blockchain nodes. It is also noticed that the contemporary available consensus models are not efficient in scalability and thus often fail to provide good QoS (throughput and latency) for practical industrial applications. Our findings exemplify that the Internet of Things (IoT) would be the leading application of blockchain in industries such as energy, finance, resource management, healthcare, education, and agriculture. These applications are, however, yet to achieve much-desired outcomes due to scalability issues. Moreover, Onchain and offchain are the two major categories of scalability solutions. Sagwit, block size expansion, sharding, and consensus mechanisms are examples of onchain solutions. Offchain, on the other hand, is a lighting network.) <|cite_end|>. 
Two of the largest public blockchains by market capitalisation\footnote{\url{https://coinmarketcap.com/} accessed on 23rd of August 2023.}, Bitcoin <|cite_start|> (Reference: Bitcoin: A Peer-to-Peer electronic cash system: Original author: Satoshi Nakamoto. Translation exclusively sponsored by Bitcoinblogger.com. Author email: [email protected] www.bitcoin.org [Abstract]: This paper proposes an electronic cash system implemented purely through peer-to-peer technology, which allows online payments to be initiated by one party and paid directly to another without going through any financial institution. Although digital signatures partially solve this problem, the system loses its value if a trusted third party is still required to prevent double-spending. We propose a solution that allows the cash system to operate in a peer-to-peer environment and prevents the double-spending problem. The network timestamps all transactions via hashing and merges them into an ever-extending, hash-based proof-of-work chain that serves as the transaction record; the record thus formed cannot be changed unless all of the proof-of-work is redone. The longest chain not only serves as proof of the observed sequence of events, but is also regarded as coming from the pool with the largest CPU computing power. As long as the majority of CPU computing power does not intend to cooperate to attack the network, honest nodes will generate the longest chain and outpace attackers. The system itself requires very little infrastructure. Messages are broadcast on a best-effort basis, and nodes can leave and rejoin the network at any time, accepting the longest proof-of-work chain as proof of the transactions that occurred while they were offline.) <|cite_end|> and Ethereum <|cite_start|> (Reference: ETHEREUM: A Secure Decentralised Generalised Transaction Ledger: The blockchain paradigm when coupled with cryptographically-secured transactions has demonstrated its utility through a number of projects, with Bitcoin being one of the most notable ones. Each such project can be seen as a simple application on a decentralised, but singleton, compute resource. We can call this paradigm a transactional singleton machine with shared-state. Ethereum implements this paradigm in a generalised manner. Furthermore it provides a plurality of such resources, each with a distinct state and operating code but able to interact through a message-passing framework with others. We discuss its design, implementation issues, the opportunities it provides and the future hurdles we envisage.) <|cite_end|>, can only process 7 and 29 transactions per second (TPS), which is far from their centralised payment provider counterpart, Visa, which claims to have the capacity to process 65,000 TPS <|cite_start|> (Reference: Visa: With the rapid development of Internet and communication technologies, consumers' spending habits and payment methods have undergone enormous changes. Gong Liang, Visa's deputy general manager for China and head of innovation products, told reporters that the entire payment industry is undergoing rapid transformation, and that more advanced platforms and services are required to keep pace with changing demand.) <|cite_end|>.
There are many ways to improve blockchain scalability. They can be broadly grouped into two categories: on-chain and off-chain. On-chain research involves changing the underlying blockchain infrastructure to achieve better scalability. Examples of on-chain research efforts include developing efficient consensus algorithms <|cite_start|> (Reference: Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol: ) <|cite_end|> <|cite_start|> (Reference: Scalable and Probabilistic Leaderless BFT Consensus through Metastability: This paper introduces a family of leaderless Byzantine fault tolerance protocols, built around a metastable mechanism via network subsampling. These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries while their concurrent and leaderless nature enables them to achieve high throughput and scalability. Unlike blockchains that rely on proof-of-work, they are quiescent and green. Unlike traditional consensus protocols where one or more nodes typically process linear bits in the number of total nodes per decision, no node processes more than logarithmic bits. It does not require accurate knowledge of all participants and exposes new possible tradeoffs and improvements in safety and liveness for building consensus protocols. The paper describes the Snow protocol family, analyzes its guarantees, and describes how it can be used to construct the core of an internet-scale electronic payment system called Avalanche, which is evaluated in a large scale deployment. Experiments demonstrate that the system can achieve high throughput (3400 tps), provide low confirmation latency (1.35 sec), and scale well compared to existing systems that deliver similar functionality. For our implementation and setup, the bottleneck of the system is in transaction verification.) <|cite_end|>, sharding <|cite_start|> (Reference: {A Secure Sharding Protocol for Open Blockchains: Cryptocurrencies, such as Bitcoin and 250 similar alt-coins, embody at their core a blockchain protocol --- a mechanism for a distributed network of computational nodes to periodically agree on a set of new transactions. Designing a secure blockchain protocol relies on an open challenge in security, that of designing a highly-scalable agreement protocol open to manipulation by byzantine or arbitrarily malicious nodes. Bitcoin's blockchain agreement protocol exhibits security, but does not scale: it processes 3--7 transactions per second at present, irrespective of the available computation capacity at hand. In this paper, we propose a new distributed agreement protocol for permission-less blockchains called ELASTICO. ELASTICO scales transaction rates almost linearly with available computation for mining: the more the computation power in the network, the higher the number of transaction blocks selected per unit time. ELASTICO is efficient in its network messages and tolerates byzantine adversaries of up to one-fourth of the total computational power. Technically, ELASTICO uniformly partitions or parallelizes the mining network (securely) into smaller committees, each of which processes a disjoint set of transactions (or "shards"). While sharding is common in non-byzantine settings, ELASTICO is the first candidate for a secure sharding protocol with presence of byzantine adversaries. Our scalability experiments on Amazon EC2 with up to $1, 600$ nodes confirm ELASTICO's theoretical scaling properties.) 
<|cite_end|> <|cite_start|> (Reference: RapidChain: Scaling blockchain via full sharding: A major approach to overcoming the performance and scalability limitations of current blockchain protocols is to use sharding which is to split the overheads of processing transactions among multiple, smaller groups of nodes. These groups work in parallel to maximize performance while requiring significantly smaller communication, computation, and storage per node, allowing the system to scale to large networks. However, existing sharding-based blockchain protocols still require a linear amount of communication (in the number of participants) per transaction, and hence, attain only partially the potential benefits of sharding. We show that this introduces a major bottleneck to the throughput and latency of these protocols. Aside from the limited scalability, these protocols achieve weak security guarantees due to either a small fault resiliency (e.g., 1/8 and 1/4) or high failure probability, or they rely on strong assumptions (e.g., trusted setup) that limit their applicability to mainstream payment systems. We propose RapidChain, the first sharding-based public blockchain protocol that is resilient to Byzantine faults from up to a 1/3 fraction of its participants, and achieves complete sharding of the communication, computation, and storage overhead of processing transactions without assuming any trusted setup. RapidChain employs an optimal intra-committee consensus algorithm that can achieve very high throughputs via block pipelining, a novel gossiping protocol for large blocks, and a provably-secure reconfiguration mechanism to ensure robustness. Using an efficient cross-shard transaction verification technique, our protocol avoids gossiping transactions to the entire network. Our empirical evaluations suggest that RapidChain can process (and confirm) more than 7,300 tx/sec with an expected confirmation latency of roughly 8.7 seconds in a network of 4,000 nodes with an overwhelming time-to-failure of more than 4,500 years.) <|cite_end|>, and changing block configurations <|cite_start|> (Reference: The Limits to Blockchain? Scaling vs. Decentralization: This discussion paper examines a possible limitation to the advancement of blockchain: the intrinsic tradeoff between scaling to a larger size, and the need to maintain a decentralized and distributed architecture. The scaling vs. decentralization trade-off, the paper argues, may impose a long-term limitation on the growth of blockchain-based technologies including cryptocurrencies.) <|cite_end|>. On the other hand, off-chain research efforts involve changing how we interact with the blockchain (L1). Instead of performing all activities on-chain, we offload the computation- and storage-intensive activities off-chain. Some existing solutions include State Channels <|cite_start|> (Reference: Blockchain state channels: A state of the art: Blockchain technology has been quite popular during recent years and it finally seems to present a significant rise with respect to its use for real-world applications. This advancement has brought up a critical challenge that public blockchain systems face, which is scalability. Most of the currently deployed systems fail to cope with increasing usage. In order to provide the promised security guarantees, large delays and high usage fees are imposed for submitted transactions and thus widespread adoption of the technology is hindered. 
A number of different approaches have been proposed to increase the capacity of blockchain systems with respect to processing transactions. The present survey focuses on one of the most popular ones, that of state channels, and to the extent of our knowledge constitutes the first collective survey of research in this field. An extensive analysis of relevant publications is conducted and a general view on the domain is provided. We have identified the limitations discussed through all relevant research efforts along with the various features that differentiate proposed designs. A comparison between retrieved papers is carried out on the basis of those limitations and features. Finally, future research directions are analysed while the role of state channels in the general public blockchain ecosystem is also discussed.) <|cite_end|> , Plasma <|cite_start|> (Reference: {Plasma: Scalable Autonomous Smart Contracts: Plasma is a proposed framework for incentivized and enforced execution of smart contracts which is scalable to a significant amount of state updates per second (potentially billions) enabling the blockchain to be able to represent a significant amount of decentralized financial applications worldwide. These smart contracts are incentivized to continue operation autonomously via network transaction fees, which is ultimately reliant upon the underlying blockchain (e.g. Ethereum) to enforce transactional state transitions. We propose a method for decentralized autonomous applications to scale to process not only financial activity, but also construct economic incentives for globally persistent data services, which may produce an alternative to centralized server farms. Plasma is composed of two key parts of the design: Reframing all blockchain computation into a set of MapReduce functions, and an optional method to do Proof-of-Stake token bonding on top of existing blockchains with the understanding that the Nakamoto Consensus incentives discourage block withholding. This construction is achieved by composing smart contracts on the main blockchain using fraud proofs whereby state transitions can be enforced on a parent blockchain. We compose blockchains into a tree hierarchy, and treat each as an individual branch blockchain with enforced blockchain history and MapReducible computation committed into merkle proofs. By framing one’s ledger entry into a child blockchain which is enforced by the parent chain, one can enable incredible scale with minimized trust (presuming root blockchain availability and correctness). The greatest complexity around global enforcement of non-global data revolves around data availability and block withholding attacks, Plasma has mitigations for this issue by allowing for exiting faulty chains while also creating mechanisms to incentivize and enforce continued correct execution of data. As only merkleized commitments are broadcast periodically to the root blockchain (i.e. Ethereum) during non-faulty states, this can allow for incredibly scalable, low cost transactions and computation. Plasma enables persistently operating decentralized applications at high scale.) <|cite_end|>, and rollups. These scaling solutions are known as ``Layer-2" (L2) solutions.
The recent development of L2 rollups such as zkSync Lite, Aztec Network, Loopring, and Immutable X has shown promising results in increasing transaction throughput on Ethereum. Rollups execute transactions off-chain and bundle the results of many L2 transactions into one L1 transaction. L1 cannot interpret L2 data; it only acts as a \textit{data availability layer} for L2 activity. Such techniques reduce the computation performed on L1 and massively decrease transaction fees, as one L1 transaction fee is shared amongst all transactions bundled within it.
zkSync Lite, a widely used and well-documented zero-knowledge rollup technique, has achieved a maximum observed TPS of 110, making it almost 6 times faster than Ethereum. Following the success of rollups, Ethereum has introduced a rollup-centric roadmap specifically directing future scaling efforts on Ethereum to maximise the use of L2 rollups.
In an L2 rollup, there are generally \textit{operators} keeping the L2 state, processing L2 transactions and communicating with L1 through a smart contract. \textit{Users} have \textit{accounts} and \textit{balances} of tokens. L2 users submit signed transactions to the operators, who then collect those transactions and form L2 blocks.
Sparse Merkle trees (SMTs) are widely used as authenticated data structures to keep state information in rollups because of their simplicity and effectiveness. The leaves of an SMT represent account-related information, such as balances and nonces. The root hash of an SMT is a succinct representation of the state of all account balances. Given a block of L2 transactions, the operators will calculate a new root hash based on the result of these transactions. Generally, the process of finding the root hash involves two parts: first, the account leaves need to be updated. Then, the new root hash is calculated by updating the paths from the updated leaves to the root.
The current implementation of this in zkSync Lite is to first go through the transactions in a block sequentially to update the leaves individually and then calculate the root hash. This solution involves traversing the SMTs twice for every updated leaf, which is inefficient. We denote this as a two-phase algorithm.
To build on the above solution, this paper introduces the notion of\\ \texttt{BatchUpdate} on SMTs. The action of \textit{batching} is defined as processing transactions in a block all at once instead of individually. All accounts involved in transactions in a block are updated together in a batch. Instead of traversing the SMTs twice, we propose a new algorithm to update the leaves and intermediate hashes at the same time by traversing the SMTs only once. We name this approach the one-phase batch update (OBU).
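To make the contrast with the two-phase approach concrete, the following is a minimal Python sketch of what a one-phase, bottom-up batch update could look like on an array-based Merkle tree. The hash helper, the node layout and all identifiers here are illustrative assumptions only; the paper's actual \texttt{SMT.BatchUpdate} algorithm is presented in Section \ref{sec:obu}.
\begin{verbatim}
import hashlib

def H(data: str) -> str:
    # Stand-in hash; a production system would use a circuit-friendly hash.
    return hashlib.sha256(data.encode()).hexdigest()

def one_phase_batch_update(nodes, N, updates):
    """Apply a batch of leaf updates and re-hash each affected node once.

    nodes   : list of length 2**(N+1); node j has children 2j and 2j+1,
              leaves occupy indices 2**N .. 2**(N+1)-1 (index 0 unused).
    updates : dict mapping a leaf key k (0 <= k < 2**N) to its new value.
    """
    dirty = set()
    for k, value in updates.items():          # touch each updated leaf once
        idx = (1 << N) + k
        nodes[idx] = H(value)
        dirty.add(idx // 2)
    while dirty:                              # move up one level per pass
        parents = set()
        for j in dirty:                       # re-hash each dirty node once
            nodes[j] = H(nodes[2 * j] + nodes[2 * j + 1])
            if j > 1:
                parents.add(j // 2)
        dirty = parents
    return nodes[1]                           # new root hash
\end{verbatim}
In this sketch, every updated leaf and every affected internal node is visited exactly once, which is the intuition behind the 50\% reduction in traversals claimed below.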
\subsubsection{Our Contributions.}
\begin{enumerate}
\item We introduce an efficient SMT leaf update algorithm, \texttt{SMT.UpdateLeaf}, that improves account update time from $\mathcal{O}(\log n)$ to $\mathcal{O}(1)$.
\item Building on this, an SMT batch update algorithm, \texttt{SMT.BatchUpdate}, is proposed to calculate the root hash of an SMT, reducing the total number of traversals by 50\% from $\mathcal{O}(k\log n) + \mathcal{O}(k\log n) H$ to $\mathcal{O}(k\log n)H$, where $k$ is the number of updates in a batch, $n$ is the total number of leaves in the SMT, and $H$ is a hash operation.\footnote{Code at: \url{https://github.com/Boqian-Ma/one-phase-batch-update-SMT}}
\item Performance analysis of our proposed algorithm was conducted using both micro- and macro-benchmarks in single and multi-threaded scenarios.
\item In real-world macro-benchmark data, our algorithm outperformed the benchmark by up to 14\%.
\end{enumerate}
\subsubsection{Organisation.}
The rest of the paper is organised as follows. Section \ref{sec:preliminary} introduces the preliminary information. Next, Section \ref{sec:related_work} discusses some related work. In Section \ref{sec:obu}, we introduce the batch update algorithm. Section \ref{sec:evaluation} outlines our experimental results, followed by the conclusion and discussion in Section \ref{sec:conclusion}.
Related Work
\label{sec:related_work}
This section introduces zkSync Lite and its relevant SMT root hash update algorithm, which we use as our benchmark.
\subsubsection{zkSync Lite}
zkSync Lite is an L2 rollup solution developed by Matter Labs <|cite_start|> (Reference: Simulation Suggests Black Holes May Make Ideal Dark Matter Labs: A new NASA computer simulation shows that dark matter particles colliding in the extreme gravity of a black hole can produce strong, potentially observable gamma-ray light.) <|cite_end|>. It supports simple transaction types including transfer or swap of ERC-20 <|cite_start|> (Reference: The role of the ERC-20 token standard in a financial revolution: the case of Initial Coin Offerings: —The year of 2017 saw a surge of interest in a curious new way to raise capital: the ‘Initial Coin Offering’ (ICO). In this style of crowdfunding, investors exchange a general- purpose cryptocurrency, such as Bitcoin (BTC) or Ethereum (ETH), for a branded and special-use blockchain token, which somehow underpins or represents the project that is being funded. Billions of dollars worth of capital has already been raised through these schemes, and these huge transfers have motivated diverse, and often skeptical, commentary. However, the technical standards which underpin these schemes have often passed unremarked, even though they were, arguably, the catalyst for the huge growth witnessed in this sector. The present work will elucidate how an informal standard for smart contracts on the Ethereum blockchain, labelled ERC-20, has enabled the wide proliferation of special purpose fundraising tokens. Crucially, the ERC-20 standard allows frictionless interoperability, so that any compliant Ethereum wallet software can transact and monitor the full range of compliant tokens. In this way, the short and rather dry ERC-20 document allows any Ethereum wallet to control a diverse portfolio of token investments, and it is precisely this interoperability that has permitted the vast capital transfers of the ICO era. The present work will briefly survey the short history of this technology and will discuss some implications that this disruption presents for technical standardisation bodies.) <|cite_end|> tokens, and ERC-721 token minting. Like most L2 solutions, zkSync Lite has two main components: on-chain and off-chain. The on-chain component includes several Solidity Smart Contracts deployed \footnote{\url{https://etherscan.io/address/0xaBEA9132b05A70803a4E85094fD0e1800777fBEF}} on Ethereum L1. The off-chain component includes several micro-services that facilitate L2 transaction executions and SNARK <|cite_start|> (Reference: Plonk: Permutations over lagrange-bases for oecumenical
noninteractive arguments of knowledge: zk-SNARK constructions that utilize an updatable universal structured reference string remove one of the main obstacles in deploying zk-SNARKs[GKM + ]. The important work of Maller et al. [MBKM19] presented Sonic - the first potentially practical zk-SNARK with fully succinct verification for general arithmetic circuits with such an SRS. However, the version of Sonic enabling fully succinct verification still requires relatively high proof construction overheads. We present a universal SNARK construction with fully succinct verification, and significantly lower prover running time (roughly 7.5-20 times fewer group exponentiations than [MBKM19] in the fully succinct verifier mode depending on circuit structure). Similarly to [MBKM19] we rely on a permutation argument based on Bayer and Groth [BG12]. However, we focus on “Evaluations on a subgroup rather than coefficients of monomials”; which enables simplifying both the permutation argument and the arithmetization step.) <|cite_end|> generation. Detailed descriptions of the zkSync Lite design are given in the Appendix \ref{appx:zksync_lite_design}
\subsubsection{Account Tree Construction.}
SMTs are used in three places in zkSync Lite: account tree \footnote{ \url{https://github.com/matter-labs/zksync/blob/master/core/lib/types/src/lib.rs\#L84}}
, circuit account tree, and balance tree. The account tree is the main data structure that keeps track of the account balances of its users. The circuit account tree and the balance tree are derived from the account tree and are used to build zero-knowledge block proofs. Here, we give descriptions of the account tree in zkSync Lite.
The account tree is an SMT of depth $N=24$. As such, it can store up to $2^{24}$ accounts. The accounts are stored in a map $M$, mapping from leaf indices to accounts. Each internal node $\mathrm{node}_j$, where $1 \leq j < 2^{N}$, has direct children $\mathrm{node}_{2j}$ and $\mathrm{node}_{2j+1}$, and $\mathrm{node}_j = H(\mathrm{node}_{2j} \| \mathrm{node}_{2j+1})$. $\mathrm{node}_j$ is also known as the parent node of $\mathrm{node}_{2j}$ and $\mathrm{node}_{2j+1}$. The root of the tree is $\mathrm{node}_1$, which also corresponds to the digest of $T$.
Each leaf node $\mathrm{leaf}_k$, where $0 \leq k < 2^{N}$, corresponds to a key $k$ and is labelled with the value associated with that key if it exists, or with a default value otherwise. Formally, if $v = M(k)$ exists, $\mathrm{leaf}_k = v$, else $\mathrm{leaf}_k = \textit{default}$, where \textit{default} is a predefined default value.
On the $N^{\mathrm{th}}$ level of the SMT (i.e. the leaf level), given by the set of $2^N$ nodes $\{\mathrm{node}_q\}$ where $2^N \leq q < 2^{N+1}$, each $\mathrm{node}_q$ corresponds to the key $k$ with $q = (1 << N) + k$ and is labelled with the hash of the value associated with that key if it exists, or the hash of a default value otherwise. Formally, if $v = M(k)$ exists, $\mathrm{node}_q = H(v)$, otherwise $\mathrm{node}_q = H(\textit{default})$.
For simplicity, we denote the nodes at the leaf level by $L = \{\mathrm{leaf}_0, \cdots, \mathrm{leaf}_k\}$ where $0 \leq k < 2^{N}$.
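To illustrate the indexing and hashing conventions above, here is a hedged Python sketch that materializes a small account tree from the key-value map $M$. The hash function, the default value and the toy example are placeholders; a real SMT of depth 24 would cache per-level default hashes rather than materialize all $2^{24}$ leaves.
\begin{verbatim}
import hashlib

def H(data: str) -> str:
    # Placeholder hash; the actual implementation uses its own hash function.
    return hashlib.sha256(data.encode()).hexdigest()

def build_account_tree(M, N, default="default"):
    """Return the array of node labels for a toy account tree of depth N.

    The leaf for key k sits at index (1 << N) + k and is labelled H(M[k])
    if k is in M, else H(default). Internal node j is labelled
    H(node_{2j} || node_{2j+1}); nodes[1] is the root hash.
    """
    size = 1 << (N + 1)
    nodes = [""] * size                      # index 0 is unused
    for k in range(1 << N):                  # leaf level
        nodes[(1 << N) + k] = H(M.get(k, default))
    for j in range((1 << N) - 1, 0, -1):     # internal nodes, bottom-up
        nodes[j] = H(nodes[2 * j] + nodes[2 * j + 1])
    return nodes

# Toy example: a depth-3 tree with two populated accounts.
root_hash = build_account_tree({0: "alice:100", 5: "bob:42"}, N=3)[1]
\end{verbatim}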
\subsubsection{Root Hash Update Algorithm.}
Here we outline the root hash update algorithm implemented in zkSync Lite given a list of leaf operations. This algorithm is divided into two phases. Consider an account tree $T$ and a list of $k$ operations $O = \{o^{j}\}^k_{j\in[0, 2^{N})}$. The first phase updates the leaves to their new values. For each operation $o^{j} \in O$, the algorithm traverses $T$ from the root to $\mathrm{leaf}_j$ and performs the operation. For example, if $o^j$ was an \texttt{update} balance operation, then the balance of $\mathrm{leaf}_j$ is updated accordingly. At the end of this phase, all accounts affected by $O$ are updated. Note that when a leaf is updated to a new value, all nodes in its parent path need to be recomputed. This phase does not concern the hash calculation and takes $\mathcal{O}(k\log n)$ running time to perform $k$ updates.
The second phase re-computes the hashes of affected paths and returns the new root hash. To compute the root hash, the algorithm traverses left and right recursively from $T$'s root to retrieve or compute the child hashes. Recursion terminates when 1) an updated leaf is reached or 2) all the descendant leaves of the current node are unchanged from the first phase. In case 1), the leaf hash is calculated and returned. In case 2), the current node hash is returned. As a result of this recursive algorithm, the new root hash is calculated. This phase takes $\mathcal{O}(k\log n)H$ running time, where $H$ is the running time of the chosen hash function. Together, the root hash calculation process takes $\mathcal{O}(k\log n) + \mathcal{O}(k\log n)H$.
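For readers unfamiliar with this benchmark, the following simplified Python sketch mirrors the two phases described above on the array-based tree of the earlier sketch. It is only an approximation of the actual zkSync Lite implementation; in particular, the real first phase walks from the root to each leaf, which is where the $\mathcal{O}(k\log n)$ cost comes from, whereas the sketch indexes leaves directly.
\begin{verbatim}
import hashlib

def H(data: str) -> str:
    # Placeholder hash, as in the previous sketch.
    return hashlib.sha256(data.encode()).hexdigest()

def two_phase_update(nodes, N, updates):
    """Benchmark-style update on an array-based tree.

    Phase 1 rewrites the affected leaf labels; phase 2 recursively
    recomputes hashes, skipping subtrees that contain no updated leaf.
    """
    touched = set()
    for k, value in updates.items():          # Phase 1: update the leaves
        idx = (1 << N) + k
        nodes[idx] = H(value)
        touched.add(idx)

    def recompute(j, level):                  # Phase 2: recompute hashes
        if level == N:                        # reached an (updated) leaf
            return nodes[j]
        lo, hi = j << (N - level), (j + 1) << (N - level)
        if not any(lo <= t < hi for t in touched):
            return nodes[j]                   # subtree unchanged, reuse hash
        left = recompute(2 * j, level + 1)
        right = recompute(2 * j + 1, level + 1)
        nodes[j] = H(left + right)
        return nodes[j]

    return recompute(1, 0)                    # new root hash
\end{verbatim}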
In the actual implementation, these two phases are completed in two separate micro-services. The first phase occurs in the ``block producer'' module, where the leaves are updated. The second phase happens in the ``root hash calculator'' module, where the new root hash is computed. This separation takes the hash calculation overhead away from the main service.
\subsubsection{Inefficiencies.} \label{sec:inefficiency}
Above we described a two-phase algorithm implemented in zkSync Lite to update the root state of the account tree given a list of $k$ leaf operations. As stated in the zkSync Lite code base \footnote{\url{https://github.com/matter-labs/zksync/blob/master/core/bin/zksync_core/src/state_keeper/root_hash_calculator/mod.rs\#L21}}, there exists a bottleneck that constrains the speed at which the block producer produces blocks. If the block producer's speed exceeds the speed of root hash calculation, then the job queue for the root hash calculator will grow indefinitely. Furthermore, we observe that for each operation $o^{j}\in O$, the path between the updated $\mathrm{leaf}_j$ and the root is traversed twice. The first traversal occurs when updating the account values and the second occurs when calculating the root hash. <|paper_end|>
"<|reference_start|> Decentralising finance using decentralised blockchain oracles: The recent spread of COVID-19, stressed economies and government pumping money into the market has once again ignited the discussion on the need to have decentralised economies, the role of regulatory authorities and if bitcoin represents a true store of value. In this paper, we identify the need to alternate financial structure, discuss how blockchain and cryptocurrencies play a very important role in achieving it. Blockchain applications are heavily dependent on oracles for their interaction with outside world, we have discussed here the functioning of oracles and then finally presented a broad architecture that can be used to implement a majority of financial instruments on blockchain. <|reference_end|>",
"<|reference_start|> Visa: 随着互联网技术和通信技术的快速发展,消费者的消费习惯和支付方式也发生了巨大的变化。Visa中国区副总经理、创新产品负责入龚亮告诉记者,整个支付行业正在经历快速的变革,必须要有更先进的平台和服务才能跟上需求变化的步伐。 <|reference_end|>",
"<|reference_start|> Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol: <|reference_end|>",
"<|reference_start|> Plonk: Permutations over lagrange-bases for oecumenical\nnoninteractive arguments of knowledge: zk-SNARK constructions that utilize an updatable universal structured reference string remove one of the main obstacles in deploying zk-SNARKs[GKM + ]. The important work of Maller et al. [MBKM19] presented Sonic - the first potentially practical zk-SNARK with fully succinct verification for general arithmetic circuits with such an SRS. However, the version of Sonic enabling fully succinct verification still requires relatively high proof construction overheads. We present a universal SNARK construction with fully succinct verification, and significantly lower prover running time (roughly 7.5-20 times fewer group exponentiations than [MBKM19] in the fully succinct verifier mode depending on circuit structure). Similarly to [MBKM19] we rely on a permutation argument based on Bayer and Groth [BG12]. However, we focus on “Evaluations on a subgroup rather than coefficients of monomials”; which enables simplifying both the permutation argument and the arithmetization step. <|reference_end|>"
] | [
0,
7,
8,
17
] | {"<|cite_1|>": "ss-2417091", "<|cite_2|>": "ss-727453", "<|cite_3|>": "ss-1300246", "<|cite_4|>": "ss-946595", "<|cite_5|>": "ss-2314447", "<|cite_6|>": "ss-846312", "<|cite_7|>": "ss-779200", "<|cite_8|>": "ss-944817", "<|cite_9|>": "ss-978660", "<|cite_10|>": "arxiv-210813", "<|cite_11|>": "ss-1230256", "<|cite_12|>": "ss-1123611", "<|cite_13|>": "ss-828834", "<|cite_14|>": "ss-2479571", "<|cite_15|>": "ss-1170901", "<|cite_26|>": "ss-828835", "<|cite_27|>": "ss-727450", "<|cite_29|>": "ss-874017"} |
1808.06958 | <|paper_start|> Title: Greedy Harmony Search Algorithm for the Hop Constrained Connected Facility Location
Abstract: Greedy Harmony Search Algorithm for the Hop Constrained Connected Facility Location: We present a simple, robust and efficient harmony search algorithm for the Hop Constrained Connected Facility Location problem (HCConFL). The HCConFL problem is NP-hard that models the design of data-management and telecommunication networks in a manner of reliability. In this paper, we customize harmony search algorithm to solve the HCConFL problem. To arrive to quick, optimal cost of each solution, we use a new greedy approach expanding idea of Kruskal algorithm in our objective function. We also use a new greedy method combined with harmony search to obtain a good approximation in an efficient computational time. The algorithm was evaluated on the standard OR Library benchmarks. Computational results show that with high frequencies the modified harmony search algorithm produces optimal solutions to all benchmarks very quickly. We also solve the problem with another heuristic algorithm including the variable neighborhood search, the tabu search, to evaluate our algorithm.
Introduction
\label{}
Due to the recent growth of telecommunication networks, telecommunication companies have motivated researchers to seek solutions for network design problems. Such networks are designed to connect a source to subscribers through intermediate switching devices. The intermediate switching devices installed in these networks are very expensive.
Moreover, in the context of reliability, a hop constraint is used to limit the number of intermediate devices between the source and the subscribers. The aim of this paper is to minimize the cost of such networks. Similar problems arise in the design of communication networks. The authors of <|cite_start|> (Reference: MIP models for connected facility location: A theoretical and computational study☆: ) <|cite_end|> have shown that the Fiber-to-the-Curb strategy can be modeled by the connected facility location (ConFL) problem. They have modeled these reliability constraints within the Fiber-to-the-Curb strategy by generalizing the ConFL to the HCConFL.
The HCConFL problem (Figure 1) is related to two well-known problems: The Facility Location problem and the Steiner tree problem with hop constraints.
The ConFL problem is the HCConFL problem when the hop limit is infinite. In ConFL, an undirected graph $G=(V, E)$ is given with a dedicated root node $v_0\in V$ and edge costs $c_e\geq 0, \forall e=(u,v)$, corresponding to the cost of installing a new route between $u$ and $v$.
Furthermore, a set of facilities $F\subseteq V$ and a set of customer nodes $D\subseteq V$ are given, and an opening cost $f_i\geq 0$ is assigned to each facility. We try to find a minimum cost tree so that every customer node is assigned to an open facility and the open facilities are connected to the root through a Steiner tree. The ConFL was first introduced in earlier work, whose authors also obtained the first approximation algorithm for this problem. Currently, many research groups have focused on the optimization of the ConFL problem, and a few heuristic methods have been suggested for practical problems. <|cite_start|> (Reference: A Hybrid VNS for Connected Facility Location: ) <|cite_end|> proposed the first heuristic algorithm for the problem in 2007 by combining tabu search and variable neighborhood search.
Tomazic and Ljubic in 2008 considered the problem without the root and gave the greedy randomized adaptive search procedure in <|cite_start|> (Reference: A GRASP algorithm for the connected facility location problem: We apply a greedy randomized adaptive search procedure (GRASP) to solve the connected facility location problem heuristically. Diversification property is assured by applying a randomized greedy algorithm to construct feasible solutions in a multi-start fashion. Intensification elements are guaranteed due to two facility-based local search techniques. The computational study is conducted on a parameterized set of randomly generated benchmark instances. The obtained results reflect the quality of the proposed approach with respect to both, the quality of solutions and the computational effort, by comparison with lower bounds obtained from a branch-and-cut framework.) <|cite_end|>. In 2010, Bardossy and Raghavan gave an algorithm by combining dual ascent approach and neighboring local search to get upper bound and lower bound for the problem <|cite_start|> (Reference: Dual-Based local search for the Connected facility location and related problems: The connected facility location (ConFL) problem arises in a number of applications that relate to the design of telecommunication networks as well as data distribution and management problems on networks. It combines features of the uncapacitated facility location problem with the Steiner tree problem and is known to be NP-complete. In this setting, we wish to install a set of facilities on a communication network and assign customers to the installed facilities. In addition, the set of selected facilities needs to be connected by a Steiner tree. In this paper, we propose a dual-based local search heuristic that combines dual ascent and local search, which together yield strong lower and upper bounds to the optimal solution. Our procedure is applied to a slightly more general version of the ConFL problem that embraces a family of four different problems---the Steiner tree-star problem, the general Steiner tree-star problem, the ConFL problem, and the rent-or-buy problem---that combine facility location decisions with connectivity requirements. Consequently, our solution methodology successfully applies to all of them. We discuss a wide range of computational experiments that indicate that our heuristic is a very effective procedure that finds high-quality solutions very rapidly.) <|cite_end|>.
Hop Constrained Steiner Tree problem (HCST): Given an undirected connected graph $G=(V, E)$ with nonnegative weights associated with the edges, consider a set of essential nodes, a root node, some other non-essential nodes, and a positive integer hop limit $H$.
The problem is to find a minimum cost subgraph $T$ of $G$ such that, for each essential node, $T$ contains a path from the root $v_0\in V$ to that node with no more than $H$ edges (possibly including non-essential nodes from $S=V\setminus Q$, where $Q$ denotes the set of essential nodes) <|cite_start|> (Reference: A distributed dual ascent algorithm for the Hop-constrained Steiner Tree Problem: ) <|cite_end|>.
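To illustrate the hop constraint, the following Python sketch checks whether a candidate subgraph satisfies it, i.e. whether every essential node is reachable from the root within $H$ edges. The adjacency-list representation and all identifiers are illustrative assumptions rather than part of any proposed algorithm.
\begin{verbatim}
from collections import deque

def satisfies_hop_constraint(T, root, essential, H):
    """Return True iff every essential node is within H edges of the root.

    T is an adjacency list {node: iterable of neighbours} of the
    candidate subgraph; `essential` is the set of essential nodes.
    """
    dist = {root: 0}
    queue = deque([root])
    while queue:                              # breadth-first search from root
        u = queue.popleft()
        if dist[u] == H:                      # no need to expand beyond H hops
            continue
        for v in T.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return all(dist.get(q, H + 1) <= H for q in essential)
\end{verbatim}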
HCST and Hop Constrained Minimum Spanning Tree (HCMST) problems are very practical in telecommunication network design and in networks with quality-of-service requirements. A recent survey for the HCMST can be found in <|cite_start|> (Reference: On Formulations and Methods for the Hop-Constrained Minimum Spanning Tree Problem: ) <|cite_end|>. Gouveia uses variable redefinition to strengthen a multicommodity flow model for minimum spanning and Steiner trees with hop constraints between a root node and any other node <|cite_start|> (Reference: Using Variable Redefinition for Computing Lower Bounds for Minimum Spanning and Steiner Trees with Hop Constraints: We use variable redefinition (see R. MARTIN, 1987. Generating Alternative Mixed-Integer Programming Models Using Variable Redefinition, Operations Research 35, 820-831) to strengthen a multicommodity flow (MCF) model for minimum spanning and Steiner trees with hop constraints between a root node and any other node. Hop constraints model quality of service constraints. The Lagrangean dual value associated with one Lagrangean relaxation derived from the MCF formulation dominates the corresponding LP value. However, the lower bounds given after a reasonable number of iterations of the associated subgradient optimization procedure are, for several cases, still far from the theoretical best limit. Martin's variable redefinition technique is used to obtain a generalization of the MCF formulation whose LP bound is equal to the previously mentioned Lagrangean dual bound. We use a set of instances with up to 100 nodes, 50 basic nodes, and 350 edges for comparing an LP approach based on solving the LP relaxation of the new model with the equivalent Lagrangean scheme derived from MCF.) <|cite_end|>.
In <|cite_start|> (Reference: Multicommodity flow models for spanning trees with hop constraints: ) <|cite_end|>, Gouveia compares directed and undirected multicommodity flow models for the HCST problem, which was introduced in 1998. Then, in 1999, Voss presents a mixed integer-programming formulation based on Miller-Tucker-Zemlin subtour elimination constraints and also develops a tabu-search-based heuristic algorithm to find an initial solution <|cite_start|> (Reference: The Steiner tree problem with hop constraints: ) <|cite_end|>. <|cite_start|> (Reference: Using the Miller-Tucker-Zemlin constraints to formulate a minimal spanning tree problem with hop constraints: ) <|cite_end|> proposes a model for the HCMST problem based on Miller-Tucker-Zemlin subtour elimination constraints, and <|cite_start|> (Reference: MIP models for connected facility location: A theoretical and computational study☆: ) <|cite_end|> presents two models, based on flow and tree formulations with hop indices, for the HCMST and HCST in <|cite_start|> (Reference: Using Hop-Indexed Models For Constrained Spanning and Steiner Tree Models: ) <|cite_end|>. Santos describes an algorithm for the HCST problem in 2010 <|cite_start|> (Reference: A distributed dual ascent algorithm for the Hop-constrained Steiner Tree Problem: ) <|cite_end|>. In this method, by transforming the original graph $G$ of the HCST problem into a layered graph $G'$, the HCST problem is converted into a Steiner tree problem, and a dual ascent algorithm is then applied to the Steiner tree problem on $G'$. <|cite_start|> (Reference: On the hop-constrained survivable network design problem with reliable edges: ) <|cite_end|> study the hop-constrained survivable network design problem with reliable edges and consider two variants of reliable edges: a static problem, where the reliability of edges is given, and an upgrading problem, where edges can be upgraded to the reliable status at a given cost. <|cite_start|> (Reference: Integer programming formulations for the k-edge-connected 3-hop-constrained network design problem: In this article, we study the k‐edge‐connected L‐hop‐constrained network design problem. Given a weighted graph G = ( V , E ) , a set D of pairs of nodes, two integers L ≥ 2 and k ≥ 2 , the problem consists in finding a minimum weight subgraph of G containing at least k edge‐disjoint paths of length at most L between every pair { s , t } ∈ D . We consider the problem in the case where L = 2, 3 and | D | ≥ 2 . We first discuss integer programming formulations introduced in the literature. Then, we introduce new integer programming formulations for the problem that are based on the transformation of the initial undirected graph into directed layered graphs. We present a theoretical comparison of these formulations in terms of LP‐bound. Finally, these formulations are tested using CPLEX and compared in a computational study for k = 3, 4, 5. © 2015 Wiley Periodicals, Inc. NETWORKS, 67(2), 148–169 2016) <|cite_end|> provide integer linear programming formulations for hop-constrained network design from a polyhedral point of view.
Harmony search is a meta-heuristic algorithm inspired by the way a composer improvises a piece of music. The harmony search algorithm has been used mostly to solve optimization problems, and here we want to utilize this simple and efficient algorithm for a discrete problem.
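As background, one iteration of a generic harmony search works roughly as in the Python skeleton below; the memory size, harmony memory considering rate (HMCR) and pitch adjusting rate (PAR) values are illustrative defaults, and the problem-specific solution encoding, objective function and greedy components for HCConFL are the subject of Sections 4 and 5.
\begin{verbatim}
import random

def harmony_search(objective, random_solution, pitch_adjust,
                   memory_size=20, hmcr=0.9, par=0.3, iterations=1000):
    """Generic harmony search skeleton for a minimisation problem."""
    memory = [random_solution() for _ in range(memory_size)]
    memory.sort(key=objective)
    for _ in range(iterations):
        length = len(memory[0])
        new = []
        for i in range(length):               # improvise component by component
            if random.random() < hmcr:        # take the value from memory
                value = random.choice(memory)[i]
                if random.random() < par:     # pitch adjustment
                    value = pitch_adjust(value)
            else:                             # random re-initialisation
                value = random_solution()[i]
            new.append(value)
        if objective(new) < objective(memory[-1]):
            memory[-1] = new                  # replace the worst harmony
            memory.sort(key=objective)
    return memory[0]                          # best harmony found
\end{verbatim}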
\begin{figure}[h]
\centering
\includegraphics[scale=0.30]{HCFL.png}
\caption{1-Constrained Connected Facility Location Example }
\label{figNNfdsds}
\end{figure}
In this paper, we customize harmony search for solving the HCConFL problem and then improve it by combining it with a new greedy approach. The paper is organized as follows. In Section 2, the problem is defined. In Section 3, the harmony search algorithm is introduced. The details of the customized algorithm are given in Section 4. The greedy approach for harmony search (modified harmony search) is presented in Section 5. In Section 6, we combine the greedy harmony search with local search. Section 7 is devoted to the implementation and results, and Section 8 provides concluding remarks. <|paper_end|>
"<|reference_start|> A distributed dual ascent algorithm for the Hop-constrained Steiner Tree Problem: <|reference_end|>",
"<|reference_start|> The Steiner tree problem with hop constraints: <|reference_end|>",
"<|reference_start|> MIP models for connected facility location: A theoretical and computational study☆: <|reference_end|>",
"<|reference_start|> A distributed dual ascent algorithm for the Hop-constrained Steiner Tree Problem: <|reference_end|>"
] | [
4,
8,
10,
12
] | {"<|cite_1|>": "ss-771120", "<|cite_3|>": "ss-859865", "<|cite_4|>": "ss-859866", "<|cite_5|>": "ss-859867", "<|cite_6|>": "ss-859868", "<|cite_7|>": "ss-859869", "<|cite_8|>": "ss-859870", "<|cite_9|>": "ss-1313818", "<|cite_10|>": "ss-866265", "<|cite_11|>": "ss-1089340", "<|cite_12|>": "ss-771120", "<|cite_13|>": "ss-859871", "<|cite_14|>": "ss-859868", "<|cite_15|>": "ss-1313822", "<|cite_16|>": "ss-1313823"} |
2209.06359 | <|paper_start|> Title: Federated Pruning: Improving Neural Network Efficiency with Federated Learning
Abstract: Federated Pruning: Improving Neural Network Efficiency with Federated Learning: Automatic Speech Recognition models require a large amount of speech data for training, and the collection of such data often leads to privacy concerns. Federated learning has been widely used and is considered to be an effective decentralized technique that collaboratively learns a shared prediction model while keeping the data local on different client devices. However, the limited computation and communication resources on client devices present practical difficulties for large models. To overcome such challenges, we propose Federated Pruning to train a reduced model under the federated setting, while maintaining similar performance compared to the full model. Moreover, the vast amount of client data can also be leveraged to improve the pruning results compared to centralized training. We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
Introduction
Neural network models have wide application in a variety of tasks, such as speech recognition, machine translation, and image recognition <|cite_start|> (Reference: On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition: Recently, there has been a strong push to transition from hybrid models to end-to-end (E2E) models for automatic speech recognition. Currently, there are three promising E2E methods: recurrent neural network transducer (RNN-T), RNN attention-based encoder-decoder (AED), and Transformer-AED. In this study, we conduct an empirical comparison of RNN-T, RNN-AED, and Transformer-AED models, in both non-streaming and streaming modes. We use 65 thousand hours of Microsoft anonymized training data to train these models. As E2E models are more data hungry, it is better to compare their effectiveness with large amount of training data. To the best of our knowledge, no such comprehensive study has been conducted yet. We show that although AED models are stronger than RNN-T in the non-streaming mode, RNN-T is very competitive in streaming mode if its encoder can be properly initialized. Among all three E2E models, transformer-AED achieved the best accuracy in both streaming and non-streaming mode. We show that both streaming RNN-T and transformer-AED models can obtain better accuracy than a highly-optimized hybrid model.) <|cite_end|> <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|> <|cite_start|> (Reference: Learned in Translation: Contextualized Word Vectors: Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.) <|cite_end|>. The performance of trained models largely depends on the quality and the amount of training data. 
Federated learning (FL) <|cite_start|> (Reference: Advances and Open Problems in Federated Learning: Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.) <|cite_end|> provides a framework for leveraging the abundant data on edge devices with privacy preserved. However, FL faces several limitations in practice. One limitation is that the available memory on edge devices is highly limited. However, recent models are typically large, which makes on-device training challenging. For example, the successful model architecture for Automatic Speech Recognition (ASR), Conformer <|cite_start|> (Reference: Conformer: Convolution-augmented Transformer for Speech Recognition: Recently Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this regard, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.) <|cite_end|>, has 130M parameters and requires 520MB memory solely for storing the parameters during training. Another limitation is that FL typically only updates the model parameters and leaves the model architecture unchanged. As a result, only model accuracy is improved but not model efficiency.
In this paper, we propose \emph{Federated Pruning (FP)} to address the limitations mentioned above. Because models are usually over-parameterized to facilitate training, there are many redundancies. Several methods have been explored to exploit such redundancies to improve model efficiency. Among them, pruning is one of the most successful methods and has been widely studied under centralized training settings. At a high level, it identifies and removes redundant parameters from an over-parameterized model. The proposed FP applies the same idea to improve the efficiency of federated learning and also allows leveraging on-device data to potentially achieve better efficiency than centralized pruning.
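As a concrete, simplified illustration of the pruning step, the Python sketch below builds a structured pruning mask by ranking the output columns of a weight matrix by their L2 norm and zeroing out the lowest-scoring fraction. This particular importance score and pattern are only one possible choice; the options actually explored in this work are described in the following sections.
\begin{verbatim}
import numpy as np

def structured_prune_mask(weight, sparsity):
    """Binary mask that removes whole output columns of a 2-D weight matrix.

    Columns are ranked by their L2 norm (one simple importance score) and
    the `sparsity` fraction with the smallest norms is zeroed out.
    """
    scores = np.linalg.norm(weight, axis=0)             # per-column importance
    k = int(round(sparsity * weight.shape[1]))           # number of columns to drop
    mask = np.ones_like(weight)
    if k > 0:
        pruned_cols = np.argsort(scores)[:k]              # least important columns
        mask[:, pruned_cols] = 0.0
    return mask

# Example: remove half of the columns of a random layer.
w = np.random.randn(64, 128)
w_pruned = w * structured_prune_mask(w, 0.5)
\end{verbatim}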
The pruning method has been extensively studied in centralized fashion <|cite_start|> (Reference: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.) <|cite_end|> <|cite_start|> (Reference: Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning: Deep convolutional neural networks (CNNs) are indispensable to state-of-the-art computer vision algorithms. However, they are still rarely deployed on battery-powered mobile devices, such as smartphones and wearable gadgets, where vision algorithms can enable many revolutionary real-world applications. The key limiting factor is the high energy consumption of CNN processing due to its high computational complexity. While there are many previous efforts that try to reduce the CNN model size or amount of computation, we find that they do not necessarily result in lower energy consumption, and therefore do not serve as a good metric for energy cost estimation. To close the gap between CNN design and energy consumption optimization, we propose an energy-aware pruning algorithm for CNNs that directly uses energy consumption estimation of a CNN to guide the pruning process. The energy estimation methodology uses parameters extrapolated from actual hardware measurements that target realistic battery-powered system setups. The proposed layer-by-layer pruning algorithm also prunes more aggressively than previously proposed pruning methods by minimizing the error in output feature maps instead of filter weights. For each layer, the weights are first pruned and then locally fine-tuned with a closed-form least-square solution to quickly restore the accuracy. After all layers are pruned, the entire network is further globally fine-tuned using back-propagation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet are reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss. 
Finally, we show that pruning the AlexNet with a reduced number of target classes can greatly decrease the number of weights but the energy reduction is limited. Energy modeling tool and energy-aware pruned models available at http://eyeriss.mit.edu/energy.html) <|cite_end|>. <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> shows that a well-initialized sub-network can match the accuracy of the full network and such sub-network was studed in centralized training <|cite_start|> (Reference: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. 
Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.) <|cite_end|> <|cite_start|> (Reference: Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning: Deep convolutional neural networks (CNNs) are indispensable to state-of-the-art computer vision algorithms. However, they are still rarely deployed on battery-powered mobile devices, such as smartphones and wearable gadgets, where vision algorithms can enable many revolutionary real-world applications. The key limiting factor is the high energy consumption of CNN processing due to its high computational complexity. While there are many previous efforts that try to reduce the CNN model size or amount of computation, we find that they do not necessarily result in lower energy consumption, and therefore do not serve as a good metric for energy cost estimation. To close the gap between CNN design and energy consumption optimization, we propose an energy-aware pruning algorithm for CNNs that directly uses energy consumption estimation of a CNN to guide the pruning process. The energy estimation methodology uses parameters extrapolated from actual hardware measurements that target realistic battery-powered system setups. The proposed layer-by-layer pruning algorithm also prunes more aggressively than previously proposed pruning methods by minimizing the error in output feature maps instead of filter weights. For each layer, the weights are first pruned and then locally fine-tuned with a closed-form least-square solution to quickly restore the accuracy. After all layers are pruned, the entire network is further globally fine-tuned using back-propagation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet are reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss. Finally, we show that pruning the AlexNet with a reduced number of target classes can greatly decrease the number of weights but the energy reduction is limited. Energy modeling tool and energy-aware pruned models available at http://eyeriss.mit.edu/energy.html) <|cite_end|>. The main issue of this approach is that the parameters deemed unimportant and pruned at an early iteration may turn out to be important at a later iteration. To address this problem, <|cite_start|> (Reference: Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science: Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erd\H{o}s-R\'enyi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. 
Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.) <|cite_end|> <|cite_start|> (Reference: Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization: Modern deep neural networks are typically highly overparameterized. Pruning techniques are able to remove a significant fraction of network parameters with little loss in accuracy. Recently, techniques based on dynamic reallocation of non-zero parameters have emerged, allowing direct training of sparse networks without having to pre-train a large dense model. Here we present a novel dynamic sparse reparameterization method that addresses the limitations of previous techniques such as high computational cost and the need for manual configuration of the number of free parameters allocated to each layer. We evaluate the performance of dynamic reallocation methods in training deep convolutional networks and show that our method outperforms previous static and dynamic reparameterization methods, yielding the best accuracy for a fixed parameter budget, on par with accuracies obtained by iteratively pruning a pre-trained dense model. We further investigated the mechanisms underlying the superior generalization performance of the resultant sparse networks. We found that neither the structure, nor the initialization of the non-zero parameters were sufficient to explain the superior performance. Rather, effective learning crucially depended on the continuous exploration of the sparse network structure space during training. Our work suggests that exploring structural degrees of freedom during training is more effective than adding extra parameters to the network.) <|cite_end|> use different pruning method time-wise and model-wise.
Unlike these works focusing on centralized training, our work targets at federated learning and analyzes the impact of different pruning design decisions under this setting. A related work of model compression under the FL setting is Federated dropout <|cite_start|> (Reference: Enabling On-Device Training of Speech Recognition Models with Federated Dropout: Federated learning can be used to train machine learning models on the edge on local data that never leave devices, providing privacy by default. This presents a challenge pertaining to the communication and computation costs associated with clients' devices. These costs are strongly correlated with the size of the model being trained, and are significant for state-of-the-art automatic speech recognition models. We propose using federated dropout to reduce the size of client models while training a full-size model server-side. We provide empirical evidence of the effectiveness of federated dropout, and propose a novel approach to vary the dropout rate applied at each layer. Furthermore, we find that federated dropout enables a set of smaller sub-models within the larger model to independently have low word error rates, making it easier to dynamically adjust the size of the model deployed for inference.) <|cite_end|>. Unlike our proposed method, federated dropout randomly generates reduced model and performs training on full model. Another preliminary work, PruneFL <|cite_start|> (Reference: Model Pruning Enables Efficient Federated Learning on Edge Devices: Federated learning (FL) allows model training from local data collected by edge/mobile devices while preserving data privacy, which has wide applicability to image and vision applications. A challenge is that client devices in FL usually have much more limited computation and communication resources compared to servers in a datacenter. To overcome this challenge, we propose PruneFL -- a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining a similar accuracy as the original model. PruneFL includes initial pruning at a selected client and further pruning as part of the FL process. The model size is adapted during this process, which includes maximizing the approximate empirical risk reduction divided by the time of one FL round. Our experiments with various datasets on edge devices (e.g., Raspberry Pi) show that: (i) we significantly reduce the training time compared to conventional FL and various other pruning-based methods; (ii) the pruned model with automatically determined size converges to an accuracy that is very similar to the original model, and it is also a lottery ticket of the original model.) <|cite_end|>, also applies pruning to federated learning. It adopts sparse pruning instead of structural pruning as used in this work, so the resultant model will be less efficient when running on devices in practice. Moreover, we evaluate the proposed FP with production-grade models and datasets, which better reflects the real condition of deployment.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{images/pipeline_new.png}
\caption{A federated round of the proposed Federated Pruning. The white circles denote removed parameters.}
\label{fig:pipeline}
\vspace{-6mm}
\end{figure}
In summary, this work has the following contributions:
\begin{itemize}
\item \textbf{Improving the efficiency of federated learning:} We propose Federated Pruning (FP) to leverage on-device data to effectively prune redundant parameters from models. The resultant smaller models require less on-device memory to train and lower bandwidth for transporting models.
\item \textbf{Exploring different pruning design decisions:} We explore and perform extensive ablation studies on two design decisions of pruning under federated learning: pruning patterns and pruning methods.
\item \textbf{Proposing a novel approach for adaptive sparsity:} We propose a novel adaptive per-layer sparsity approach that dynamically allocates the target global sparsity level to each layer. Therefore, there is no need to manually select the per-layer sparsity levels.
\item \textbf{Experimenting with production-grade environments: } We evaluate the proposed Federated Pruning with production-grade models and datasets, which better reflects the real condition of deployment.
\end{itemize}
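To make the \textit{Shrink} and \textit{Expand} operations used in Algorithm \ref{alg:federated_prune} concrete, here is a hedged Python sketch for a single weight matrix with a column-structured mask: \textit{Shrink} drops the masked-out columns so that clients train a physically smaller tensor, and \textit{Expand} scatters the clients' reduced updates back into the full-size tensor for server-side aggregation. All function names and shapes here are illustrative assumptions rather than the actual implementation.
\begin{verbatim}
import numpy as np

def shrink(weight, mask):
    """Keep only the columns the binary mask marks as active."""
    keep = np.flatnonzero(mask.any(axis=0))              # indices of kept columns
    return weight[:, keep], keep

def expand(reduced_update, keep, full_shape):
    """Scatter a reduced update back into a zero tensor of the full shape."""
    full = np.zeros(full_shape)
    full[:, keep] = reduced_update
    return full

def server_round(W, M, client_updates, client_sizes, lr=1.0):
    """One simplified server round for a single layer.

    client_updates are the reduced deltas (same shape as shrink(W, M));
    client_sizes are the per-client example counts used for weighting.
    """
    _, keep = shrink(W, M)
    n = float(sum(client_sizes))
    avg = sum((nk / n) * expand(du, keep, W.shape)
              for du, nk in zip(client_updates, client_sizes))
    return W - lr * avg                                   # federated averaging step
\end{verbatim}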
\begin{algorithm}[t]
\centering
\footnotesize
\caption{\footnotesize Federated pruning. Initialize the server model with $w^0$ and the binary pruning mask $M$ with \textit{ones} like ($w^0$). The $K$ clients are selected and indexed by $k$, federated pruning rounds are indexed by $r$, and $n$ is the number of examples. \textit{Shrink}$(w, M)$ reduces the model size according to pruning mask $M$. \textit{Expand}$(w, M)$ maps the reduced model to the original size. Related pruning methods include the following functions: \textit{GetImportanceScore} ($w^{r}$), \textit{GenerateMask} ($w^{r}, r, S$). }\label{alg:federated_prune}
\begin{algorithmic}[1]
\State \textbf{Input:} Pre-trained dense ASR model: $w^0$
\State \hskip 3.05em Binary pruning mask: $M$ of \textit{ones} like ($w^0$)
\State \hskip 3.05em Target sparsity level: $S$
\State \hskip 3.05em FL rounds: $\Delta R, R^{fine-tune}, R^{end}$
\State \textbf{Output:} Sparse ASR model: $w^{R^{end}}$
\Function {FederatedPruning}{}
\State initial sparsity level $s\gets 0$
\For{each round $r = 0, 1, 2, ..., R^{fine-tune}-1$}
\If {$r\;mod\; \Delta R == 0$} \Comment{Every $\Delta R$ rounds}
\State \textit{GetImportanceScore} ($w^{r}$)
\State $M$ = \textit{GenerateMask} ($w^{r}, r, s$)
\If{$s<S$}
\Comment{Reaches refining phase if $s==S$}
\State increase $s$
\EndIf
\EndIf
\State $w^{r+1}$ = \textit{FPTrain($w^r, M$)}
\EndFor
\For{each round $r=R^{fine-tune}, ... , R^{end}$}
\Comment{Fine-tuning}
\State{Reduce the server model with mask $M$}
\State{Train the reduced model with standard FL.}
\EndFor
\State \Return $w^{R^{end}}$
\EndFunction
\Function {FPTrain}{$w^r, M$}
\State $W^r \leftarrow$ \textit{Shrink}$(w^r, M)$ \Comment{Generate reduced model}
\State Randomly select $K$ clients
\State Server sends the reduced model $W^r$ to $K$ clients
\For{each client $k$ \textbf{in parallel}}
\State $\hat{W}_k^r\leftarrow$ \textit{ClientLocalUpdate}($k, W^r$)
\State $\Delta W_k^r = W^r - \hat{W}_k^r$
\State{Clients send $\Delta W_k^r$ to server}
\EndFor
\State $\Delta w_k^r \leftarrow$ \textit{Expand}$(\Delta W_k^r, M)$ \Comment{Map reduced updates}
\State $\bar{w}^r = \sum_{k=1}^{K} \frac{n_k}{n}\Delta w_k^r$ \Comment{Federated Averaging <|cite_start|> (Reference: Communication-Efficient Learning of Deep Networks from Decentralized Data: Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent.) <|cite_end|>}
\State $w^{r+1}=w^r-\eta \bar{w}^r$
\State \Return $w^{r+1}$
\EndFunction
\end{algorithmic}
\end{algorithm} <|paper_end|> | [
"<|reference_start|> An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train. <|reference_end|>",
"<|reference_start|> Advances and Open Problems in Federated Learning: Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges. <|reference_end|>",
"<|reference_start|> Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency. <|reference_end|>",
"<|reference_start|> Communication-Efficient Learning of Deep Networks from Decentralized Data: Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent. <|reference_end|>"
] | [
1,
3,
5,
14
] | {"<|multi_cite_1_1|>": "arxiv-268282", "<|multi_cite_1_2|>": "arxiv-298443", "<|multi_cite_1_3|>": "arxiv-130835", "<|cite_2|>": "arxiv-238722", "<|cite_3|>": "arxiv-265924", "<|multi_cite_4_1|>": "arxiv-84906", "<|multi_cite_4_2|>": "arxiv-110216", "<|cite_5|>": "arxiv-151068", "<|multi_cite_6_1|>": "arxiv-84906", "<|multi_cite_6_2|>": "arxiv-110216", "<|multi_cite_7_1|>": "arxiv-129376", "<|multi_cite_7_2|>": "arxiv-191698", "<|cite_8|>": "arxiv-372344", "<|cite_9|>": "arxiv-225849", "<|cite_10|>": "arxiv-92430"} |
2208.10255 | <|paper_start|> Title: On the non-efficient PAC learnability of conjunctive queries
Abstract: On the non-efficient PAC learnability of conjunctive queries: This note serves three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably-Approximately-Correct (PAC) model, paying clear attention to the complicating fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of "acyclicity"; (iii) we show that CQs (and UCQs) are efficiently PAC learnable with membership queries.
Introduction
\label{sec:intro}
Conjunctive queries (CQs) are an extensively studied database query language
that plays a prominent role in database theory. CQs correspond precisely to
Datalog programs with a single non-recursive rule
and to the positive-existential-conjunctive fragment of first-order logic.
Since the evaluation problem for conjunctive queries is NP-complete, various tractable subclasses have been introduced and studied. These include
different variants of acyclicity, such as $\alpha$-acyclicity, $\beta$-acyclicity, $\gamma$-acyclicity, and Berge-acyclicity, which
form a
strict hierarchy with Berge-acyclicity being most restrictive <|cite_start|> (Reference: Degrees of acyclicity for hypergraphs and relational database schemes: Database schemes (winch, intuitively, are collecuons of table skeletons) can be wewed as hypergraphs (A hypergraph Is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) A class of "acychc" database schemes was recently introduced. A number of basic desirable propemes of database schemes have been shown to be equivalent to acyclicity This shows the naturalness of the concept. However, unlike the situation for ordinary, undirected graphs, there are several natural, noneqmvalent notions of acyclicity for hypergraphs (and hence for database schemes). Various desirable properties of database schemes are constdered and it is shown that they fall into several equivalence classes, each completely characterized by the degree of acycliclty of the scheme The results are also of interest from a purely graph-theoretic viewpomt. The original notion of aeyclicity has the countermtmtive property that a subhypergraph of an acychc hypergraph can be cyclic. This strange behavior does not occur for the new degrees of acyelicity that are considered.) <|cite_end|>. A landmark result by Grohe states
that a class of CQs is tractable if and only if the treewidth
of all CQs in it is bounded by a constant (under certain assumptions) <|cite_start|> (Reference: The complexity of homomorphism and constraint satisfaction problems
seen from the other side: We give a complexity theoretic classification of homomorphism problems for graphs and, more generally, relational structures obtained by restricting the left hand side structure in a homomorphism. For every class C of structures, let HOM(C, /spl I.bar/) be the problem of deciding whether a given structure A /spl isin/ C has a homomorphism to a given (arbitrary) structure B. We prove that, under some complexity theoretic assumption from parameterized complexity theory, HOM(C, /spl I.bar/) is in polynomial time if, and only if, the cores of all structures in C have bounded tree-width (as long as the structures in C only contain relations of bounded arity). Due to a well known correspondence between homomorphism problems and constraint satisfaction problems, our classification carries over to the latter.) <|cite_end|> <|cite_start|> (Reference: Tractable hypergraph properties for constraint satisfaction and conjunctive queries: An important question in the study of constraint satisfaction problems (CSP) is understanding how the graph or hypergraph describing the incidence structure of the constraints influences the complexity of the problem. For binary CSP instances (i.e., where each constraint involves only two variables), the situation is well understood: the complexity of the problem essentially depends on the treewidth of the graph of the constraints. However, this is not the correct answer if constraints with unbounded number of variables are allowed, and in particular, for CSP instances arising from query evaluation problems in database theory. Formally, if H is a class of hypergraphs, then let CSP(H) be CSP restricted to instances whose hypergraph is in H. Our goal is to characterize those classes of hypergraphs for which CSP(H) is polynomial-time solvable or fixed-parameter tractable, parameterized by the number of variables. Note that in the applications related to database query evaluation, we usually assume that the number of variables is much smaller than the size of the instance, thus parameterization by the number of variables is a meaningful question. The most general known property of H that makes CSP(H) polynomial-time solvable is bounded fractional hypertree width. Here we introduce a new hypergraph measure called submodular width, and show that bounded submodular width of H implies that CSP(H) is fixed-parameter tractable. In a matching hardness result, we show that if H has unbounded submodular width, then CSP(H) is not fixed-parameter tractable, unless the Exponential Time Hypothesis fails.) <|cite_end|>.
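For a concrete sense of these restrictions, note that the Boolean triangle query $\exists x\, y\, z\, \bigl(R(x,y)\land R(y,z)\land R(z,x)\bigr)$ is cyclic under all of the above notions and has treewidth~$2$, whereas a path-shaped query such as $\exists x\, y\, z\, \bigl(R(x,y)\land R(y,z)\bigr)$ is Berge-acyclic (hence acyclic in every one of the above senses) and has treewidth~$1$.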
In this note, we consider the learnability
of CQs from labeled examples, in Valiant's well-known \emph{Probably Approximately Correct} (PAC) learning model <|cite_start|> (Reference: {A theory of the learnable: Humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. In this paper we regard learning as the phenomenon of knowledge acquisition in the absence of explicit programming. We give a precise methodology for studying this phenomenon from a computational viewpoint. It consists of choosing an appropriate information gathering mechanism, the learning protocol, and exploring the class of concepts that can be learned using it in a reasonable (polynomial) number of steps. Although inherent algorithmic complexity appears to set serious limits to the range of concepts that can be learned, we show that there are some important nontrivial classes of propositional concepts that can be learned in a realistic sense.) <|cite_end|>.
We give a self-contained proof that the class of all CQs as well as all classes of acyclic CQs mentioned above are \emph{not} efficiently PAC learnable. While the
general idea of our proof is due to <|cite_start|> (Reference: Some Lower Bounds for the Computational Complexity of Inductive Logic Programming: ) <|cite_end|> <|cite_start|> (Reference: Learning Conjunctive Concepts in Structural Domains: ) <|cite_end|>,
we strengthen the result in several respects and present it in a form that is easily
accessible to modern-day database theorists.
\begin{figure*}
{
\centering
\begin{tabular}{cccl}
$e_1$ & $\cdots$ & $e_n$ & unlabeled examples drawn from some example distribution $D$ \\
$|$ && $|$ & \\
$(e_1,\mathit{lab}_1)$ & $\cdots$ & $(e_n,\mathit{lab}_n)$ & the same examples labeled according to some target CQ $q^*$ \\
$|$ && $|$ & \\
\multicolumn{3}{c}{\fbox{~~~~~~~ PAC algorithm ~~~~~~~}} \\
& $|$ & \\
& $q$ && hypothesis produced by PAC algorithm \\
& $|$ & \\
& $\error_{D,q^*}(q)$ && expected error of $q$ on examples drawn from $D$ and labeled
according to $q^*$
\end{tabular}
}
\caption{Graphical depiction of a PAC algorithm}
\label{fig:pac}
\end{figure*}
The result $q(I)$
of evaluating a $k$-ary CQ $q$ on a database instance $I$ is a
set of $k$-tuples of values from the active domain of~$I$.
An \emph{example}, then, is most naturally taken to be a
pair $(I,\textbf{a})$ where $I$ is a database instance and
$\textbf{a}$ is a $k$-tuple of values from the active domain of $I$. The example is \emph{positive}
if $\textbf{a}\in q(I)$ and \emph{negative} otherwise.
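For instance, over a schema with a binary relation symbol $R$ and a unary relation symbol $P$, let $I$ consist of the facts $R(u,v)$, $R(v,w)$, and $P(w)$, and let $q(x) \coloneq \exists y\, (R(x,y)\land P(y))$. Then $q(I)=\{v\}$, so $(I,v)$ is a positive example for $q$ while $(I,u)$ and $(I,w)$ are negative examples.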
An \emph{efficient PAC algorithm} is a (possibly randomized) polynomial-time algorithm that takes as input a
set of examples
drawn from an unknown probability
distribution~$D$ and labeled as positive/negative
according to an unknown target CQ $q^*$ to be learned,
and that outputs a CQ $q$,
such that,
if the input sample is sufficiently large,
then with probability at least $1-\delta$, $q$ has expected
error at most $\epsilon$, meaning that if we draw an example $e$ from~$D$, then with probability $1-\epsilon$, $q$ and $q^*$ assign
the same label to $e$
(cf.~Figure~\ref{fig:pac}).
The required number of examples must furthermore be bounded by a
function polynomial in $|q^*|$, $1/\delta$, $1/\epsilon$, and the example size.
We give a precise definition in Section~\ref{sec:preliminaries}.
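In symbols, the quantity $\error_{D,q^*}(q)$ from Figure~\ref{fig:pac} is
\[ \error_{D,q^*}(q) \;=\; \Pr_{e\sim D}\bigl[\,\text{$q$ and $q^*$ assign different labels to $e$}\,\bigr], \]
and the algorithm must achieve $\error_{D,q^*}(q)\leq\epsilon$ with probability at least $1-\delta$ over the drawn sample and its own random choices.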
Note that since a PAC algorithm does not know the example distribution $D$, it must perform well
for \emph{all} distributions $D$.
In this sense, the PAC model captures a strong
form of distribution-independent learning.
Our main result is the following, stated, for simplicity, for
unary CQs:
\begin{theorem} \label{thm:main} (assuming $\text{RP} \neq \text{NP}$)
Let $C$ be any class of unary CQs over a fixed
schema $\mathbf{S}$ that contains at least one binary relation symbol and one unary relation symbol.
If $C$ includes
all path-CQs, then $C$
is not efficiently PAC learnable, even w.r.t.~single-instance example distributions.
\end{theorem}
Here, RP denotes the class of problems
solvable by a randomized algorithm with one-sided error that runs in polynomial time, and
by a \emph{path-CQ} we mean a unary CQ
of the form
\[ \begin{array}{@{}l@{}l}
q(x_1) \coloneq \exists x_2 \ldots x_n (&R(x_1,x_2)\land \cdots \land R(x_{n-1},x_n) \\[1mm]
& \land P(x_{j_1})\land\cdots\land P(x_{j_m}))
\end{array}
\]
where $R$ is a binary relation symbol and $P$ is a unary relation symbol. That is, a path-CQ is a very simple type of CQ that describes
an outgoing directed path decorated with a single unary relation symbol.
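For example, $q(x_1) \coloneq \exists x_2\, x_3\, \bigl(R(x_1,x_2)\land R(x_2,x_3)\land P(x_3)\bigr)$ is the path-CQ with $n=3$, $m=1$, and $j_1=3$; it selects exactly those values from which an outgoing $R$-path of length two ends in an element satisfying $P$.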
With a \emph{single-instance example distribution}, we mean
an example distribution $D$ such that for some database instance $I$, $D$ assigns non-zero probability
mass only to examples of the form $(I,\textbf{a})$.
This captures
the natural scenario of learning CQs from positive and
negative examples that all pertain to a single given database instance.
Clearly, efficient PAC learnability w.r.t.~all example distributions implies efficient PAC learnability w.r.t.~single-instance distributions.
Note that efficient PAC learnability is
not an anti-monotone property of query classes, and Theorem~\ref{thm:main} says more than just that path-CQs are not efficiently PAC learnable.
In particular, Theorem~\ref{thm:main} implies that \emph{the class of all CQs} is not efficiently PAC learnable, and the same is true for all classes of acyclic CQs mentioned above since path-CQs belong to all of these classes. Theorem~\ref{thm:main} also implies
non-efficient PAC learnability of concept expressions
in the description logics $\mathcal{EL}$ and $\mathcal{ELI}$ (even in the absence of a TBox), see e.g. <|cite_start|> (Reference: Learning Description Logic Concepts: When can Positive and Negative Examples be Separated? (Abstract): Learning description logic (DL) concepts from positive and negative examples given in the form of labeled data items in a KB has received significant attention in the literature. We study the fundamental question of when a separating DL concept exists and provide useful model-theoretic characterizations as well as complexity results for the associated decision problem. For expressive DLs such as ALC and ALCQI, our characterizations show a surprising link to the evaluation of ontology-mediated conjunctive queries. We exploit this to determine the combined complexity (between ExpTime and NExpTime) and data complexity (second level of the polynomial hierarchy) of separability. For the Horn DL EL, separability is ExpTime-complete both in combined and in data complexity while for its modest extension ELI it is even undecidable. Separability is also undecidable when the KB is formulated in ALC and the separating concept is required to be in EL or ELI.) <|cite_end|> and references therein.
It is worth comparing the notion of a \emph{PAC learning algorithm} to that of a \emph{fitting algorithm}.
Both types of algorithms take as input a set of labeled examples. A fitting
algorithm decides the existence of a CQ that agrees with the labels of the input examples.
The fitting problem is coNExpTime-complete
for CQs <|cite_start|> (Reference: Testing Expressibility Is Hard: ) <|cite_end|> <|cite_start|> (Reference: The Product Homomorphism Problem and Applications: The product homomorphism problem (PHP) takes as input a finite collection of structures A_1, ..., A_n and a structure B, and asks if there is a homomorphism from the direct product between A_1, A_2, ..., and A_n, to B. We pinpoint the computational complexity of this problem. Our motivation stems from the fact that PHP naturally arises in different areas of database theory. In particular, it is equivalent to the problem of determining whether a relation is definable by a conjunctive query, and the existence of a schema mapping that fits a given collection of positive and negative data examples. We apply our results to obtain complexity bounds for these problems.) <|cite_end|> and, in fact, is known to be hard already for some more restricted classes of acyclic CQs <|cite_start|> (Reference: The Product Homomorphism Problem and Applications: The product homomorphism problem (PHP) takes as input a finite collection of structures A_1, ..., A_n and a structure B, and asks if there is a homomorphism from the direct product between A_1, A_2, ..., and A_n, to B. We pinpoint the computational complexity of this problem. Our motivation stems from the fact that PHP naturally arises in different areas of database theory. In particular, it is equivalent to the problem of determining whether a relation is definable by a conjunctive query, and the existence of a schema mapping that fits a given collection of positive and negative data examples. We apply our results to obtain complexity bounds for these problems.) <|cite_end|> <|cite_start|> (Reference: Learning Description Logic Concepts: When can Positive and Negative Examples be Separated? (Abstract): Learning description logic (DL) concepts from positive and negative examples given in the form of labeled data items in a KB has received significant attention in the literature. We study the fundamental question of when a separating DL concept exists and provide useful model-theoretic characterizations as well as complexity results for the associated decision problem. For expressive DLs such as ALC and ALCQI, our characterizations show a surprising link to the evaluation of ontology-mediated conjunctive queries. We exploit this to determine the combined complexity (between ExpTime and NExpTime) and data complexity (second level of the polynomial hierarchy) of separability. For the Horn DL EL, separability is ExpTime-complete both in combined and in data complexity while for its modest extension ELI it is even undecidable. Separability is also undecidable when the KB is formulated in ALC and the separating concept is required to be in EL or ELI.) <|cite_end|>.
A PAC algorithm, on the other hand, produces a CQ that, with high probability,
has a low expected error, but is not required to fit the input examples.
Despite these differences, it is well-known that for concept classes that are both polynomial-time evaluable and have the polynomial-size fitting property (defined in Section~2), NP-hardness of the fitting problem implies the non-existence of an efficient PAC learning algorithm <|cite_start|> (Reference: Computational Limitations on Learning from Examples: The computational complexity of learning Boolean concepts from examples is investigated. It is shown for various classes of concept representations that these cannot be learned feasibly in a distribution-free sense unless R = NP. These classes include (a) disjunctions of two monomials, (b) Boolean threshold functions, and (c) Boolean formulas in which each variable occurs at most once. Relationships between learning of heuristics and finding approximate solutions to NP-hard optimization problems are given.) <|cite_end|>, see Proposition~\ref{prop:pac-vs-fitting} below.
Unfortunately, the concept class of CQs has neither of these properties. A main
difficulty of our proof of Theorem~\ref{thm:main} (which is nevertheless
based on a reduction from an NP-hard fitting problem)
is to find a way around this.
We also prove that PAC learnability of CQs can be recovered by extending
the PAC model with membership queries, known from Angluin's <|cite_start|> (Reference: Queries and concept learning: ) <|cite_end|>
model of exact learning. In a membership query, the learner chooses an example $(I,\mathbf{a})$ and asks an oracle to provide, in unit time,
the positive or negative labeling of $(I,\mathbf{a})$ according to the target query.
In Angluin's model of exact learning, CQs are known to not be efficiently learnable
with membership queries alone, but they are efficiently learnable when also
equivalence queries are admitted (the learner may submit a hypothesis
to the oracle and ask whether it is equivalent to the target query,
requesting a counterexample if this is not the case). The latter is implicit in <|cite_start|> (Reference: Learning schema mappings: A schema mapping is a high-level specification of the relationship between a source schema and a target schema. Recently, a line of research has emerged that aims at deriving schema mappings automatically or semi-automatically with the help of data examples, that is, pairs consisting of a source instance and a target instance that depict, in some precise sense, the intended behavior of the schema mapping. Several different uses of data examples for deriving, refining, or illustrating a schema mapping have already been proposed and studied.
In this article, we use the lens of computational learning theory to systematically investigate the problem of obtaining algorithmically a schema mapping from data examples. Our aim is to leverage the rich body of work on learning theory in order to develop a framework for exploring the power and the limitations of the various algorithmic methods for obtaining schema mappings from data examples. We focus on GAV schema mappings, that is, schema mappings specified by GAV (Global-As-View) constraints. GAV constraints are the most basic and the most widely supported language for specifying schema mappings. We present an efficient algorithm for learning GAV schema mappings using Angluin's model of exact learning with membership and equivalence queries. This is optimal, since we show that neither membership queries nor equivalence queries suffice, unless the source schema consists of unary relations only. We also obtain results concerning the learnability of schema mappings in the context of Valiant's well-known PAC (Probably-Approximately-Correct) learning model, and concerning the learnability of restricted classes of GAV schema mappings. Finally, as a byproduct of our work, we show that there is no efficient algorithm for approximating the shortest GAV schema mapping fitting a given set of examples, unless the source schema consists of unary relations only.) <|cite_end|>, an explicit proof can be found in <|cite_start|> (Reference: Conjunctive Queries: Unique Characterizations and Exact Learnability: We answer the question which conjunctive queries are uniquely characterized by polynomially many positive and negative examples, and how to construct such examples efficiently. As a consequence, we obtain a new efficient exact learning algorithm for a class of conjunctive queries. At the core of our contributions lie two new polynomial-time algorithms for constructing frontiers in the homomorphism lattice of finite structures. We also discuss implications for the unique characterizability and learnability of schema mappings and of description logic concepts.) <|cite_end|>, cf.~also <|cite_start|> (Reference: Learning Closed Horn Expressions: The paper studies the learnability of Horn expressions within the framework of learning from entailment , where the goal is to exactly identify some pre-fixed and unknown expression by making queries to membership and equivalence oracles. It is shown that a class that includes both range restricted Horn expressions (where terms in the conclusion also appear in the condition of a Horn clause) and constrained Horn expressions (where terms in the condition also appear in the conclusion of a Horn clause) is learnable. This extends previous results by showing that a larger class is learnable with better complexity bounds. A further improvement in the number of queries is obtained when considering the class of Horn expressions with inequalities on all syntactically distinct terms.) <|cite_end|>.
As pointed out in <|cite_start|> (Reference: Learning schema mappings: A schema mapping is a high-level specification of the relationship between a source schema and a target schema. Recently, a line of research has emerged that aims at deriving schema mappings automatically or semi-automatically with the help of data examples, that is, pairs consisting of a source instance and a target instance that depict, in some precise sense, the intended behavior of the schema mapping. Several different uses of data examples for deriving, refining, or illustrating a schema mapping have already been proposed and studied.
In this article, we use the lens of computational learning theory to systematically investigate the problem of obtaining algorithmically a schema mapping from data examples. Our aim is to leverage the rich body of work on learning theory in order to develop a framework for exploring the power and the limitations of the various algorithmic methods for obtaining schema mappings from data examples. We focus on GAV schema mappings, that is, schema mappings specified by GAV (Global-As-View) constraints. GAV constraints are the most basic and the most widely supported language for specifying schema mappings. We present an efficient algorithm for learning GAV schema mappings using Angluin's model of exact learning with membership and equivalence queries. This is optimal, since we show that neither membership queries nor equivalence queries suffice, unless the source schema consists of unary relations only. We also obtain results concerning the learnability of schema mappings in the context of Valiant's well-known PAC (Probably-Approximately-Correct) learning model, and concerning the learnability of restricted classes of GAV schema mappings. Finally, as a byproduct of our work, we show that there is no efficient algorithm for approximating the shortest GAV schema mapping fitting a given set of examples, unless the source schema consists of unary relations only.) <|cite_end|>, the fact that
CQs are efficiently exactly learnable with membership and equivalence
queries implies PAC learnability with membership queries and
an NP-oracle (cf. <|cite_start|> (Reference: Queries and concept learning: ) <|cite_end|>), where the NP-oracle is used for evaluating hypotheses on examples.
It was left open whether CQs are efficiently PAC learnable with membership queries \emph{without} an NP-oracle. We give an affirmative answer to this question
and show that it also extends to UCQs, that is, to disjunctions of conjunctive
queries.
\begin{theorem}\label{thm:main2}
Fix any schema $\mathbf{S}$ and $k\geq 0$. The class of all $k$-ary CQs over $\mathbf{S}$ is efficiently PAC learnable with membership queries. The same is true for the class of all $k$-ary UCQs over $\mathbf{S}$.
\end{theorem}
\subsection{Related work}
\label{sec:relwork}
Haussler <|cite_start|> (Reference: Learning Conjunctive Concepts in Structural Domains: ) <|cite_end|> shows that the class of Boolean CQs over a schema that contains
an unbounded number of unary relation symbols is not efficiently PAC-learnable
(unless $\text{RP}=\text{NP}$). The essential part of the proof is to show that
the fitting problem for the same concept class is NP-complete. Over a schema that consists
of unary relation symbols only, every CQ is trivially Berge-acyclic. Therefore, this
implies that efficient PAC learnability fails for acyclic Boolean CQs, for
any of the aforementioned notions of acyclicity.
The fact that Haussler's result is stated for Boolean CQs and Theorem~\ref{thm:main} is stated for unary CQs is an inessential difference (cf. <|cite_start|> (Reference: Some Lower Bounds for the Computational Complexity of Inductive Logic Programming: ) <|cite_end|>).
The fact that the proof in <|cite_start|> (Reference: Learning Conjunctive Concepts in Structural Domains: ) <|cite_end|> uses an unbounded number of unary relation symbols, however, is an important difference. Indeed, if one were to
consider Boolean queries over a fixed finite schema that consists of unary relation symbols only, then the resulting concept class would be finite and trivially PAC learnable.
Kietz <|cite_start|> (Reference: Some Lower Bounds for the Computational Complexity of Inductive Logic Programming: ) <|cite_end|> proves that the class of unary CQs over a schema that contains a single binary relation symbol and an
unbounded number of unary relation symbols is not efficiently PAC-learnable (unless $\text{RP}=\text{NP}$). Again, the essential part of the proof is to show that
the fitting problem is NP-complete. Kietz's result already applies to path-CQs of length~1 with multiple unary relation symbols.
This is only possible because of the infinite schema, as, otherwise,
the concept class is again finite and trivially PAC learnable.
\revnote{
Cohen <|cite_start|> (Reference: The dual DFA learning problem (extended abstract): hardness results for programming by demonstration and learning first-order representations: We consider a dual version of the DFA paclearning problem, in which concepts are strings over a fixed alphabet, examples are DFAs, and a string s represents the set of all DFAs that accept it. It is shown that solving this problem is as hard as learning log-depth boolean circuits, even if the example DFAs are are always acyclic, leveled, and of logarithmic level width. Thus under cryptographic assumptions the dual DFA learning problem is hard. This result implies the hardness of several other more natural learning problems, including learning the description logic CLASSIC from subconcepts, and learning arity-two “determinate” functionfree Prolog clauses from ground clauses. The result also implies the hardness of two formal problems that are similar to problems studied in the area of “programming by demonstration”: learning straightline programs over a fixed operator set from input-output pairs, and learning straightline programs from inputoutput pairs traces, and “partial traces”.) <|cite_end|> proves that the class of
unary CQs over a schema that contains two binary relation
symbols is not PAC-predictable unless certain assumptions from
the field of cryptography fail.
In \emph{PAC prediction},
the output of the algorithm is not required to be a concept from the concept class, but may instead be any polynomial-time evaluable concept, such
as a polynomial-time algorithm.
PAC learnability implies PAC predictability
for concept classes that are polynomial-time evaluable
(cf.~Remark~\ref{rem:prediction-vs-learning}).
Cohen's result already applies to
path-CQs (defined slightly differently than above, using two binary relation symbols and no
unary relation symbol -- this difference is inessential).
As a consequence, Cohen's result yields the restriction of Theorem~\ref{thm:main} to polynomial-time evaluable classes $C$
(such as the class of all acyclic CQs, under any of the mentioned notions of
acyclicity), under cryptographic assumptions.
Moreover, in contrast to PAC learnability, PAC predictability is an anti-monotone property of concept classes. Thus, Cohen's result also yields
Theorem~\ref{thm:main} for efficient PAC predictability in place of
efficient PAC learnability, again under cryptographic assumptions.
In an earlier paper <|cite_start|> (Reference: Cryptographic Limitations on Learning One-Clause Logic Programs: An active area of research in machine learning is learning logic programs from examples. This paper investigates formally the problem of learning a single Horn clause: we focus on generalizations of the language of constant-depth determinate clauses, which is used by several practical learning systems. We show first that determinate clauses of logarithmic depth are not learnable. Next we show that learning indeterminate clauses with at most k indeterminate variables is equivalent to learning DNF. Finally, we show that recursive constant-depth determinate clauses are not learnable. Our primary technical tool is the method of predictionpreserving reducibilities introduced by Pitt and Warmuth [1990]; as a consequence our results are independent of the representations used by the learning system.) <|cite_end|>, Cohen had proved a related but weaker result that
requires relation symbols of arity three. The work of Hirata <|cite_start|> (Reference: Prediction-hardness of acyclic conjunctive queries: ) <|cite_end|>, in a similar vein,
shows that there is even a fixed database on which efficient
PAC prediction (and thus also learning) of acyclic CQs is impossible -- a stronger condition than single-instance example distributions. The result, however, requires ternary relation symbols and CQs of unbounded arity.
We also remark that it follows from general results of Schapire, see Section~6.3 of <|cite_start|> (Reference: The Strength of Weak Learnability: ) <|cite_end|>, that any class of CQs that is NP-hard to evaluate is not efficiently PAC-predictable unless $\text{NP} \subseteq \text{P/poly}.$
}
We consider, in this note, classes of CQs defined through
acyclicity conditions. In the literature on inductive logic programming (ILP) various positive and negative
PAC learnability results have been obtained for classes of CQs
defined by different means (e.g., limitations on the use of existential variables, determinacy conditions pertaining to functional relations, and restricted variable depth). These are orthogonal to acyclicity. An overview can be found in \cite[Chapter 18]{ChengWolf:1997}.
In <|cite_start|> (Reference: Learning schema mappings: A schema mapping is a high-level specification of the relationship between a source schema and a target schema. Recently, a line of research has emerged that aims at deriving schema mappings automatically or semi-automatically with the help of data examples, that is, pairs consisting of a source instance and a target instance that depict, in some precise sense, the intended behavior of the schema mapping. Several different uses of data examples for deriving, refining, or illustrating a schema mapping have already been proposed and studied.
In this article, we use the lens of computational learning theory to systematically investigate the problem of obtaining algorithmically a schema mapping from data examples. Our aim is to leverage the rich body of work on learning theory in order to develop a framework for exploring the power and the limitations of the various algorithmic methods for obtaining schema mappings from data examples. We focus on GAV schema mappings, that is, schema mappings specified by GAV (Global-As-View) constraints. GAV constraints are the most basic and the most widely supported language for specifying schema mappings. We present an efficient algorithm for learning GAV schema mappings using Angluin's model of exact learning with membership and equivalence queries. This is optimal, since we show that neither membership queries nor equivalence queries suffice, unless the source schema consists of unary relations only. We also obtain results concerning the learnability of schema mappings in the context of Valiant's well-known PAC (Probably-Approximately-Correct) learning model, and concerning the learnability of restricted classes of GAV schema mappings. Finally, as a byproduct of our work, we show that there is no efficient algorithm for approximating the shortest GAV schema mapping fitting a given set of examples, unless the source schema consists of unary relations only.) <|cite_end|>, the authors study learnability of
\emph{GAV schema mappings}, which are closely related to
\emph{Unions of Conjunctive Queries (UCQs)}.
Specifically, it was proved in <|cite_start|> (Reference: Learning schema mappings: A schema mapping is a high-level specification of the relationship between a source schema and a target schema. Recently, a line of research has emerged that aims at deriving schema mappings automatically or semi-automatically with the help of data examples, that is, pairs consisting of a source instance and a target instance that depict, in some precise sense, the intended behavior of the schema mapping. Several different uses of data examples for deriving, refining, or illustrating a schema mapping have already been proposed and studied.
In this article, we use the lens of computational learning theory to systematically investigate the problem of obtaining algorithmically a schema mapping from data examples. Our aim is to leverage the rich body of work on learning theory in order to develop a framework for exploring the power and the limitations of the various algorithmic methods for obtaining schema mappings from data examples. We focus on GAV schema mappings, that is, schema mappings specified by GAV (Global-As-View) constraints. GAV constraints are the most basic and the most widely supported language for specifying schema mappings. We present an efficient algorithm for learning GAV schema mappings using Angluin's model of exact learning with membership and equivalence queries. This is optimal, since we show that neither membership queries nor equivalence queries suffice, unless the source schema consists of unary relations only. We also obtain results concerning the learnability of schema mappings in the context of Valiant's well-known PAC (Probably-Approximately-Correct) learning model, and concerning the learnability of restricted classes of GAV schema mappings. Finally, as a byproduct of our work, we show that there is no efficient algorithm for approximating the shortest GAV schema mapping fitting a given set of examples, unless the source schema consists of unary relations only.) <|cite_end|> that GAV schema
mappings are not efficiently PAC learnable, assuming $\text{RP} \neq \text{NP}$, on
source schemas that contain at least one relation symbol of arity at
least two, using a reduction from the non-PAC-learnability of
propositional formulas in positive DNF. This result immediately
implies that, for any schema $\mathbf{S}$ containing a relation
symbol of arity at least two, and for each $k\geq 0$,
the class of $k$-ary UCQs over $\mathbf{S}$ is not efficiently PAC learnable, assuming $\text{RP} \neq \text{NP}$. Additionally, in <|cite_start|> (Reference: Learning schema mappings: A schema mapping is a high-level specification of the relationship between a source schema and a target schema. Recently, a line of research has emerged that aims at deriving schema mappings automatically or semi-automatically with the help of data examples, that is, pairs consisting of a source instance and a target instance that depict, in some precise sense, the intended behavior of the schema mapping. Several different uses of data examples for deriving, refining, or illustrating a schema mapping have already been proposed and studied.
In this article, we use the lens of computational learning theory to systematically investigate the problem of obtaining algorithmically a schema mapping from data examples. Our aim is to leverage the rich body of work on learning theory in order to develop a framework for exploring the power and the limitations of the various algorithmic methods for obtaining schema mappings from data examples. We focus on GAV schema mappings, that is, schema mappings specified by GAV (Global-As-View) constraints. GAV constraints are the most basic and the most widely supported language for specifying schema mappings. We present an efficient algorithm for learning GAV schema mappings using Angluin's model of exact learning with membership and equivalence queries. This is optimal, since we show that neither membership queries nor equivalence queries suffice, unless the source schema consists of unary relations only. We also obtain results concerning the learnability of schema mappings in the context of Valiant's well-known PAC (Probably-Approximately-Correct) learning model, and concerning the learnability of restricted classes of GAV schema mappings. Finally, as a byproduct of our work, we show that there is no efficient algorithm for approximating the shortest GAV schema mapping fitting a given set of examples, unless the source schema consists of unary relations only.) <|cite_end|>, the authors completely map out the (non-)learnability of restricted classes of UCQs definable by conditions on their Gaifman graph.
There is also another line of work on PAC learnability of conjunctive queries <|cite_start|> (Reference: A Dichotomy Theorem for Learning Quantified Boolean Formulas: ) <|cite_end|> <|cite_start|> (Reference: Learnability of quantified formulas: ) <|cite_end|> <|cite_start|> (Reference: Learnability of Solutions to Conjunctive Queries: The Full Dichotomy: The problem of learning the solution space of an unknown formula has been studied in multiple embodiments in computational learning theory. In this article, we study a family of such learning problems; this family contains, for each relational structure, the problem of learning the solution space of an unknown conjunctive query evaluated on the structure. A progression of results aimed to classify the learnability of each of the problems in this family, and thus far a culmination thereof was a positive learnability result generalizing all previous ones. This article completes the classification program towards which this progression of results strived, by presenting a negative learnability result that complements the mentioned positive learnability result. In order to obtain our negative result, we make use of universal-algebraic concepts, and our result is phrased in terms of the varietal property of non-congruence modularity.) <|cite_end|> that is somewhat different in nature: one fixes a schema $\mathbf{S}$ and an $\mathbf{S}$-instance $I$ and defines a concept class where the concepts are now all relations over the active domain of $I$ definable by a $k$-ary CQ (as evaluated in $I$).
PAC learning for various classes of Boolean formulas, such as
3-CNF, can be seen as a special case of this framework, for
a specific choice of schema $\mathbf{S}$ and (two-element) instance $I$, where $k$ then
corresponds to the number of Boolean variables.
Since, for a fixed choice of $k$,
this yields a finite concept class,
in this setting, one is interested
in the complexity of PAC learning as a function of $k$.
The mentioned papers establish effective dichotomies, showing that, depending on the
choice of $\mathbf{S}$ and $I$,
this concept class is either efficiently PAC learnable in $k$ or is not even efficiently PAC predictable with membership queries in $k$ (under suitable cryptographic assumptions). See also Remark~\ref{rem:prediction-vs-learning} below. <|paper_end|> | [
"<|reference_start|> The complexity of homomorphism and constraint satisfaction problems\nseen from the other side: We give a complexity theoretic classification of homomorphism problems for graphs and, more generally, relational structures obtained by restricting the left hand side structure in a homomorphism. For every class C of structures, let HOM(C, /spl I.bar/) be the problem of deciding whether a given structure A /spl isin/ C has a homomorphism to a given (arbitrary) structure B. We prove that, under some complexity theoretic assumption from parameterized complexity theory, HOM(C, /spl I.bar/) is in polynomial time if, and only if, the cores of all structures in C have bounded tree-width (as long as the structures in C only contain relations of bounded arity). Due to a well known correspondence between homomorphism problems and constraint satisfaction problems, our classification carries over to the latter. <|reference_end|>",
"<|reference_start|> Learning Description Logic Concepts: When can Positive and Negative Examples be Separated? (Abstract): Learning description logic (DL) concepts from positive and negative examples given in the form of labeled data items in a KB has received significant attention in the literature. We study the fundamental question of when a separating DL concept exists and provide useful model-theoretic characterizations as well as complexity results for the associated decision problem. For expressive DLs such as ALC and ALCQI, our characterizations show a surprising link to the evaluation of ontology-mediated conjunctive queries. We exploit this to determine the combined complexity (between ExpTime and NExpTime) and data complexity (second level of the polynomial hierarchy) of separability. For the Horn DL EL, separability is ExpTime-complete both in combined and in data complexity while for its modest extension ELI it is even undecidable. Separability is also undecidable when the KB is formulated in ALC and the separating concept is required to be in EL or ELI. <|reference_end|>",
"<|reference_start|> Learning Conjunctive Concepts in Structural Domains: <|reference_end|>",
"<|reference_start|> A Dichotomy Theorem for Learning Quantified Boolean Formulas: <|reference_end|>"
] | [
1,
10,
20,
29
] | {"<|cite_1|>": "ss-2273859", "<|multi_cite_2_1|>": "ss-966634", "<|multi_cite_2_2|>": "arxiv-9860", "<|cite_3|>": "ss-1846452", "<|multi_cite_4_1|>": "ss-1772245", "<|multi_cite_4_2|>": "ss-1146105", "<|cite_5|>": "ss-1268400", "<|multi_cite_6_1|>": "ss-927029", "<|multi_cite_6_2|>": "ss-2590058", "<|multi_cite_7_1|>": "ss-2590058", "<|multi_cite_7_3|>": "ss-1268400", "<|cite_8|>": "ss-734080", "<|cite_9|>": "ss-1262245", "<|cite_10|>": "ss-740830", "<|cite_11|>": "arxiv-284723", "<|cite_12|>": "ss-1772246", "<|cite_13|>": "ss-740830", "<|cite_14|>": "ss-1262245", "<|cite_15|>": "ss-1146105", "<|cite_16|>": "ss-1772245", "<|cite_17|>": "ss-1146105", "<|cite_18|>": "ss-1772245", "<|cite_19|>": "ss-1772247", "<|cite_20|>": "ss-1772248", "<|cite_21|>": "ss-1772249", "<|cite_22|>": "ss-1341464", "<|cite_23|>": "ss-740830", "<|cite_24|>": "ss-740830", "<|cite_25|>": "ss-740830", "<|multi_cite_26_1|>": "ss-1772250", "<|multi_cite_26_2|>": "ss-2103477", "<|multi_cite_26_3|>": "ss-1772251"} |
1605.02287-1 | <|cite_start|> (Reference: ARIADNE: A dynamic indoor signal map construction and localization system: Location determination of mobile users within a building has attracted much attention lately due to its many applications in mobile networking including network intrusion detection problems. However, it is challenging due to the complexities of the indoor radio propagation characteristics exacerbated by the mobility of the user. A common practice is to mechanically generate a table showing the radio signal strength at different known locations in the building. A mobile user's location at an arbitrary point in the building is determined by measuring the signal strength at the location in question and determining the location by referring to the above table using a LMSE (least mean square error) criterion. Obviously, this is a very tedious and time consuming task. This paper proposes a novel and automated location determination method called ARIADNE. Using a two dimensional construction floor plan and only a single actual signal strength measurement, ARIADNE generates an estimated signal strength map comparable to those generated manually by actual measurements. Given the signal measurements for a mobile, a proposed clustering algorithm searches that signal strength map to determine the current mobile's location. The results from ARIADNE are comparable and may even be superior to those from existing localization schemes.) <|cite_end|>and Aroma <|cite_start|> (Reference: Synthetic generation of radio maps for device-free passive localization: In this paper, we present the design, implementation, and evaluation of a system that automatically constructs accurate radio maps for device-free WLAN localization systems. The system is capable of generating deterministic and probabilistic radio maps for localization systems. Our system uses 3D ray tracing enhanced with the uniform theory of diffraction (UTD) to model the electric field behavior and the human shadowing effect. We present our system architecture and describe the details of its different components. We also propose an optional module, location-0 correction, that can significantly enhances the system accuracy and reduces its dependence on the 3D model details by using just one signal strength sample. Our experiments in a real testbed show that the predicted signal strength differs from the measurements by a maximum average absolute error of $2.77$ dB achieving a maximum localization error of $3.13$m and $2.84$m for both the deterministic and probabilistic radio maps, respectively. In addition, the results show that our system is not sensitive to the 3D model details.) <|cite_end|>systems use ray tracing models to get better RSS estimation in 2D and 3D respectively. These systems, however, still require samples from the environment to calibrate the model, require high computational requirements for ray tracing,
and the model parameters still depend on the specific phone used for measurements.
\emph{\sys{}, in contrast, requires neither user participation nor calibration measurements, and handles heterogeneous devices naturally.}
\subsection{Range-based Systems}
Range-based systems, typically used in sensor networks, e.g. <|cite_start|> (Reference: A taxonomy of localization schemes for wireless sensor networks: Knowledge of nodes’ locations is an essential requirement for many applications. This paper surveys the current state of the art for localization schemes in sensor networks. We present a taxonomy of the localization schemes for sensor networks based on different features. We then describe how the current localization schemes for sensor networks map to these different features. We believe that this paper serves as an introduction for researchers interested in the area of localization schemes for sensor networks as well as in evaluating the characteristics of a location system needed by a particular application or the suitability of an existing location system for) <|cite_end|> <|cite_start|> (Reference: A distributed localization scheme for wireless sensor networks with improved grid-scan and vector-based refinement: Localization is a fundamental and essential issue for wireless sensor networks (WSNs). Existing localization algorithms can be categorized as either range-based or range-free schemes. Range-based schemes are not suitable for WSNs because of their irregularity of radio propagation and their cost of additional devices. In contrast, range-free schemes do not need to use received signal strength to estimate distances and only need simple and cheap hardware, and are thus more suitable for WSNs. However, existing range-free schemes are too costly and not accurate enough or are not scalable. To improve previous work, we present a fully distributed range-free localization scheme for WSNs. We assume that only a few sensor nodes, called anchors, know their locations, and the remaining (normal) nodes need to estimate their own locations by gathering nearby neighboring information. We propose an improved grid-scan algorithm to find the estimated locations of the normal nodes. Furthermore, we derive a vector-based refinement scheme to improve the accuracy of the estimated locations. Analysis, simulation, and experiment results show that our scheme outperforms the other range-free schemes even when the communication radius is irregular.) <|cite_end|> <|cite_start|> (Reference: An effective area-based localization algorithm for wireless networks: Area-based localization algorithms use only the position of some reference nodes, called anchors, to estimate the residence area of the remaining nodes. Existing algorithms use a triangle, a ring or a circle as the geometric shape that defines the node's residence area. However, existing algorithms suffer from two major problems: (1) in some cases, they might make wrong decisions about a node presence inside a given area, or (2) they require high anchor density to achieve a low location estimation error and high ratio of localizable nodes. In this paper, we overcome these shortcomings by introducing a new approach for determining the node's residence area that is geometrically shaped as a half-symmetric lens. A novel half symmetric lens based localization algorithm (HSL) is proposed. HSL yields smaller residence areas, and consequently, better location accuracy than contemporary schemes. HSL further employs Voronoi diagram in order to boost the percentage of localizable nodes. The performance of HSL is validated through mathematical analysis, extensive simulations experiments and prototype implementation. 
The validation results confirm that HSL achieves better location accuracy and higher ratio of localizable nodes compared to competing algorithms.) <|cite_end|> <|cite_start|> (Reference: Ecolocation: a sequence based technique for RF localization in wireless sensor networks: In this paper we present a novel sequence-based RF localization algorithm called Ecolocation. Our algorithm determines the location of unknown nodes by examining the ordered sequence of received signal strength (RSS) measurements taken at multiple reference nodes. We employ a constraint-based approach that provides for robust location decoding even in the presence of random RSS fluctuations due to multi-path fading and shadowing. Through extensive systematic simulations, and a representative set of real mote experiments, we show that over a wide range of settings Ecolocation performs better than other state of the art approaches in terms of localization accuracy and precision.) <|cite_end|> <|cite_start|> (Reference: An improved localization algorithm in wireless sensor network: An improved localization algorithm W-Ecolocation in wireless sensor network is proposed. When there is a high erroneous constraints rate, the Ecolocation algorithm which only considers the locations with maximum computing number of matched constraints cannot obtain the optimal location estimate. The W-Ecolocation makes use of more locations with great computing number of matched constraints and takes the weighted average as the location estimate of the unknown node. The simulation results show that W-Ecolocation performs better than Ecolocation in terms of location error and location precision. A new approach to compute the number of matched constraints is also presented. It reduces both the time complexity and space complexity.) <|cite_end|>, attempt to completely eliminate the dependence on a fingerprint by solving the localization problem geometrically.
To accomplish this, however, it is assumed in <|cite_start|> (Reference: A distributed localization scheme for wireless sensor networks with improved grid-scan and vector-based refinement: Localization is a fundamental and essential issue for wireless sensor networks (WSNs). Existing localization algorithms can be categorized as either range-based or range-free schemes. Range-based schemes are not suitable for WSNs because of their irregularity of radio propagation and their cost of additional devices. In contrast, range-free schemes do not need to use received signal strength to estimate distances and only need simple and cheap hardware, and are thus more suitable for WSNs. However, existing range-free schemes are too costly and not accurate enough or are not scalable. To improve previous work, we present a fully distributed range-free localization scheme for WSNs. We assume that only a few sensor nodes, called anchors, know their locations, and the remaining (normal) nodes need to estimate their own locations by gathering nearby neighboring information. We propose an improved grid-scan algorithm to find the estimated locations of the normal nodes. Furthermore, we derive a vector-based refinement scheme to improve the accuracy of the estimated locations. Analysis, simulation, and experiment results show that our scheme outperforms the other range-free schemes even when the communication radius is irregular.) <|cite_end|>that the radio propagation model is a perfect circle which is unrealistic and leads to errors. To address this issue, the system proposed in <|cite_start|> (Reference: An effective area-based localization algorithm for wireless networks: Area-based localization algorithms use only the position of some reference nodes, called anchors, to estimate the residence area of the remaining nodes. Existing algorithms use a triangle, a ring or a circle as the geometric shape that defines the node's residence area. However, existing algorithms suffer from two major problems: (1) in some cases, they might make wrong decisions about a node presence inside a given area, or (2) they require high anchor density to achieve a low location estimation error and high ratio of localizable nodes. In this paper, we overcome these shortcomings by introducing a new approach for determining the node's residence area that is geometrically shaped as a half-symmetric lens. A novel half symmetric lens based localization algorithm (HSL) is proposed. HSL yields smaller residence areas, and consequently, better location accuracy than contemporary schemes. HSL further employs Voronoi diagram in order to boost the percentage of localizable nodes. The performance of HSL is validated through mathematical analysis, extensive simulations experiments and prototype implementation. The validation results confirm that HSL achieves better location accuracy and higher ratio of localizable nodes compared to competing algorithms.) <|cite_end|>uses a a half-symmetric lens primitive while <|cite_start|> (Reference: Ecolocation: a sequence based technique for RF localization in wireless sensor networks: In this paper we present a novel sequence-based RF localization algorithm called Ecolocation. Our algorithm determines the location of unknown nodes by examining the ordered sequence of received signal strength (RSS) measurements taken at multiple reference nodes. 
We employ a constraint-based approach that provides for robust location decoding even in the presence of random RSS fluctuations due to multi-path fading and shadowing. Through extensive systematic simulations, and a representative set of real mote experiments, we show that over a wide range of settings Ecolocation performs better than other state of the art approaches in terms of localization accuracy and precision.) <|cite_end|> <|cite_start|> (Reference: An improved localization algorithm in wireless sensor network: An improved localization algorithm W-Ecolocation in wireless sensor network is proposed. When there is a high erroneous constraints rate, the Ecolocation algorithm which only considers the locations with maximum computing number of matched constraints cannot obtain the optimal location estimate. The W-Ecolocation makes use of more locations with great computing number of matched constraints and takes the weighted average as the location estimate of the unknown node. The simulation results show that W-Ecolocation performs better than Ecolocation in terms of location error and location precision. A new approach to compute the number of matched constraints is also presented. It reduces both the time complexity and space complexity.) <|cite_end|>depend on the sensors sequence pattern.
However, these systems can only be applied to homogeneous networks, such as wireless sensor networks, where it is assumed that all sensors have the same properties and that all nodes can hear and communicate with each other; neither assumption holds for passive iBeacons. In addition, they are evaluated only through simulations. \emph{\sys{}, on the other hand, handles different transmit powers
and can work with nodes that do not exchange signals with one another, which is typically the case for iBeacons and WiFi APs. Moreover, it is designed to be robust to the dynamics of realistic environments.
}
\subsection{Heterogeneity Handling Techniques}
To handle the devices heterogeneity, a number of approaches have been proposed that either map the fingerprint constructed by one device to another <|cite_start|> (Reference: Implications of device diversity for organic localization: Many indoor localization methods are based on the association of 802.11 wireless RF signals from wireless access points (WAPs) with location labels. An “organic” RF positioning system relies on regular users, not dedicated surveyors, to build the map of RF fingerprints to location labels. However, signal variation due to device heterogeneity may degrade localization performance. We analyze the diversity of those signal characteristics pertinent to indoor localization — signal strength and AP detection — as measured by a variety of 802.11 devices. We first analyze signal strength diversity, and show that pairwise linear transformation alone does not solve the problem. We propose kernel estimation with a wide kernel width to reduce the difference in probability estimates. We also investigate diversity in access point detection. We demonstrate that localization performance may degrade significantly when AP detection rate is used as a feature for localization, and correlate the loss of performance to a device dissimilarity measure captured by Kullback-Leibler divergence. Based on this analysis, we show that using only signal strength, without incorporating negative evidence, achieves good localization performance when devices are heterogeneous.) <|cite_end|> <|cite_start|> (Reference: Practical Robust Localization Over Large-Scale 802.11 Wireless Networks: We demonstrate a system built using probabilistic techniques that allows for remarkably accurate localization across our entire office building using nothing more than the built-in signal intensity meter supplied by standard 802.11 cards. While prior systems have required significant investments of human labor to build a detailed signal map, we can train our system by spending less than one minute per office or region, walking around with a laptop and recording the observed signal intensities of our building's unmodified base stations. We actually collected over two minutes of data per office or region, about 28 man-hours of effort. Using less than half of this data to train the localizer, we can localize a user to the precise, correct location in over 95% of our attempts, across the entire building. Even in the most pathological cases, we almost never localize a user any more distant than to the neighboring office. A user can obtain this level of accuracy with only two or three signal intensity measurements, allowing for a high frame rate of localization results. Furthermore, with a brief calibration period, our system can be adapted to work with previously unknown user hardware. We present results demonstrating the robustness of our system against a variety of untrained time-varying phenomena, including the presence or absence of people in the building across the day. Our system is sufficiently robust to enable a variety of location-aware applications without requiring special-purpose hardware or complicated training and calibration procedures.) <|cite_end|> <|cite_start|> (Reference: Covariate Shift in Hilbert Space: A Solution via Sorrogate Kernels: Covariate shift is an unconventional learning scenario in which training and testing data have different distributions. 
A general principle to solve the problem is to make the training data distribution similar to that of the test domain, such that classifiers computed on the former generalize well to the latter. Current approaches typically target on sample distributions in the input space, however, for kernel-based learning methods, the algorithm performance depends directly on the geometry of the kernel-induced feature space. Motivated by this, we propose to match data distributions in the Hilbert space, which, given a pre-defined empirical kernel map, can be formulated as aligning kernel matrices across domains. In particular, to evaluate similarity of kernel matrices defined on arbitrarily different samples, the novel concept of surrogate kernel is introduced based on the Mercer's theorem. Our approach caters the model adaptation specifically to kernel-based learning mechanism, and demonstrates promising results on several real-world applications.) <|cite_end|>or use features that are device-independent <|cite_start|> (Reference: Enabling wide deployment of GSM localization over heterogeneous phones: Wide deployment of GSM based location determination systems is a critical step towards moving existing systems to the real world. The main barrier towards this critical step is the heterogeneity of existing types of cell phones which results in different readings of received signal strength. Specially, in the context of fingerprinting localization where offline phases are needed for system training and different types of phones may be used in the offline and the online phases. Therefore, a mapping function, that maps the RSSI values between different types of cell phones, is inevitably needed. A trivial solution is to build a radio map for each type of phone. Obviously, this solution can neither scale in terms of number of phone types nor fingerprint size. In this paper, we address this problem by proposing the following two-way approach: A mathematical approach that maps RSSI values of different types of phones using linear transformation with regression, or logging ratios of readings instead of absolute values. We have empirically evaluated the proposed approach on Android-based phones. Our experimental results show that applying our approach can improve location accuracy with at least 127.84% in multiple cell tower configuration and at least 22.11% in the single cell tower configuration compared to the state-of-the-art GSM localization systems.) <|cite_end|> <|cite_start|> (Reference: Transferring multi-device localization models using latent multi-task learning: In this paper, we propose a latent multi-task learning algorithm to solve the multi-device indoor localization problem. Traditional indoor localization systems often assume that the collected signal data distributions are fixed, and thus the localization model learned on one device can be used on other devices without adaptation. However, by empirically studying the signal variation over different devices, we found this assumption to be invalid in practice. To solve this problem, we treat multiple devices as multiple learning tasks, and propose a multi-task learning algorithm. Different from algorithms assuming that the hypotheses learned from the original data space for related tasks can be similar, we only require the hypotheses learned in a latent feature space are similar. To establish our algorithm, we employ an alternating optimization approach to iteratively learn feature mappings and multi-task regression models for the devices. 
We apply our latent multi-task learning algorithm to real-world indoor localization data and demonstrate its effectiveness.) <|cite_end|>. For the first category, linear <|cite_start|> (Reference: Implications of device diversity for organic localization: Many indoor localization methods are based on the association of 802.11 wireless RF signals from wireless access points (WAPs) with location labels. An “organic” RF positioning system relies on regular users, not dedicated surveyors, to build the map of RF fingerprints to location labels. However, signal variation due to device heterogeneity may degrade localization performance. We analyze the diversity of those signal characteristics pertinent to indoor localization — signal strength and AP detection — as measured by a variety of 802.11 devices. We first analyze signal strength diversity, and show that pairwise linear transformation alone does not solve the problem. We propose kernel estimation with a wide kernel width to reduce the difference in probability estimates. We also investigate diversity in access point detection. We demonstrate that localization performance may degrade significantly when AP detection rate is used as a feature for localization, and correlate the loss of performance to a device dissimilarity measure captured by Kullback-Leibler divergence. Based on this analysis, we show that using only signal strength, without incorporating negative evidence, achieves good localization performance when devices are heterogeneous.) <|cite_end|>, non-linear <|cite_start|> (Reference: Practical Robust Localization Over Large-Scale 802.11 Wireless Networks: We demonstrate a system built using probabilistic techniques that allows for remarkably accurate localization across our entire office building using nothing more than the built-in signal intensity meter supplied by standard 802.11 cards. While prior systems have required significant investments of human labor to build a detailed signal map, we can train our system by spending less than one minute per office or region, walking around with a laptop and recording the observed signal intensities of our building's unmodified base stations. We actually collected over two minutes of data per office or region, about 28 man-hours of effort. Using less than half of this data to train the localizer, we can localize a user to the precise, correct location in over 95% of our attempts, across the entire building. Even in the most pathological cases, we almost never localize a user any more distant than to the neighboring office. A user can obtain this level of accuracy with only two or three signal intensity measurements, allowing for a high frame rate of localization results. Furthermore, with a brief calibration period, our system can be adapted to work with previously unknown user hardware. We present results demonstrating the robustness of our system against a variety of untrained time-varying phenomena, including the presence or absence of people in the building across the day. Our system is sufficiently robust to enable a variety of location-aware applications without requiring special-purpose hardware or complicated training and calibration procedures.) <|cite_end|>, and probabilistic <|cite_start|> (Reference: Covariate Shift in Hilbert Space: A Solution via Sorrogate Kernels: Covariate shift is an unconventional learning scenario in which training and testing data have different distributions. 
A general principle to solve the problem is to make the training data distribution similar to that of the test domain, such that classifiers computed on the former generalize well to the latter. Current approaches typically target on sample distributions in the input space, however, for kernel-based learning methods, the algorithm performance depends directly on the geometry of the kernel-induced feature space. Motivated by this, we propose to match data distributions in the Hilbert space, which, given a pre-defined empirical kernel map, can be formulated as aligning kernel matrices across domains. In particular, to evaluate similarity of kernel matrices defined on arbitrarily different samples, the novel concept of surrogate kernel is introduced based on the Mercer's theorem. Our approach caters the model adaptation specifically to kernel-based learning mechanism, and demonstrates promising results on several real-world applications.) <|cite_end|>mappings have been applied with different accuracies. Device-independent features approaches either use specific features, e.g. the ratio between different APs RSS <|cite_start|> (Reference: Enabling wide deployment of GSM localization over heterogeneous phones: Wide deployment of GSM based location determination systems is a critical step towards moving existing systems to the real world. The main barrier towards this critical step is the heterogeneity of existing types of cell phones which results in different readings of received signal strength. Specially, in the context of fingerprinting localization where offline phases are needed for system training and different types of phones may be used in the offline and the online phases. Therefore, a mapping function, that maps the RSSI values between different types of cell phones, is inevitably needed. A trivial solution is to build a radio map for each type of phone. Obviously, this solution can neither scale in terms of number of phone types nor fingerprint size. In this paper, we address this problem by proposing the following two-way approach: A mathematical approach that maps RSSI values of different types of phones using linear transformation with regression, or logging ratios of readings instead of absolute values. We have empirically evaluated the proposed approach on Android-based phones. Our experimental results show that applying our approach can improve location accuracy with at least 127.84% in multiple cell tower configuration and at least 22.11% in the single cell tower configuration compared to the state-of-the-art GSM localization systems.) <|cite_end|>, or try to learn the device independent features automatically (e.g. by applying latent multi-task learning to labeled data <|cite_start|> (Reference: Transferring multi-device localization models using latent multi-task learning: In this paper, we propose a latent multi-task learning algorithm to solve the multi-device indoor localization problem. Traditional indoor localization systems often assume that the collected signal data distributions are fixed, and thus the localization model learned on one device can be used on other devices without adaptation. However, by empirically studying the signal variation over different devices, we found this assumption to be invalid in practice. To solve this problem, we treat multiple devices as multiple learning tasks, and propose a multi-task learning algorithm. 
Different from algorithms assuming that the hypotheses learned from the original data space for related tasks can be similar, we only require the hypotheses learned in a latent feature space are similar. To establish our algorithm, we employ an alternating optimization approach to iteratively learn feature mappings and multi-task regression models for the devices. We apply our latent multi-task learning algorithm to real-world indoor localization data and demonstrate its effectiveness.) <|cite_end|>).
These techniques, however, require calibration between the different devices, and the mapping function may not always be accurate, which degrades the localization accuracy. \emph{\sys{}, on the other hand, depends on the relative relationship between RSS and distance, which is device-independent and therefore does not require any calibration.
} <|paper_end|> | [
"<|reference_start|> Synthetic generation of radio maps for device-free passive localization: In this paper, we present the design, implementation, and evaluation of a system that automatically constructs accurate radio maps for device-free WLAN localization systems. The system is capable of generating deterministic and probabilistic radio maps for localization systems. Our system uses 3D ray tracing enhanced with the uniform theory of diffraction (UTD) to model the electric field behavior and the human shadowing effect. We present our system architecture and describe the details of its different components. We also propose an optional module, location-0 correction, that can significantly enhances the system accuracy and reduces its dependence on the 3D model details by using just one signal strength sample. Our experiments in a real testbed show that the predicted signal strength differs from the measurements by a maximum average absolute error of $2.77$ dB achieving a maximum localization error of $3.13$m and $2.84$m for both the deterministic and probabilistic radio maps, respectively. In addition, the results show that our system is not sensitive to the 3D model details. <|reference_end|>",
"<|reference_start|> An improved localization algorithm in wireless sensor network: An improved localization algorithm W-Ecolocation in wireless sensor network is proposed. When there is a high erroneous constraints rate, the Ecolocation algorithm which only considers the locations with maximum computing number of matched constraints cannot obtain the optimal location estimate. The W-Ecolocation makes use of more locations with great computing number of matched constraints and takes the weighted average as the location estimate of the unknown node. The simulation results show that W-Ecolocation performs better than Ecolocation in terms of location error and location precision. A new approach to compute the number of matched constraints is also presented. It reduces both the time complexity and space complexity. <|reference_end|>",
"<|reference_start|> Implications of device diversity for organic localization: Many indoor localization methods are based on the association of 802.11 wireless RF signals from wireless access points (WAPs) with location labels. An “organic” RF positioning system relies on regular users, not dedicated surveyors, to build the map of RF fingerprints to location labels. However, signal variation due to device heterogeneity may degrade localization performance. We analyze the diversity of those signal characteristics pertinent to indoor localization — signal strength and AP detection — as measured by a variety of 802.11 devices. We first analyze signal strength diversity, and show that pairwise linear transformation alone does not solve the problem. We propose kernel estimation with a wide kernel width to reduce the difference in probability estimates. We also investigate diversity in access point detection. We demonstrate that localization performance may degrade significantly when AP detection rate is used as a feature for localization, and correlate the loss of performance to a device dissimilarity measure captured by Kullback-Leibler divergence. Based on this analysis, we show that using only signal strength, without incorporating negative evidence, achieves good localization performance when devices are heterogeneous. <|reference_end|>",
"<|reference_start|> Covariate Shift in Hilbert Space: A Solution via Sorrogate Kernels: Covariate shift is an unconventional learning scenario in which training and testing data have different distributions. A general principle to solve the problem is to make the training data distribution similar to that of the test domain, such that classifiers computed on the former generalize well to the latter. Current approaches typically target on sample distributions in the input space, however, for kernel-based learning methods, the algorithm performance depends directly on the geometry of the kernel-induced feature space. Motivated by this, we propose to match data distributions in the Hilbert space, which, given a pre-defined empirical kernel map, can be formulated as aligning kernel matrices across domains. In particular, to evaluate similarity of kernel matrices defined on arbitrarily different samples, the novel concept of surrogate kernel is introduced based on the Mercer's theorem. Our approach caters the model adaptation specifically to kernel-based learning mechanism, and demonstrates promising results on several real-world applications. <|reference_end|>"
] | [
1,
6,
16,
18
] | {"<|multi_cite_1_1|>": "ss-1008940", "<|multi_cite_1_2|>": "ss-1008941", "<|multi_cite_1_3|>": "ss-1005212", "<|multi_cite_1_4|>": "ss-1005216", "<|multi_cite_1_5|>": "ss-1008942", "<|multi_cite_1_6|>": "ss-1438095", "<|multi_cite_1_7|>": "arxiv-16730", "<|multi_cite_1_8|>": "ss-1008943", "<|multi_cite_1_9|>": "ss-1005215", "<|cite_2|>": "ss-1008944", "<|multi_cite_3_1|>": "ss-1008945", "<|multi_cite_3_2|>": "ss-1049447", "<|multi_cite_3_3|>": "ss-1702741", "<|multi_cite_4_1|>": "ss-1005213", "<|multi_cite_4_2|>": "ss-679152", "<|multi_cite_4_3|>": "ss-1008946", "<|multi_cite_5_1|>": "ss-1008945", "<|multi_cite_5_2|>": "ss-1049447", "<|multi_cite_6_1|>": "ss-1008947", "<|multi_cite_6_2|>": "ss-957386", "<|multi_cite_6_3|>": "ss-1008948", "<|multi_cite_6_4|>": "ss-1008949", "<|cite_7|>": "ss-1008950", "<|cite_8|>": "ss-679153", "<|cite_9|>": "ss-803926", "<|cite_10|>": "ss-1008951", "<|multi_cite_11_1|>": "ss-1008945", "<|multi_cite_11_2|>": "arxiv-99818", "<|multi_cite_11_3|>": "arxiv-81892", "<|multi_cite_11_4|>": "ss-1008952", "<|multi_cite_11_5|>": "arxiv-72862", "<|multi_cite_11_6|>": "arxiv-51290", "<|cite_12|>": "ss-1008944", "<|multi_cite_13_1|>": "ss-1008945", "<|multi_cite_13_2|>": "ss-1049447", "<|multi_cite_13_3|>": "ss-1702741", "<|multi_cite_13_4|>": "arxiv-72862", "<|multi_cite_13_5|>": "arxiv-51290", "<|multi_cite_14_1|>": "ss-679152", "<|multi_cite_14_2|>": "ss-1008946", "<|multi_cite_14_3|>": "ss-1005213", "<|multi_cite_15_1|>": "ss-1008945", "<|multi_cite_15_2|>": "ss-1049447", "<|multi_cite_15_3|>": "ss-1008953", "<|cite_16|>": "ss-1008944", "<|multi_cite_17_1|>": "ss-1049447", "<|multi_cite_17_2|>": "ss-1008945", "<|cite_18|>": "ss-1702741", "<|cite_19|>": "arxiv-68482", "<|multi_cite_20_1|>": "ss-679152", "<|multi_cite_20_2|>": "ss-1008946", "<|multi_cite_20_3|>": "ss-1005213", "<|cite_21|>": "ss-679152", "<|cite_22|>": "ss-1008946", "<|cite_23|>": "ss-1005213", "<|multi_cite_24_1|>": "ss-1008954", "<|multi_cite_24_2|>": "ss-1008955", "<|multi_cite_24_3|>": "ss-1008956", "<|multi_cite_24_4|>": "ss-1008957", "<|multi_cite_24_5|>": "ss-1008958", "<|cite_25|>": "ss-1008955", "<|cite_26|>": "ss-1008956", "<|multi_cite_27_1|>": "ss-1008957", "<|multi_cite_27_2|>": "ss-1008958", "<|multi_cite_28_1|>": "ss-1008959", "<|multi_cite_28_2|>": "ss-957386", "<|multi_cite_28_3|>": "ss-1008948", "<|multi_cite_29_1|>": "ss-1008947", "<|multi_cite_29_2|>": "ss-1008949", "<|cite_30|>": "ss-1008959", "<|cite_31|>": "ss-957386", "<|cite_32|>": "ss-1008948", "<|cite_33|>": "ss-1008947", "<|cite_34|>": "ss-1008949"} |
2402.17442 | <|paper_start|> Title: Insights from the Usage of the Ansible Lightspeed Code Completion Service
Abstract: Insights from the Usage of the Ansible Lightspeed Code Completion Service: The availability of Large Language Models (LLMs) that can generate code has made it possible to create tools that improve developer productivity. Integrated development environments (IDEs), which developers use to write software, are often used as an interface to interact with LLMs. Although many such tools have been released, almost all of them focus on general-purpose programming languages. Domain-specific languages, such as those crucial for Information Technology (IT) automation, have not received much attention. Ansible is one such YAML-based IT automation-specific language. Ansible Lightspeed is an LLM-based service designed explicitly to generate Ansible YAML given a natural language prompt. This paper first presents the design and implementation of the Ansible Lightspeed service. We then evaluate its utility to developers using diverse indicators, including extended utilization, analysis of user-rejected suggestions, and analysis of user sentiment. The analysis is based on data collected for 10,696 real users, including 3,910 returning users. The code for the Ansible Lightspeed service and the analysis framework is made available for others to use. To our knowledge, our study is the first to involve thousands of users in evaluating code assistants for domain-specific languages. We propose an improved version of the user acceptance rate metric, and we are the first to present N-day user retention figures for a code completion tool. With our findings, we provide insights into the effectiveness of small, dedicated models in a domain-specific context. We hope this work serves as a reference for software engineering and machine learning researchers exploring code completion services for domain-specific languages in particular and programming languages in general.
Introduction
\label{sec: intro}
Ansible is a domain-specific language dedicated to IT automation and based on YAML, a plain-text data serialization language. A typical Ansible project consists of YAML files, which are organized into \emph{playbooks}~(programs) and \emph{roles}~(libraries).
Figure~\ref{fig:playbook_structure} shows an Ansible playbook with its sub-components.
Playbooks consist of \emph{plays}, each of which maps hosts to the tasks (sequential execution units) that run on those hosts.
Tasks contain a natural language description in the form of a \emph{name field}, a module name defining the action to execute, and keys~(or options) configuring the action.
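For illustration, a minimal playbook with this structure might look as follows (a hypothetical example written for this paper, not the playbook shown in the figure; \texttt{ansible.builtin.dnf} and \texttt{ansible.builtin.service} are standard built-in modules):
\begin{verbatim}
---
- name: Install and start a web server   # a play
  hosts: webservers                       # hosts the play targets
  become: true
  tasks:
    - name: Install nginx                 # task name (natural language)
      ansible.builtin.dnf:                # module defining the action
        name: nginx                       # keys/options configuring it
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
\end{verbatim}
Here the play maps the \texttt{webservers} host group to two tasks, and each task's name field is exactly the kind of natural language description that Ansible Lightspeed later uses as a prompt.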
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/ansible_playbook.png}
\caption{A typical Ansible playbook structure.}
\label{fig:playbook_structure}
\end{figure}
Ansible Lightspeed is a generative AI service that utilizes the IBM Watson Code Assistant (WCA) to produce code recommendations based on Ansible best practices.
WCA for Ansible is an extension of Ansible Wisdom <|cite_start|> (Reference: Automated Code generation for Information Technology Tasks in YAML through Large Language Models: The recent improvement in code generation capabilities due to the use of large language models has mainly benefited general purpose programming languages. Domain specific languages, such as the ones used for IT Automation, have received far less attention, despite involving many active developers and being an essential component of modern cloud platforms. This work focuses on the generation of Ansible-YAML, a widely used markup language for IT Automation. We present Ansible Wisdom, a natural-language to Ansible-YAML code generation tool, aimed at improving IT automation productivity. Ansible Wisdom is a transformer-based model, extended by training with a new dataset containing Ansible-YAML. We also develop two novel performance metrics for YAML and Ansible to capture the specific characteristics of this domain. Results show that Ansible Wisdom can accurately generate Ansible script from natural language prompts with performance comparable or better than existing state of the art code generation models. In few-shot settings we asses the impact of training with Ansible, YAML data and compare with different baselines including Codex-Davinci-002. We also show that after finetuning, our Ansible specific model (BLEU: 66.67) can outperform a much larger Codex-Davinci-002 (BLEU: 50.4) model, which was evaluated in few shot settings.) <|cite_end|>, which is a transformer decoder model of 350 million parameters trained on custom natural language, code and Ansible data.
An example of a user interaction with Ansible Lightspeed is presented in Figure~\ref{fig:ansible_lightspeed_flow}.
With Ansible Lightspeed, you can build an Ansible playbook step by step by providing a natural language description of each Ansible task in its name field. The description becomes a prompt for the model to generate the code of the task.
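As a concrete sketch of this interaction (a hypothetical prompt and completion, not an actual recommendation produced by the service), the user types only the task name and the service suggests the module and its options:
\begin{verbatim}
    # Typed by the user (the prompt):
    - name: Create the /etc/app directory with mode 0755

    # Suggested by the service as the task body:
      ansible.builtin.file:
        path: /etc/app
        state: directory
        mode: "0755"
\end{verbatim}
The user can then accept the suggestion, for example by pressing the Tab key as in Figure~\ref{fig:ansible_lightspeed_flow}, or edit it further.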
Code generation models, and some of the systems that utilize them, have emerged as powerful tools for software developers and system analysts <|cite_start|> (Reference: Github Copilot: Durante el programa de ingeniería de sistemas se ha podido trabajar con muchas herramientas las cuales facilitan las enseñanzas de los temas que están enfocados al desarrollo del software al igual que el cloud computing. En esta ocasión se hablará de un nuevo método que salió recientemente, se llama GitHub Copilot; es un asistente que sirve para escribir código basado en machine learn, o como les gusta decirle en GitHub, es una aplicación de pair programming basado en IA (inteligencia artificial). Para comprender más este software, se relaciona más como un Intellisense (auto completa el código) pero más eficiente y va mejorando a medida que transcurren sus actualizaciones. Por otro lado, no solamente hace pequeñas sugerencias sino predice lo que quieres completar y te brinda funciones complementarias con múltiples variantes. Incluso pude llegar a escribir el código a partir de comentarios en el lenguaje que se requiera. En el aula ha sido de gran ayuda para explicar los temas que se enfocan en la inteligencia artificial y programación a nuevos estudiantes, debido a que con el lenguaje “común” se puede realizar una estructura similar al pseudocódigo. De igual manera, se puede comprender los modelos de servicios en la nube, como, por ejemplo: Software as service (SaaS), porque copilot recopila todos los repositorios públicos de GitHub que es similar a un data warehouse el cual abastece a la IA. El objetivo principal de la presentación es demostrar cómo se puede optimizar el desarrollo de un software con la ayuda de GitHub Copilot, también explicar sus ventajas al igual que su arquitectura en la nube, con el objetivo de que todos los ingenieros se puedan apoyar a través de esta herramienta. Esta nueva tecnología aún no ha completado su funcionalidad a los usuarios en general, sin embargo, para acceder se requiere solicitar su uso a GitHub Copilot. Actualmente muy pocos desarrolladores tienen posibilidades de acceso a este programa; se espera que en los próximos meses salga su producción a los demás usuarios. Hoy en día con este software durante 6 meses aproximadamente, se ha demostrado que casi ha mejorado desde sus etapas iniciales de lanzamiento, es por tal motivo que hay una mejor experiencia en el uso diario de la misma.) <|cite_end|>.
Studies show that AI tools for general-purpose programming languages improve productivity and also suggest that novice programmers may benefit more from such tools <|cite_start|> (Reference: The Impact of AI on Developer Productivity: Evidence from GitHub Copilot: Generative AI tools hold promise to increase human productivity. This paper presents results from a controlled experiment with GitHub Copilot, an AI pair programmer. Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. The treatment group, with access to the AI pair programmer, completed the task 55.8% faster than the control group. Observed heterogenous effects show promise for AI pair programmers to help people transition into software development careers.) <|cite_end|> <|cite_start|> (Reference: Grounded Copilot: How Programmers Interact with Code-Generating Models: Powered by recent advances in code-generating models, AI assistants like Github Copilot promise to change the face of programming forever. But what is this new face of programming? We present the first grounded theory analysis of how programmers interact with Copilot, based on observing 20 participants--with a range of prior experience using the assistant--as they solve diverse programming tasks across four languages. Our main finding is that interactions with programming assistants are bimodal: in acceleration mode, the programmer knows what to do next and uses Copilot to get there faster; in exploration mode, the programmer is unsure how to proceed and uses Copilot to explore their options. Based on our theory, we provide recommendations for improving the usability of future AI programming assistants.) <|cite_end|>. <|cite_start|> (Reference: Automated Code generation for Information Technology Tasks in YAML through Large Language Models: The recent improvement in code generation capabilities due to the use of large language models has mainly benefited general purpose programming languages. Domain specific languages, such as the ones used for IT Automation, have received far less attention, despite involving many active developers and being an essential component of modern cloud platforms. This work focuses on the generation of Ansible-YAML, a widely used markup language for IT Automation. We present Ansible Wisdom, a natural-language to Ansible-YAML code generation tool, aimed at improving IT automation productivity. Ansible Wisdom is a transformer-based model, extended by training with a new dataset containing Ansible-YAML. We also develop two novel performance metrics for YAML and Ansible to capture the specific characteristics of this domain. Results show that Ansible Wisdom can accurately generate Ansible script from natural language prompts with performance comparable or better than existing state of the art code generation models. In few-shot settings we asses the impact of training with Ansible, YAML data and compare with different baselines including Codex-Davinci-002. We also show that after finetuning, our Ansible specific model (BLEU: 66.67) can outperform a much larger Codex-Davinci-002 (BLEU: 50.4) model, which was evaluated in few shot settings.) <|cite_end|> showed that a relatively small model fine-tuned on high quality Ansible data can outperform a much larger and more general model on Natural Language to Ansible task generation on benchmark data.
To our knowledge, however, no published study has involved thousands of users of code assistants for domain-specific languages.
Ansible Lightspeed has been available since June~2023, and thousands of developers have interacted with the system and consented to provide valuable feedback.
In this paper we analyze the interactions and feedback of users of the closed beta and free Tech Preview versions of Ansible Lightspeed, who consented to share their data.
We first analyze usage trends on a temporal basis to understand when Ansible Lightspeed is used.
Next, we see whether users continue to use Ansible Lightspeed and, if so, how regularly.
Most importantly, we check to what degree users accept suggestions made by Ansible Lightspeed.
This acceptance rate is a crucial metric for comparison with other code completion tools in the industry.
It is also essential for us to understand how accurate the accepted suggestions were from the user's perspective.
This is why we perform an edit analysis of the accepted suggestions and derive a more stringent acceptance criterion, under which a suggestion counts as accepted only if the user does not edit most or critical parts of it after accepting it.
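To make these two notions concrete, a simple way to write them down is the following (an illustrative formalization only; the exact criteria we apply are described in Section~\ref{subsec: acceptance}):
\[
\mathrm{acceptance\ rate} \;=\; \frac{N_{\mathrm{accepted}}}{N_{\mathrm{shown}}},
\qquad
\mathrm{strict\ acceptance\ rate} \;=\; \frac{\bigl|\{\, s \in \mathcal{A} \;:\; \mathrm{sim}(s, s') \geq \tau \,\}\bigr|}{N_{\mathrm{shown}}},
\]
where $N_{\mathrm{shown}}$ and $N_{\mathrm{accepted}}$ are the numbers of shown and accepted suggestions, $\mathcal{A}$ is the set of accepted suggestions, $s'$ is the content that remains after the user's subsequent edits, and $\mathrm{sim}$ and $\tau$ are a placeholder similarity measure and threshold.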
We also analyze and share user feedback on the Ansible Lightspeed service.
The key highlights of our analysis are the following:
\begin{itemize}
\item We show that Lightspeed usage is highest during working hours on weekdays, suggesting that developers are probably using Ansible Lightspeed for work.
\item Using the N-day user retention metric (recalled in its classic form after this list), we show that a Lightspeed user is 4 times more likely than an average Android app user and 3 times more likely than an average iOS app user to keep using Lightspeed after 30 days.
\item To our knowledge, we are the first to share user retention statistics for a code completion tool; these can serve as a baseline for similar tools in the future.
\item We show that the acceptance rate of Ansible Lightspeed is higher than that of similar, but more general, IDE-based code completion tools.
\item We develop a much more stringent acceptance criterion and show that, even with this stringent criterion, Ansible Lightspeed's acceptance rate is higher than the simple acceptance rate of other code completion tools.
\item A majority of users rate Ansible Lightspeed 4 or above on a scale of 1 to 5, with 5 being the highest rating.
\end{itemize}
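For reference, the classic form of the N-day retention metric mentioned above (recalled here as standard background; rolling-retention variants also exist) is
\[
\mathrm{Retention}(N) \;=\; \frac{\bigl|\{\, u \in C \;:\; \mathrm{active}(u, N) \,\}\bigr|}{|C|},
\]
where $C$ is a cohort of new users and $\mathrm{active}(u, N)$ indicates that user $u$ is active again $N$ days after their first use.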
With this analysis, we demonstrate how domain expertise, knowledge of how the language is used in practice, and the application of best practices for system design and user experience can significantly improve recommendation results and user acceptance rates.
Because we target a well-defined use case with a domain-specific model, we are able to achieve these relatively high user acceptance rates with a small model of only 350 million parameters.
The paper makes the following contributions:
\begin{itemize}
\item A detailed description of the Ansible Lightspeed system and analysis framework~(Section~\ref{sec: lightspeed}).
\item An analysis, along multiple dimensions, of the usage of the service by thousands of users~(Section~\ref{sec: analysis}).
\item A much more accurate way to measure the rate at which users accept model code suggestions, obtained by accounting for user edits~(Section~\ref{subsec: acceptance}).
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{images/lightspeed_prompt.png}
\caption{User writes the prompt.}
\label{fig:ansible_lightspeed_flow_first}
\end{subfigure}
\vspace{1em}
\vfill
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{images/lightspeed_inline_suggestion.png}
\caption{Ansible Lightspeed provides an inline suggestion.}
\label{fig:ansible_lightspeed_flow_second}
\end{subfigure}
\vspace{1em}
\vfill
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{images/lightspeed_accepted_suggestion.png}
\caption{User accepts the suggestion by hitting the `Tab' key.}
\label{fig:ansible_lightspeed_flow_third}
\end{subfigure}
\caption{Ansible Lightspeed's workflow in the text editor.}
\label{fig:ansible_lightspeed_flow}
\end{figure}
Related Work
\label{sec: related}
LLMs have shown remarkable ability to generate code, with many recent models like GPT-4 <|cite_start|> (Reference: {{GPT-4: Большие языковые модели (LLM) продемонстрировали замечательные возможности в понимании и генерации естественного языка в различных областях, включая медицину. В статье представлена оценка GPT-4 на основе двух точек зрения на проблему применения этой языковой модели: разработчиков из OpenAI, Microsoft и пользователей-медиков из двух европейских проектов. За последние несколько лет LLM, обученные на массивных междисциплинарных корпусах, стали мощными строительными блоками при создании систем, ориентированных на решение конкретных задач. В статье рассматривается три задачи: медицинское образование, работоспособность ChatGPT-4 в клинике (консультации, записи стенограмм беседы врача и пациента), и конкретные уровни точности диагностики (разные области медицины). Ответ на поставленный вопрос о необходимости медицинского GPT есть в мире, -он положительный.) <|cite_end|>, Llama <|cite_start|> (Reference: Llama 2: Open Foundation and Fine-Tuned Chat Models: In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.) <|cite_end|>, StarCoder <|cite_start|> (Reference: StarCoder: may the source be with you!: The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40\% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.) <|cite_end|> and others performing very well on code evaluation benchmarks like HumanEval <|cite_start|> (Reference: Evaluating Large Language Models Trained on Code: We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. 
On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.) <|cite_end|> and MBPP <|cite_start|> (Reference: Program Synthesis with Large Language Models: This paper explores the limits of the current generation of large language models for program synthesis in general purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. Our benchmarks are designed to measure the ability of these models to synthesize short Python programs from natural language descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974 programming tasks, designed to be solvable by entry-level programmers. The MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914 problems that evaluate the ability of the models to synthesize code from more complex text. On both datasets, we find that synthesis performance scales log-linearly with model size. Our largest models, even without finetuning on a code dataset, can synthesize solutions to 59.6 percent of the problems from MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a held-out portion of the dataset improves performance by about 10 percentage points across most model sizes. On the MathQA-Python dataset, the largest fine-tuned model achieves 83.8 percent accuracy. Going further, we study the model's ability to engage in dialog about code, incorporating human feedback to improve its solutions. We find that natural language feedback from a human halves the error rate compared to the model's initial prediction. Additionally, we conduct an error analysis to shed light on where these models fall short and what types of programs are most difficult to generate. Finally, we explore the semantic grounding of these models by fine-tuning them to predict the results of program execution. We find that even our best models are generally unable to predict the output of a program given a specific input.) <|cite_end|>.
Much work has also been done on developing LLM-based coding assistants for IDEs.
These include tools that are internal to a company and accessible only to internal users, as well as those that are openly available. One such study analyzes CodeCompose, an internal code completion tool at Meta which is based on the InCoder LLM <|cite_start|> (Reference: InCoder: A Generative Model for Code Infilling and Synthesis: Code is seldom written in a single left-to-right pass and is instead repeatedly edited and refined. We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) as well as editing (via infilling). InCoder is trained to generate code files from a large corpus of permissively licensed code, where regions of code have been randomly masked and moved to the end of each file, allowing code infilling with bidirectional context. Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable re-naming. We find that the ability to condition on bidirectional context substantially improves performance on these tasks, while still performing comparably on standard program synthesis benchmarks in comparison to left-to-right only models pretrained at similar scale. The InCoder models and code are publicly released. https://sites.google.com/view/incoder-code-models) <|cite_end|>, and shows that the acceptance rate is about 22\% across 9 programming languages for approximately 16k users.
A similar study analyzes a code completion tool at Google and measures an acceptance rate of 25-34\% over 10k+ Google-internal developers. <|cite_start|> (Reference: IntelliCode Compose: Code Generation Using Transformer: In software development through integrated development environments (IDEs), code completion is one of the most widely used features. Nevertheless, majority of integrated development environments only support completion of methods and APIs, or arguments. In this paper, we introduce IntelliCode Compose $-$ a general-purpose multilingual code completion tool which is capable of predicting sequences of code tokens of arbitrary types, generating up to entire lines of syntactically correct code. It leverages state-of-the-art generative transformer model trained on 1.2 billion lines of source code in Python, $C\#$, JavaScript and TypeScript programming languages. IntelliCode Compose is deployed as a cloud-based web service. It makes use of client-side tree-based caching, efficient parallel implementation of the beam search decoder, and compute graph optimizations to meet edit-time completion suggestion requirements in the Visual Studio Code IDE and Azure Notebook. Our best model yields an average edit similarity of $86.7\%$ and a perplexity of 1.82 for Python programming language.) <|cite_end|> introduce and evaluate IntelliCode Compose on multiple programming languages, but they evaluate using edit distance against the ground truth and do not report any user acceptance metrics or the total number of users.
None of these works mention user sentiment or user retention over an extended period of time.
The openly available coding assistants would be GitHub Copilot <|cite_start|> (Reference: Github Copilot: Durante el programa de ingeniería de sistemas se ha podido trabajar con muchas herramientas las cuales facilitan las enseñanzas de los temas que están enfocados al desarrollo del software al igual que el cloud computing. En esta ocasión se hablará de un nuevo método que salió recientemente, se llama GitHub Copilot; es un asistente que sirve para escribir código basado en machine learn, o como les gusta decirle en GitHub, es una aplicación de pair programming basado en IA (inteligencia artificial). Para comprender más este software, se relaciona más como un Intellisense (auto completa el código) pero más eficiente y va mejorando a medida que transcurren sus actualizaciones. Por otro lado, no solamente hace pequeñas sugerencias sino predice lo que quieres completar y te brinda funciones complementarias con múltiples variantes. Incluso pude llegar a escribir el código a partir de comentarios en el lenguaje que se requiera. En el aula ha sido de gran ayuda para explicar los temas que se enfocan en la inteligencia artificial y programación a nuevos estudiantes, debido a que con el lenguaje “común” se puede realizar una estructura similar al pseudocódigo. De igual manera, se puede comprender los modelos de servicios en la nube, como, por ejemplo: Software as service (SaaS), porque copilot recopila todos los repositorios públicos de GitHub que es similar a un data warehouse el cual abastece a la IA. El objetivo principal de la presentación es demostrar cómo se puede optimizar el desarrollo de un software con la ayuda de GitHub Copilot, también explicar sus ventajas al igual que su arquitectura en la nube, con el objetivo de que todos los ingenieros se puedan apoyar a través de esta herramienta. Esta nueva tecnología aún no ha completado su funcionalidad a los usuarios en general, sin embargo, para acceder se requiere solicitar su uso a GitHub Copilot. Actualmente muy pocos desarrolladores tienen posibilidades de acceso a este programa; se espera que en los próximos meses salga su producción a los demás usuarios. Hoy en día con este software durante 6 meses aproximadamente, se ha demostrado que casi ha mejorado desde sus etapas iniciales de lanzamiento, es por tal motivo que hay una mejor experiencia en el uso diario de la misma.) <|cite_end|>, Tab9, Replit, Amazon CodeWhisperer and Ansible Lightspeed, among others.
Among these, GitHub Copilot has been widely studied since its release. <|cite_start|> (Reference: {An empirical evaluation of GitHub copilot's code suggestions: GitHub and OpenAI recently launched Copilot, an “AI pair programmer” that utilizes the power of Natural Language Processing, Static Analysis, Code Synthesis, and Artificial Intelligence. Given a natural language description of the target functionality, Copilot can generate corresponding code in several programming languages. In this paper, we perform an empirical study to evaluate the correctness and understandability of Copilot's suggested code. We use 33 LeetCode questions to create queries for Copilot in four different programming languages. We evaluate the correctness of the corresponding 132 Copilot solutions by running LeetCode's provided tests, and evaluate understandability using SonarQube's cyclomatic complexity and cognitive complexity metrics. We find that Copilot's Java suggestions have the highest correctness score (57%) while JavaScript is the lowest (27%). Overall, Copilot's suggestions have low complexity with no notable differences between the programming languages. We also find some potential Copilot shortcomings, such as generating code that can be further simplified and code that relies on undefined helper methods.) <|cite_end|> test Copilot on 33 LeetCode questions in four programming languages. <|cite_start|> (Reference: Expectation vs. Experience: Evaluating the Usability of Code
Generation Tools Powered by Large Language Models: Recent advances in Large Language Models (LLM) have made automatic code generation possible for real-world programming tasks in general-purpose programming languages such as Python. However, there are few human studies on the usability of these tools and how they fit the programming workflow. In this work, we conducted a within-subjects user study with 24 participants to understand how programmers use and perceive Copilot, a LLM-based code generation tool. We found that, while Copilot did not necessarily improve the task completion time or success rate, most participants preferred to use Copilot in daily programming tasks, since Copilot often provided a useful starting point and saved the effort of searching online. However, participants did face difficulties in understanding, editing, and debugging code snippets generated by Copilot, which significantly hindered their task-solving effectiveness. Finally, we highlighted several promising directions for improving the design of Copilot based on our observations and participants’ feedback.) <|cite_end|> perform a more user centred evaluation with 24 users to see how programmers use and perceive Copilot. <|cite_start|> (Reference: Productivity Assessment of Neural Code Completion: Neural code synthesis has reached a point where snippet generation is accurate enough to be considered for integration into human software development workflows. Commercial products aim to increase programmers' productivity, without being able to measure it directly. In this case study, we asked users of GitHub Copilot about its impact on their productivity, and sought to find a reflection of their perception in directly measurable user data. We find that the rate with which shown suggestions are accepted, rather than more specific metrics regarding the persistence of completions in the code over time, drives developers' perception of productivity.) <|cite_end|> perform an in-depth study of user acceptance, similar to what we do, but for multiple programming languages and show that the acceptance rate of Copilot suggestions for different user categories, for different programming languages, is approximately 20\%-30\%. <|cite_start|> (Reference: The Impact of AI on Developer Productivity: Evidence from GitHub Copilot: Generative AI tools hold promise to increase human productivity. This paper presents results from a controlled experiment with GitHub Copilot, an AI pair programmer. Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. The treatment group, with access to the AI pair programmer, completed the task 55.8% faster than the control group. Observed heterogenous effects show promise for AI pair programmers to help people transition into software development careers.) <|cite_end|> study the impact of Co-pilot on the speed of programmers and find that AI pair programmers are 55.8\% faster in implementing an HTTP server in JavaScript.
However, they do not provide information about the acceptance rate of AI pair programmers. <|cite_start|> (Reference: Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT: Context: AI-assisted code generation tools have become increasingly prevalent in software engineering, offering the ability to generate code from natural language prompts or partial code inputs. Notable examples of these tools include GitHub Copilot, Amazon CodeWhisperer, and OpenAI's ChatGPT. Objective: This study aims to compare the performance of these prominent code generation tools in terms of code quality metrics, such as Code Validity, Code Correctness, Code Security, Code Reliability, and Code Maintainability, to identify their strengths and shortcomings. Method: We assess the code generation capabilities of GitHub Copilot, Amazon CodeWhisperer, and ChatGPT using the benchmark HumanEval Dataset. The generated code is then evaluated based on the proposed code quality metrics. Results: Our analysis reveals that the latest versions of ChatGPT, GitHub Copilot, and Amazon CodeWhisperer generate correct code 65.2%, 46.3%, and 31.1% of the time, respectively. In comparison, the newer versions of GitHub CoPilot and Amazon CodeWhisperer showed improvement rates of 18% for GitHub Copilot and 7% for Amazon CodeWhisperer. The average technical debt, considering code smells, was found to be 8.9 minutes for ChatGPT, 9.1 minutes for GitHub Copilot, and 5.6 minutes for Amazon CodeWhisperer. Conclusions: This study highlights the strengths and weaknesses of some of the most popular code generation tools, providing valuable insights for practitioners. By comparing these generators, our results may assist practitioners in selecting the optimal tool for specific tasks, enhancing their decision-making process.) <|cite_end|> perform a comparative study of GitHub Copilot, Amazon CodeWhisperer and ChatGPT <|cite_start|> (Reference: Explicitly Introducing ChatGPT into First-year Programming Practice: Challenges and Impact: ChatGPT has recently emerged to aid in computer programming education due to its cutting-edge functionality of generating program code, debugging, etc. This research firstly focused on what the ethical considerations and solutions are for the first-year IT students who use ChatGPT to write computer programs in an integrated assignment. And then it turned to investigate what impact ChatGPT has on the programming competencies and learning outcomes of students compared to those who do not use ChatGPT. To ensure students use ChatGPT ethically, guidance was provided together with a declaration form of ethically using ChatGPT in each phase of the assignment. Next, we collected and analyzed a survey and their declaration from students and compared student effort, time spent, and performance outcomes from those who were using and without using ChatGPT. Based on the findings, we concluded that although ChatGPT provides an opportunity to the first-year students to learn programming in the way of analysis, synthesis, and evaluation, many students still prefer the conventional way of learning programming in terms of comprehension and application. We argued that since our students in the programming course are always from different academic background levels, we would continue to use both ChatGPT and conventional eLearning resources to meet different learning requirements.)
<|cite_end|> (sibling model of InstructGPT <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|>) in terms of code quality metrics like code correctness, code security, code reliability and code maintainability but do not consider acceptance of code by real users.
Many studies of coding assistants do not measure their performance in terms of acceptance rate.
Among those that do, we found that Ansible Lightspeed has the highest initial acceptance rate, at 48.6\%.
We did not find any user retention figures for any of the existing code completion systems or for the VS Code plugins in the VS Code marketplace.
In the absence of more relevant baselines, we compare Lightspeed's retention rate with that of Android and iOS apps. <|paper_end|> | [
"<|reference_start|> StarCoder: may the source be with you!: The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40\\% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license. <|reference_end|>",
"<|reference_start|> Evaluating Large Language Models Trained on Code: We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics. <|reference_end|>",
"<|reference_start|> IntelliCode Compose: Code Generation Using Transformer: In software development through integrated development environments (IDEs), code completion is one of the most widely used features. Nevertheless, majority of integrated development environments only support completion of methods and APIs, or arguments. In this paper, we introduce IntelliCode Compose $-$ a general-purpose multilingual code completion tool which is capable of predicting sequences of code tokens of arbitrary types, generating up to entire lines of syntactically correct code. It leverages state-of-the-art generative transformer model trained on 1.2 billion lines of source code in Python, $C\\#$, JavaScript and TypeScript programming languages. IntelliCode Compose is deployed as a cloud-based web service. It makes use of client-side tree-based caching, efficient parallel implementation of the beam search decoder, and compute graph optimizations to meet edit-time completion suggestion requirements in the Visual Studio Code IDE and Azure Notebook. Our best model yields an average edit similarity of $86.7\\%$ and a perplexity of 1.82 for Python programming language. <|reference_end|>",
"<|reference_start|> Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent. <|reference_end|>"
] | [
7,
8,
11,
19
] | {"<|cite_2|>": "arxiv-502133", "<|multi_cite_3_1|>": "ss-748001", "<|multi_cite_4_1|>": "arxiv-481222", "<|multi_cite_4_2|>": "arxiv-430575", "<|cite_19|>": "arxiv-502133", "<|cite_6|>": "ss-1343995", "<|cite_7|>": "arxiv-524224", "<|cite_8|>": "arxiv-503777", "<|cite_9|>": "arxiv-353610", "<|cite_10|>": "arxiv-361418", "<|cite_11|>": "arxiv-412738", "<|cite_22|>": "arxiv-265886", "<|cite_12|>": "ss-748001", "<|cite_23|>": "ss-1198982", "<|cite_24|>": "ss-821797", "<|cite_25|>": "arxiv-419295", "<|cite_26|>": "arxiv-481222", "<|cite_27|>": "arxiv-498773", "<|cite_17|>": "ss-680740", "<|cite_18|>": "arxiv-403294"} |
2011.04006 | <|paper_start|> Title: Long Range Arena: A Benchmark for Efficient Transformers
Abstract: Long Range Arena: A Benchmark for Efficient Transformers: Transformers do not scale very well to long sequence lengths largely because of quadratic self-attention complexity. In the recent months, a wide spectrum of efficient, fast Transformers have been proposed to tackle this problem, more often than not claiming superior or comparable model quality to vanilla Transformer models. To this date, there is no well-established consensus on how to evaluate this class of models. Moreover, inconsistent benchmarking on a wide spectrum of tasks and datasets makes it difficult to assess relative model quality amongst many models. This paper proposes a systematic and unified benchmark, LRA, specifically focused on evaluating model quality under long-context scenarios. Our benchmark is a suite of tasks consisting of sequences ranging from $1K$ to $16K$ tokens, encompassing a wide range of data types and modalities such as text, natural, synthetic images, and mathematical expressions requiring similarity, structural, and visual-spatial reasoning. We systematically evaluate ten well-established long-range Transformer models (Reformers, Linformers, Linear Transformers, Sinkhorn Transformers, Performers, Synthesizers, Sparse Transformers, and Longformers) on our newly proposed benchmark suite. LRA paves the way towards better understanding this class of efficient Transformer models, facilitates more research in this direction, and presents new challenging tasks to tackle. Our benchmark code will be released at https://github.com/google-research/long-range-arena.
Introduction
Transformers <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> are ubiquitously state-of-the-art across many modalities, from language <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|> <|cite_start|> (Reference: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. 
To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.) <|cite_end|> <|cite_start|> (Reference: Generating Long Sequences with Sparse Transformers: Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.) <|cite_end|> to images <|cite_start|> (Reference: ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks: We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, pro-cessing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.) <|cite_end|> to protein sequences <|cite_start|> (Reference: Biological structure and function emerge from scaling unsupervised
learning to 250 million protein sequences: Significance Learning biological properties from sequence data is a logical step toward generative and predictive artificial intelligence for biology. Here, we propose scaling a deep contextual language model with unsupervised learning to sequences spanning evolutionary diversity. We find that without prior knowledge, information emerges in the learned representations on fundamental properties of proteins such as secondary structure, contacts, and biological activity. We show the learned representations are useful across benchmarks for remote homology detection, prediction of secondary structure, long-range residue–residue contacts, and mutational effect. Unsupervised representation learning enables state-of-the-art supervised prediction of mutational effect and secondary structure and improves state-of-the-art features for long-range contact prediction. In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.) <|cite_end|>. A common weakness of Transformers is their quadratic memory complexity within the self-attention mechanism that restricts their potential application to domains requiring longer sequence lengths. To date, a dizzying number of efficient Transformer models (\textit{`xformers'}) have been proposed to tackle this problem <|cite_start|> (Reference: Generating Wikipedia by Summarizing Long Sequences: We show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.) 
<|cite_end|> <|cite_start|> (Reference: Reformer: The Efficient Transformer: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.) <|cite_end|> <|cite_start|> (Reference: Linformer: Self-Attention with Linear Complexity: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.) <|cite_end|> <|cite_start|> (Reference: Sparse Sinkhorn Attention: We propose Sparse Sinkhorn Attention, a new efficient and sparse method for learning to attend. Our method is based on differentiable sorting of internal representations. Concretely, we introduce a meta sorting network that learns to generate latent permutations over sequences. Given sorted sequences, we are then able to compute quasi-global attention with only local windows, improving the memory efficiency of the attention module. To this end, we propose new algorithmic innovations such as Causal Sinkhorn Balancing and SortCut, a dynamic sequence truncation method for tailoring Sinkhorn Attention for encoding and/or decoding purposes. Via extensive experiments on algorithmic seq2seq sorting, language modeling, pixel-wise image generation, document classification and natural language inference, we demonstrate that our memory efficient Sinkhorn Attention method is competitive with vanilla attention and consistently outperforms recently proposed efficient Transformer models such as Sparse Transformers.) <|cite_end|> <|cite_start|> (Reference: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention: Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from $\mathcal{O}\left(N^2\right)$ to $\mathcal{O}\left(N\right)$, where $N$ is the sequence length. 
We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.) <|cite_end|>. Many of these models demonstrate comparable performance to the vanilla Transformer model while successfully reducing the memory complexity of the self-attention mechanism. An overview of this research area can be found in <|cite_start|> (Reference: Efficient Transformers: A Survey: Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of "X-former" models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored "X-former" models, providing an organized and comprehensive overview of existing work and models across multiple domains.) <|cite_end|>.
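To make the memory bottleneck concrete, the following minimal sketch shows single-head softmax self-attention; it is illustrative only, written in JAX (the language of our released framework) rather than taken from any of the implementations cited above. The explicitly materialized $n \times n$ score matrix is the term that grows quadratically with sequence length $n$.
\begin{verbatim}
# Illustrative sketch only: single-head softmax self-attention.
# The [n, n] `scores` matrix is the O(n^2) memory term that
# efficient Transformer variants try to avoid materializing.
import jax.numpy as jnp
from jax.nn import softmax

def vanilla_self_attention(q, k, v):
    # q, k, v: [n, d] query/key/value matrices for one head
    d = q.shape[-1]
    scores = (q @ k.T) / jnp.sqrt(d)    # [n, n] -- quadratic in sequence length
    weights = softmax(scores, axis=-1)  # row-wise attention distributions
    return weights @ v                  # [n, d] contextualized outputs
\end{verbatim}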
Comparing the evaluation and experimental setup of many of these papers, we can make the following observations. Firstly, there is no unifying consensus on what makes an acceptable test bed for benchmarking efficient Transformers. There is also a large diversity in the types of tasks adopted---every single model is evaluated on a different set of tasks and datasets, which makes comparison of different models as well as an assessment of their relative strengths and weaknesses difficult. Secondly, the benchmarks used for evaluation are often arbitrarily chosen, without much consideration to whether the task is suitable for evaluating long-range modeling. Thirdly, many papers tend to conflate the effectiveness of the inductive bias with the benefits of pretraining <|cite_start|> (Reference: ETC: encoding long and structured data in transformers: Transformer-based models have pushed the state of the art in many natural language processing tasks. However, one of their main limitations is the quadratic computational and memory cost of the standard attention mechanism. In this paper, we present a new family of Transformer models, which we call the Extended Transformer Construction (ETC), that allows for significant increases in input sequence length by introducing a new global-local attention mechanism between a global memory and the standard input tokens. We also show that combining global-local attention with relative position encodings allows ETC to handle structured data with ease. Empirical results on the Natural Questions data set show the promise of the approach.) <|cite_end|> <|cite_start|> (Reference: Big Bird: Transformers for Longer Sequences: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.) <|cite_end|> <|cite_start|> (Reference: Linformer: Self-Attention with Linear Complexity: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. 
The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.) <|cite_end|>, which tends to obfuscate the true value of the architecture. Pretraining itself is a computationally expensive endeavour, and decoupling inductive bias research from pretraining would make xformer research more accessible.
In this paper, we propose a new benchmark, \textit{\lra} (LRA), for the purpose of benchmarking sequence models under the long-context scenario. We design a benchmark suite comprised of both synthetic probing tasks and real-world tasks and provide relative comparisons for \textbf{ten} recently proposed efficient Transformer models including Sparse Transformers <|cite_start|> (Reference: Generating Long Sequences with Sparse Transformers: Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.) <|cite_end|>, Reformer <|cite_start|> (Reference: Reformer: The Efficient Transformer: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.) <|cite_end|>, Linformer <|cite_start|> (Reference: Linformer: Self-Attention with Linear Complexity: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.) <|cite_end|>, Longformer <|cite_start|> (Reference: Longformer: The Long-Document Transformer: Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. 
To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.) <|cite_end|>, Sinkhorn Transformers <|cite_start|> (Reference: Sparse Sinkhorn Attention: We propose Sparse Sinkhorn Attention, a new efficient and sparse method for learning to attend. Our method is based on differentiable sorting of internal representations. Concretely, we introduce a meta sorting network that learns to generate latent permutations over sequences. Given sorted sequences, we are then able to compute quasi-global attention with only local windows, improving the memory efficiency of the attention module. To this end, we propose new algorithmic innovations such as Causal Sinkhorn Balancing and SortCut, a dynamic sequence truncation method for tailoring Sinkhorn Attention for encoding and/or decoding purposes. Via extensive experiments on algorithmic seq2seq sorting, language modeling, pixel-wise image generation, document classification and natural language inference, we demonstrate that our memory efficient Sinkhorn Attention method is competitive with vanilla attention and consistently outperforms recently proposed efficient Transformer models such as Sparse Transformers.) <|cite_end|>, Performers <|cite_start|> (Reference: Rethinking Attention with Performers: We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. 
We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.) <|cite_end|>, Synthesizers <|cite_start|> (Reference: Synthesizer: Rethinking Self-Attention in Transformer Models: The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose \textsc{Synthesizer}, a model that learns synthetic attention weights without token-token interactions. In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that simple Random Synthesizer is not only $60\%$ faster but also improves perplexity by a relative $3.5\%$. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding only tasks.) <|cite_end|>, Linear Transformers <|cite_start|> (Reference: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention: Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from $\mathcal{O}\left(N^2\right)$ to $\mathcal{O}\left(N\right)$, where $N$ is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.) <|cite_end|>, and BigBird <|cite_start|> (Reference: Big Bird: Transformers for Longer Sequences: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. 
The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.) <|cite_end|>. This is the most comprehensive and extensive side-by-side evaluation of this class of models.
While the focus of this benchmark is the ability of these architectures to reason in long-context scenarios, we are also fundamentally interested in understanding the capabilities and properties of these xformer architectures when exposed to different types of data and conditions. Hence, our benchmark is purposefully designed to be capability probing, i.e., we select datasets and tasks with certain innate structure. For example, can these architectures model long sequences that are intrinsically hierarchical or that contain some form of spatial structure? In general, we are especially interested in the relative performance of these xformer models across diverse circumstances. We hope that understanding these better will inspire research on more efficient architectures in the future. While the focus of this paper is on efficient Transformer models, our benchmark is model-agnostic and can also serve as a benchmark for long-range sequence modeling.
Aside from comparing the quality of these models, we also conduct an extensive analysis of their efficiency and memory usage. We believe such a side-by-side performance benchmark will be valuable to the community, providing deeper insight into the practical efficiency of these methods. Overall, we propose a unified framework for enabling easy side-by-side comparisons of efficient Transformer models and, more broadly, long-range sequence models in general. Our framework, which we open source, is written in JAX/FLAX\footnote{\url{https://github.com/google/flax}}.
Related Work
\subsection{Efficient Transformers}
The pervasiveness of Transformer models, along with their well-known trait of being memory-intensive, has spurred a large number of innovations on this front. Early work in this area has typically considered a fixed pattern (local window) approach <|cite_start|> (Reference: Generating Wikipedia by Summarizing Long Sequences: We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.)
<|cite_end|> <|cite_start|> (Reference: Axial Attention in Multidimensional Transformers: We propose Axial Transformers, a self-attention-based autoregressive model for images and other data organized as high dimensional tensors. Existing autoregressive models either suffer from excessively large computational resource requirements for high dimensional data, or make compromises in terms of distribution expressiveness or ease of implementation in order to decrease resource requirements. Our architecture, by contrast, maintains both full expressiveness over joint distributions over data and ease of implementation with standard deep learning frameworks, while requiring reasonable memory and computation and achieving state-of-the-art results on standard generative modeling benchmarks. Our models are based on axial attention, a simple generalization of self-attention that naturally aligns with the multiple dimensions of the tensors in both the encoding and the decoding settings. Notably the proposed structure of the layers allows for the vast majority of the context to be computed in parallel during decoding without introducing any independence assumptions. This semi-parallel structure goes a long way to making decoding from even a very large Axial Transformer broadly applicable. We demonstrate state-of-the-art results for the Axial Transformer on the ImageNet-32 and ImageNet-64 image benchmarks as well as on the BAIR Robotic Pushing video benchmark. We open source the implementation of Axial Transformers.) <|cite_end|> <|cite_start|> (Reference: Longformer: The Long-Document Transformer: Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.) <|cite_end|> <|cite_start|> (Reference: Big Bird: Transformers for Longer Sequences: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. 
Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.) <|cite_end|>, learned patterns <|cite_start|> (Reference: Reformer: The Efficient Transformer: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.) <|cite_end|> <|cite_start|> (Reference: Efficient Content-Based Sparse Attention with Routing Transformers: Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to $O\left(n^{1.5}d\right)$ from $O\left(n^2d\right)$ for sequence length $n$ and hidden dimension $d$. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Additionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Routing Transformer model trained on sequences of length 8192.) <|cite_end|>, and recent models based on kernels <|cite_start|> (Reference: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention: Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. 
To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from $\mathcal{O}\left(N^2\right)$ to $\mathcal{O}\left(N\right)$, where $N$ is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.) <|cite_end|> <|cite_start|> (Reference: Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers: Transformer models have achieved state-of-the-art results across a diverse range of domains. However, concern over the cost of training the attention mechanism to learn complex dependencies between distant inputs continues to grow. In response, solutions that exploit the structure and sparsity of the learned attention matrix have blossomed. However, real-world applications that involve long sequences, such as biological sequence analysis, may fall short of meeting these assumptions, precluding exploration of these models. To address this challenge, we present a new Transformer architecture, Performer, based on Fast Attention Via Orthogonal Random features (FAVOR). Our mechanism scales linearly rather than quadratically in the number of tokens in the sequence, is characterized by sub-quadratic space complexity and does not incorporate any sparsity pattern priors. Furthermore, it provides strong theoretical guarantees: unbiased estimation of the attention matrix and uniform convergence. It is also backwards-compatible with pre-trained regular Transformers. We demonstrate its effectiveness on the challenging task of protein sequence modeling and provide detailed theoretical analysis.) <|cite_end|> or low-rank approximations <|cite_start|> (Reference: Linformer: Self-Attention with Linear Complexity: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.) <|cite_end|>. For the sake of brevity, we refer interested readers to <|cite_start|> (Reference: Efficient Transformers: A Survey: Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. 
Recently, a dizzying number of "X-former" models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored "X-former" models, providing an organized and comprehensive overview of existing work and models across multiple domains.) <|cite_end|> for a detailed survey of this line of research.
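To give a concrete feel for the kernel-based approach mentioned above, the sketch below implements non-causal linear attention in plain NumPy: a feature map $\phi$ (here $\mathrm{elu}(x)+1$, following the linear-attention work cited above) replaces the softmax, and associativity lets $\phi(Q)\left(\phi(K)^\top V\right)$ be computed in time linear in the sequence length. This is an illustrative sketch only; all names are ours and no particular codebase is assumed.
\begin{verbatim}
import numpy as np

def elu_feature_map(x):
    # elu(x) + 1: a positive feature map, so the normaliser below stays positive
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Non-causal linear attention: O(n * d^2) instead of O(n^2 * d)."""
    Qf = elu_feature_map(Q)              # (n, d)
    Kf = elu_feature_map(K)              # (n, d)
    KV = Kf.T @ V                        # (d, d_v): keys/values summarised once
    Z  = Qf @ Kf.sum(axis=0)             # (n,): per-query normalisation
    return (Qf @ KV) / (Z[:, None] + eps)

# toy usage: 8 tokens, head dimension 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
out = linear_attention(Q, K, V)          # shape (8, 4)
\end{verbatim}
Depending on the method, the full softmax attention matrix is either approximated (e.g., with random features) or, as here, replaced outright by a kernelised form, which is what removes the quadratic cost.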
\subsection{Existing Benchmarks}
\paragraph{Generative Modeling / Language Modeling} This generative modeling task requires predicting the next character, word, or pixel and is a staple in xformer evaluations <|cite_start|> (Reference: Efficient Content-Based Sparse Attention with Routing Transformers: Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to $O\left(n^{1.5}d\right)$ from $O\left(n^2d\right)$ for sequence length $n$ and hidden dimension $d$. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Additionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Routing Transformer model trained on sequences of length 8192.) <|cite_end|> <|cite_start|> (Reference: Reformer: The Efficient Transformer: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.) <|cite_end|>. However, it has been debated how much long-range signal such tasks actually encode.
LSTM language models augmented with attention have been shown to rarely attend beyond seven preceding words of context and samples from LSTM language models are known to quickly devolve into generic text. On the other hand, recent models such as the Transformer-XL have been observed to be sensitive to a context of around 900 tokens and samples from large-scale models <|cite_start|> (Reference: Language
models are unsupervised multitask learners: Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.) <|cite_end|> maintain a consistent theme over much longer sequences. Even such recent models, however, can be improved by limiting the range of attention. In sum, while standard language modelling datasets contain \emph{some} long-range signal, which is required to perform long-range coreference resolution, reasoning with events, discourse understanding, etc. <|cite_start|> (Reference: Transfer Learning in Natural Language Processing: The classic supervised machine learning paradigm is based on learning in isolation, a single predictive model for a task using a single dataset. This approach requires a large number of training examples and performs best for well-defined and narrow tasks. Transfer learning refers to a set of methods that extend this approach by leveraging data from additional domains or tasks to train a model with better generalization properties. Over the last two years, the field of Natural Language Processing (NLP) has witnessed the emergence of several transfer learning methods and architectures which significantly improved upon the state-of-the-art on a wide range of NLP tasks. These improvements together with the wide availability and ease of integration of these methods are reminiscent of the factors that led to the success of pretrained word embeddings and ImageNet pretraining in computer vision, and indicate that these methods will likely become a common tool in the NLP landscape as well as an important research direction. We will present an overview of modern transfer learning methods in NLP, how models are pre-trained, what information the representations they learn capture, and review examples and case studies on how these models can be integrated and adapted in downstream NLP tasks.) <|cite_end|> it seems to be overshadowed by the much stronger signal of short-term word co-occurrences and is thus difficult to evaluate.\footnote{Datasets such as LAMBADA <|cite_start|> (Reference: Proceedings of ACL 2016: ) <|cite_end|> more explicitly test for context understanding but are still restricted to comparatively short contexts of five sentences on average.}
\paragraph{Question Answering} Another commonly used evaluation task is question answering~\citep[QA;][]{zaheer2020big}. Open-domain QA in particular typically requires the model to answer questions based on long contexts such as entire Wikipedia documents <|cite_start|> (Reference: Proceedings of ACL 2017, System Demonstrations: ) <|cite_end|> or even books <|cite_start|> (Reference: The NarrativeQA Reading Comprehension Challenge: Reading comprehension (RC)---in contrast to information retrieval---requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.) <|cite_end|>. Other datasets are explicitly designed to require multiple `hops' of reasoning. Successful approaches are often highly engineered, computationally expensive systems that require pre-training and a separate retrieval model.
\paragraph{Natural Language Understanding / GLUE tasks} Evaluation on natural language understanding (NLU) tasks is also common <|cite_start|> (Reference: Linformer: Self-Attention with Linear Complexity: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.) <|cite_end|>. Examples in most of these datasets such as MultiNLI and SST <|cite_start|> (Reference: Proceedings of the EMNLP 2013: ) <|cite_end|> consist of single sentences and less than $100$ tokens on average. <|paper_end|> | [
"<|reference_start|> Generating Long Sequences with Sparse Transformers: Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \\sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more. <|reference_end|>",
"<|reference_start|> Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention: Transformers achieve remarkable performance in several tasks but due to their quadratic complexity, with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from $\\mathcal{O}\\left(N^2\\right)$ to $\\mathcal{O}\\left(N\\right)$, where $N$ is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences. <|reference_end|>",
"<|reference_start|> Rethinking Attention with Performers: We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers. <|reference_end|>",
"<|reference_start|> Linformer: Self-Attention with Linear Complexity: Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \\textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient. <|reference_end|>"
] | [
3,
10,
20,
43
] | {"<|cite_1|>": "arxiv-126595", "<|multi_cite_2_1|>": "arxiv-175879", "<|multi_cite_2_2|>": "arxiv-230407", "<|multi_cite_2_3|>": "arxiv-201219", "<|multi_cite_3_2|>": "arxiv-217761", "<|cite_4|>": "ss-1186204", "<|multi_cite_5_1|>": "arxiv-146761", "<|multi_cite_5_2|>": "arxiv-243154", "<|multi_cite_5_3|>": "arxiv-270333", "<|multi_cite_5_4|>": "arxiv-250475", "<|multi_cite_5_5|>": "arxiv-275178", "<|cite_6|>": "arxiv-289883", "<|multi_cite_7_1|>": "ss-985259", "<|multi_cite_7_2|>": "arxiv-281210", "<|multi_cite_7_3|>": "arxiv-270333", "<|cite_8|>": "arxiv-201219", "<|cite_9|>": "arxiv-243154", "<|cite_10|>": "arxiv-270333", "<|cite_11|>": "arxiv-258732", "<|cite_12|>": "arxiv-250475", "<|cite_13|>": "arxiv-293026", "<|cite_14|>": "arxiv-263025", "<|cite_15|>": "arxiv-275178", "<|cite_16|>": "arxiv-281210", "<|multi_cite_17_1|>": "arxiv-146761", "<|multi_cite_17_2|>": "arxiv-148539", "<|multi_cite_18_1|>": "arxiv-201219", "<|multi_cite_18_2|>": "arxiv-241148", "<|multi_cite_18_3|>": "arxiv-258732", "<|multi_cite_18_4|>": "arxiv-281210", "<|multi_cite_19_1|>": "arxiv-243154", "<|multi_cite_19_2|>": "arxiv-253465", "<|multi_cite_20_1|>": "arxiv-275178", "<|multi_cite_20_2|>": "arxiv-269761", "<|cite_21|>": "arxiv-270333", "<|cite_22|>": "arxiv-289883", "<|multi_cite_23_1|>": "arxiv-253465", "<|multi_cite_23_2|>": "arxiv-243154", "<|cite_27|>": "ss-1237666", "<|cite_29|>": "ss-1513926", "<|cite_30|>": "ss-1315737", "<|multi_cite_31_1|>": "ss-2352212", "<|cite_32|>": "arxiv-143524", "<|cite_35|>": "arxiv-270333", "<|cite_37|>": "ss-1315738"} |
1806.08135 | <|paper_start|> Title: Hardness and algorithmic results for the approximate cover problem
Abstract: Hardness and algorithmic results for the approximate cover problem: In CPM 2017, Amir et al. introduced a problem named \emph{approximate string cover} (\textbf{ACP}), motivated by many applications including coding and automata theory, formal language theory, combinatorics and molecular biology. A \emph{cover} of a string $T$ is a string $C$ for which every letter of $T$ lies within some occurrence of $C$. The input of the \textbf{ACP} problem consists of a string $T$ and an integer $m$ (less than the length of $T$), and the goal is to find a string $C$ of length $m$ that covers a string $T'$ which is as close to $T$ as possible (under some predefined distance). Amir et al. study the problem for the Hamming distance. In this paper we continue the work of Amir et al. and show the following results: (i) We give an approximation algorithm for the \textbf{ACP} with an approximation ratio of $\sqrt{OPT}$, where $OPT$ is the size of the optimal solution. (ii) We provide an FPT algorithm with respect to the alphabet size. (iii) The \textbf{ACP} problem naturally extends to pseudometrics; moreover, we show that for a family of pseudometrics, which we term \emph{homogenous additive pseudometrics}, the complexity of \textbf{ACP} remains unchanged. (iv) We partially answer an open problem of Amir et al. and show that the Hamming distance over an unbounded alphabet is equivalent to an extended metric over a fixed-size alphabet.
Introduction
\paragraph*{\sc Motivation.}
Redundancy is a common trait of all natural data and was intensely studied over the years for its descriptive capabilities <|cite_start|> (Reference: Kolmogorov Complexity and its Applications: ) <|cite_end|> <|cite_start|> (Reference: Some Remarks on Almost Periodic Sequences and Languages: Almost periodicity has been considered in Formal Language Theory in connection with some topics in Symbolic Dynamics. In (P\u{a}un and Marcus, Bulletin of EATCS 53 (1994)) some problems concerning this property are raised. For instance it is asked whether there exists some almost periodic word $\alpha$ such that $Sub(\alpha)$, the set of its finite factors, is context-free non-regular. We answer negatively (even in a stronger form) this question, as well as discussing other related topics.) <|cite_end|>. Errors can occur at any point in the data manipulation process, but by the use of redundancy they may be detected and, perhaps, corrected before propagation.
Consider the transmission of a message over a radio frequency. Since we transmit over radio, we must use a digital-to-analog converter that modulates our signal in amplitude and/or phase. In our example, we consider \emph{amplitude shift keying} (see, e.g., <|cite_start|> (Reference: Digital Communications with Emphasis on Data Modems: Theory, Analysis, Design, Simulation, Testing, and Applications: This book discusses the design, implementation and performance verification of waveforms and algorithms appropriate for digital data modulation and demodulation in modern communication systems. Using a building-block approach, the author provides an introductory to the advanced understanding of acquisition and data detection using source and executable simulation code to validate the communication system performance with respect to theory and design specifications. The author focuses on theoretical analysis, algorithm design, firmware and software designs and subsystem and system testing. This book treats system designs with a variety of channel characteristics from very low to optical frequencies. This book offers system analysis and subsystem implementation options for acquisition and data detection appropriate to the channel conditions and system specifications, and provides test methods for demonstrating system performance. This book also:) <|cite_end|>). At the other end, the signal must be converted back, but we must also check for transmission errors. If the channel is not too noisy, we may round each received amplitude to the nearest level in the amplitude spectrum we are using. However, we must at least be able to tell when the channel is too noisy. Since the signal we sent is smooth and periodic, we may smooth the received data and identify interference as unnatural spikes in the input. This, however, only accounts for major interference, and we cannot do more at the physical level, since we may not assume smoothness of the sent data itself and must instead rely on redundancy at some higher level, which is no longer agnostic to the message's form.
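To make the running example tangible, here is a deliberately simplified Python sketch of amplitude shift keying for the two letters $a$ and $b$; the amplitude levels, symbol length and decision threshold are arbitrary illustrative choices of ours, not taken from the cited textbook, and no channel noise is modelled.
\begin{verbatim}
import math

AMPL = {'a': 2.0, 'b': 1.0}    # amplitude level per symbol (arbitrary choices)
SAMPLES = 64                   # samples per symbol
CYCLES = 2                     # carrier cycles per symbol

def modulate(msg):
    signal = []
    for ch in msg:
        a = AMPL[ch]
        signal += [a * math.sin(2 * math.pi * CYCLES * i / SAMPLES)
                   for i in range(SAMPLES)]
    return signal

def demodulate(signal):
    out = []
    for s in range(0, len(signal), SAMPLES):
        peak = max(abs(x) for x in signal[s:s + SAMPLES])
        out.append('a' if peak > 1.5 else 'b')   # threshold between the two levels
    return ''.join(out)

assert demodulate(modulate('abaabaaba')) == 'abaabaaba'
\end{verbatim}
On a clean channel the thresholding recovers the message exactly; the point of the discussion above is precisely what happens when it does not.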
Periodicity is a very important phenomenon when analyzing physical data such as an analogue signal. In general, natural data is very redundant or repetitive and exhibits some key patterns or regularities <|cite_start|> (Reference: Fractals in Biology and Medicine: ) <|cite_end|> <|cite_start|> (Reference: Cyclical patterns in risk indicators based on financial market infrastructure transaction data: This paper studies cyclical patterns in risk indicators based on TARGET2 transaction data. These indicators provide information on network properties, operational aspects and links to ancillary systems. We compare the performance of two different ARIMA dummy models to the TBATS state space model. The results show that the forecasts of the ARIMA dummy models perform better than the TBATS model. We also find that there is no clear difference between the performances of the two ARIMA dummy models. The model with the fewest explanatory variables is therefore preferred.) <|cite_end|>. Periodicity itself has been thoroughly studied in various fields such as Signal Processing <|cite_start|> (Reference: Periodicity transforms: This paper presents a method of detecting periodicities in data that exploits a series of projections onto "periodic subspaces". The algorithm finds its own set of nonorthogonal basis elements (based on the data), rather than assuming a fixed predetermined basis as in the Fourier, Gabor, and wavelet transforms. A major strength of the approach is that it is linear-in-period rather than linear-in-frequency or linear-in-scale. The algorithm is derived and analyzed, and its output is compared to that of the Fourier transform in a number of examples. One application is the finding and grouping of rhythms in a musical score, another is the separation of periodic waveforms with overlapping spectra, and a third is the finding of patterns in astronomical data. Examples demonstrate both the strengths and weaknesses of the method.) <|cite_end|>, Bioinformatics <|cite_start|> (Reference: Quaternionic periodicity transform: an algebraic solution to the tandem repeat detection problem: MOTIVATION
One of the main tasks of DNA sequence analysis is identification of repetitive patterns. DNA symbol repetitions play a key role in a number of applications, including prediction of gene and exon locations, identification of diseases, reconstruction of human evolutionary history and DNA forensics.
RESULTS
A new approach towards identification of tandem repeats in DNA sequences is proposed. The approach is a refinement of previously considered method, based on the complex periodicity transform. The refinement is obtained, among others, by mapping of DNA symbols to pure quaternions. This mapping results in an enhanced, symbol-balanced sensitivity of the transform to DNA patterns, and an unambiguous threshold selection criterion. Computational efficiency of the transform is further improved, and coupling of the computation with the period value is removed, thereby facilitating parallel implementation of the algorithm. Additionally, a post-processing stage is inserted into the algorithm, enabling unambiguous display of results in a convenient graphical format. Comparison of the quaternionic periodicity transform with two well-known pattern detection techniques shows that the new approach is competitive with these two techniques in detection of exact and approximate repeats.) <|cite_end|>, Dynamical Systems <|cite_start|> (Reference: Introduction to the Modern Theory of Dynamical Systems: ) <|cite_end|> and Control Theory <|cite_start|> (Reference: Liapunov functions and stability in control theory: ) <|cite_end|>, each bringing its own insights.
However, some phenomena are not periodic by nature, even if they are very redundant. Consider for instance the string $abaabaababa$: even though it is not periodic, it clearly exhibits a single pattern, $aba$, and thus we shall call it \emph{quasi-periodic} (see <|cite_start|> (Reference: Of Periods, Quasiperiods, Repetitions and Covers: ) <|cite_end|>). Depending on the specific perturbations this may or may not be adequate. For example, $abaabaababa$ could be a repeated $aba$ that suffers from two $a$s so close together that they fuse (or some other desynchronization), as sounds sometimes do in natural language. In fact, even $abaabaabcba$ and $abaabaabaaca$ exhibit the pattern $aba$, and the nonconforming $c$ could result from some echo or corruption. Depending on the task at hand we may want to retrieve either the information ($aba$) or the peculiarities of its transmission (the non-periodicity).
\vspace*{-1cm}
\begin{figure}
\noindent\begin{minipage}{\textwidth}
\begin{minipage}[c][3cm][c]{\dimexpr0.5\textwidth-5pt\relax}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\draw[thick, black] (-1,0) -- (0,0);
\draw[thick, black] (12.3,0) -- (13.3,0);
\draw[thick, black] (0,0) sin (0.25,2) cos (0.5,0) sin (0.75,-2) cos (1,0) -- (1.1,0);
\draw[thick, black] (1.1,0) sin (1.35,1) cos (1.6,0) sin (1.85,-1) cos (2.1,0) -- (2.2,0);
\draw[thick, black] (2.2,0) sin (2.45,2) cos (2.7,0) sin (2.95,-2) cos (3.2,0) -- (3.4,0);
\draw[thick, black] (3.4,0) sin (3.65,2) cos (3.9,0) sin (4.15,-2) cos (4.4,0) -- (4.5,0);
\draw[thick, black] (4.5,0) sin (4.75,1) cos (5.0,0) sin (5.25,-1) cos (5.5,0) -- (5.6,0);
\draw[thick, black] (5.6,0) sin (5.85,2) cos (6.1,0) sin (6.35,-2) cos (6.6,0) -- (6.8,0);
\draw[thick, black] (6.8,0) sin (7.05,2) cos (7.3,0) sin (7.55,-2) cos (7.8,0) -- (7.9,0);
\draw[thick, black] (7.9,0) sin (8.15,1) cos (8.4,0) sin (8.65,-1) cos (8.9,0) -- (9.0,0);
\draw[thick, black] (9.0,0) sin (9.25,2) cos (9.5,0) sin (9.75,-2) cos (10.0,0) -- (10.1,0);
\draw[thick, dashed, black] (9.0,0) -- (9.1,0) sin (9.35,2) cos (9.6,0) sin (9.85,-2) cos (10.1,0) -- (10.2,0);
\draw[thick, black] (10.2,0) sin (10.45,1) cos (10.7,0) sin (10.95,-1) cos (11.2,0) -- (11.3,0);
\draw[thick, black] (11.3,0) sin (11.55,2) cos (11.8,0) sin (12.05,-2) cos (12.3,0);
\draw (0.5,2) node [anchor=south] {$a$};
\draw (1.6,2) node [anchor=south] {$b$};
\draw (2.7,2) node [anchor=south] {$a$};
\draw (3.9,2) node [anchor=south] {$a$};
\draw (5.0,2) node [anchor=south] {$b$};
\draw (6.1,2) node [anchor=south] {$a$};
\draw (7.3,2) node [anchor=south] {$a$};
\draw (8.4,2) node [anchor=south] {$b$};
\draw (9.5,2) node [anchor=south] {$a$};
\draw (10.7,2) node [anchor=south] {$b$};
\draw (11.8,2) node [anchor=south] {$a$};
\end{tikzpicture}}
\end{minipage}\hfill
\begin{minipage}[c][3cm][c]{\dimexpr0.5\textwidth-5pt\relax}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\draw[thick, black] (-1,0) -- (0,0);
\draw[thick, black] (12.3,0) -- (13.3,0);
\draw[thick, black] (0,0) sin (0.25,2) cos (0.5,0) sin (0.75,-2) cos (1,0) -- (1.1,0);
\draw[thick, black] (1.1,0) sin (1.35,1) cos (1.6,0) sin (1.85,-1) cos (2.1,0) -- (2.2,0);
\draw[thick, black] (2.2,0) sin (2.45,2) cos (2.7,0) sin (2.95,-2) cos (3.2,0) -- (3.4,0);
\draw[thick, black] (3.4,0) sin (3.65,2) cos (3.9,0) sin (4.15,-2) cos (4.4,0) -- (4.5,0);
\draw[thick, black] (4.5,0) sin (4.75,1) cos (5.0,0) sin (5.25,-1) cos (5.5,0) -- (5.6,0);
\draw[thick, black] (5.6,0) sin (5.85,2) cos (6.1,0) sin (6.35,-2) cos (6.6,0) -- (6.8,0);
\draw[thick, black] (6.8,0) sin (7.05,2) cos (7.3,0) sin (7.55,-2) cos (7.8,0) -- (7.9,0);
\draw[thick, black] (7.9,0) sin (8.15,1) cos (8.4,0) sin (8.65,-1) cos (8.9,0) -- (9.0,0);
\draw[thick, black] (9.0,0) sin (9.3,3.9) cos (9.55,0) sin (9.8,-3.9) cos (10.1,0) -- (10.2,0);
\draw[thick, dashed, blue] (9.0,0) sin (9.25,2) cos (9.5,0) sin (9.75,-2) cos (10.0,0) -- (10.2,0);
\draw[thick, dashed, red] (9.0,0) -- (9.1,0) sin (9.35,2) cos (9.6,0) sin (9.85,-2) cos (10.1,0) -- (10.2,0);
\draw[thick, black] (10.2,0) sin (10.45,1) cos (10.7,0) sin (10.95,-1) cos (11.2,0) -- (11.3,0);
\draw[thick, black] (11.3,0) sin (11.55,2) cos (11.8,0) sin (12.05,-2) cos (12.3,0);
\draw (0.5,2) node [anchor=south] {$a$};
\draw (1.6,2) node [anchor=south] {$b$};
\draw (2.7,2) node [anchor=south] {$a$};
\draw (3.9,2) node [anchor=south] {$a$};
\draw (5.0,2) node [anchor=south] {$b$};
\draw (6.1,2) node [anchor=south] {$a$};
\draw (7.3,2) node [anchor=south] {$a$};
\draw (8.4,2) node [anchor=south] {$b$};
\draw (9.5,4) node [anchor=south] {$c$};
\draw (10.7,2) node [anchor=south] {$b$};
\draw (11.8,2) node [anchor=south] {$a$};
\end{tikzpicture}}
\end{minipage}
\begin{minipage}[c][4em][t]{\dimexpr0.5\textwidth-5pt\relax}
\captionof{figure}{The string $aba$ sent repeatedly over a channel as an ASK signal, with a desynchronization moment}
\end{minipage}\hfill
\begin{minipage}[c][4em][t]{\dimexpr0.5\textwidth-5pt\relax}
\captionof{figure}{The string $aba$ sent repeatedly over a channel as an ASK signal, with an echo}
\end{minipage}\hfill
\end{minipage}
\end{figure}
For example, in signal processing we may confidently rely upon periodicity, since we induce it ourselves and operate in an environment about which we may make some assumptions. However, when trying to decode information that was not encoded by us, we may not expect to find periodicity. Even when the information was imbued with periodicity, if the environment exerts a degrading force, it is entirely possible that, a posteriori, it is no longer periodic. If, however, it is not too degraded, it stays faithful to its original form and hence exhibits quasi-periodicity. Note that the incurred perturbations may be inevitable in the typical usage environment, especially for industrial uses <|cite_start|> (Reference: Digital pulse processing in high resolution, high throughput, gamma-ray spectroscopy: A new method for processing signals produced by high resolution, large volume semiconductor detectors is described. These detectors, to be used in the next generation of spectrometer arrays for nuclear research (i.e., EUROBALL, etc.), present a set of problems, such as resolution degradation due to charge trapping and ballistic deficit effects, poor resolution at a high count rate, long term and temperature instability, etc. To solve these problems, a new approach based on digital moving window deconvolution (MWD) is developed. >) <|cite_end|>.
\paragraph*{\sc Related work.}
Quasi-periodicity was introduced by Ehrenfeucht in 1990 (according to <|cite_start|> (Reference: Of Periods, Quasiperiods, Repetitions and Covers: ) <|cite_end|>) in a technical report at Purdue University, even though it was not published by Elsevier until 1993 <|cite_start|> (Reference: Efficient Detection of Quasiperiodicities in Strings: ) <|cite_end|>. Apostolico, Farach and Iliopoulos were the first to consider quasi-periodicity in computer science <|cite_start|> (Reference: Optimal Superprimitivity Testing for Strings: ) <|cite_end|>. They define the quasi-period of a string to be the length of its shortest cover and present a linear (time and space) algorithm for computing it <|cite_start|> (Reference: Optimal Superprimitivity Testing for Strings: ) <|cite_end|>. This notion attracted the attention of numerous researchers <|cite_start|> (Reference: An On-Line String Superprimitivity Test: ) <|cite_end|> <|cite_start|> (Reference: Testing String Superprimitivity in Parallel: ) <|cite_end|> <|cite_start|> (Reference: Computing the Cover Array in Linear Time
: ) <|cite_end|> <|cite_start|> (Reference: An Optimal Algorithm to Compute all the Covers of a String: ) <|cite_end|> <|cite_start|> (Reference: A Correction to "An Optimal Algorithm to Compute all the Covers of a String": ) <|cite_end|>. The following surveys summarize the first decade of results: <|cite_start|> (Reference: Structures in Logic and Computer Science: ) <|cite_end|> <|cite_start|> (Reference: Fast Algorithm for Partial Covers in Words: A factor $u$ of a word $w$ is a cover of $w$ if every position in $w$ lies within some occurrence of $u$ in $w$. A word $w$ covered by $u$ thus generalizes the idea of a repetition, that is, a word composed of exact concatenations of $u$. In this article we introduce a new notion of $\alpha$-partial cover, which can be viewed as a relaxed variant of cover, that is, a factor covering at least $\alpha$ positions in $w$. We develop a data structure of $O(n)$ size (where $n=|w|$) that can be constructed in $O(n\log n)$ time which we apply to compute all shortest $\alpha$-partial covers for a given $\alpha$. We also employ it for an $O(n\log n)$-time algorithm computing a shortest $\alpha$-partial cover for each $\alpha=1,2,\ldots,n$.) <|cite_end|> <|cite_start|> (Reference: Finding Approximate Repetitions under Hamming Distance: ) <|cite_end|>.
However, quasi-periodicity takes many forms, depending on the type of patterns we want to recover. Further work has been concerned with different variants such as seeds <|cite_start|> (Reference: Computing the lambda-Seeds of a String: ) <|cite_end|>, the maximum quasi-periodic substring <|cite_start|> (Reference: Finding Maximal Quasiperiodicities in Strings: ) <|cite_end|>, k-covers <|cite_start|> (Reference: The complexity of the minimum k-cover problem: The k-coversproblem (kCP asks us to compute a minimum cardinality set of strings given length k>1 that covers a given string. It was shown in a recent paper, by reduction to 3 -SAT, that the k-covers problem is NP-complete. In this paper we introduce a new problem, that we call the Relaxed Vertex Cover Problem (RVCP), which we show is a special case of Set Cover (SCP). We show further the kCP is equivalent to RVCP restricted to certain classes GXk of graphs that represent all strings x. We discuss approximate solutions of kCP and we state a number of conjectures and open problems related to kCP and GXk.) <|cite_end|>, $\lambda$-covers <|cite_start|> (Reference: Computing the lambda-Seeds of a String: ) <|cite_end|>, enhanced covers <|cite_start|> (Reference: Enhanced string covering: ) <|cite_end|>, partial covers <|cite_start|> (Reference: Fast Algorithm for Partial Covers in Words: A factor $u$ of a word $w$ is a cover of $w$ if every position in $w$ lies within some occurrence of $u$ in $w$. A word $w$ covered by $u$ thus generalizes the idea of a repetition, that is, a word composed of exact concatenations of $u$. In this article we introduce a new notion of $\alpha$-partial cover, which can be viewed as a relaxed variant of cover, that is, a factor covering at least $\alpha$ positions in $w$. We develop a data structure of $O(n)$ size (where $n=|w|$) that can be constructed in $O(n\log n)$ time which we apply to compute all shortest $\alpha$-partial covers for a given $\alpha$. We also employ it for an $O(n\log n)$-time algorithm computing a shortest $\alpha$-partial cover for each $\alpha=1,2,\ldots,n$.) <|cite_end|>. Another variation point is the context, e.g. indeterminate strings <|cite_start|> (Reference: Conservative string covering of indeterminate strings.: We study the problem of finding local and global covers as well as seeds in conservative indeterminate strings. An indeterminate string is a sequence T = T [1]T [2] . . . T [n], where T [i] ⊆ Σ for each i, and Σ is a given alphabet of fixed size. A conservative indeterminate string, is an indeterminate string where the number of indeterminate symbols in the positions of the string, i.e the non-solid symbols, is bounded by a constant κ. We present an algorithm for finding a conservative indeterminate pattern p in an indeterminate string t. Furthermore, we present algorithms for computing conservative covers and seeds of the string t.) <|cite_end|> or weighted sequences <|cite_start|> (Reference: Computation of repetitions and regularities of biologically weighted sequences: Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. 
Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.) <|cite_end|>. Some of the related problems are $\mathcal{NP}$-hard.
For some applications, such as molecular biology and computer-assisted musical analysis, we need a weaker definition of quasi-periodicity. Thus, quasi-periodicity takes the form of approximate repetitions. We may define an approximately repeating pattern as a substring whose occurrences leave very few gaps, or whose repetitions all stay close to an ``original'' source. Landau and Schmidt were the first to study this form of quasi-periodicity, focusing on approximate tandem repeats <|cite_start|> (Reference: An Algorithm for Approximate Tandem Repeats: ) <|cite_end|>.
In this paper we elaborate on the work of Amir et al. <|cite_start|> (Reference: Can We Recover the Cover?: ) <|cite_end|> <|cite_start|> (Reference: Approximate cover of strings: Abstract Regularities in strings arise in various areas of science, including coding and automata theory, formal language theory, combinatorics, molecular biology and many others. A common notion to describe regularity in a string T is a cover, which is a string C for which every letter of T lies within some occurrence of C. The alignment of the cover repetitions in the given text is called a tiling. In many applications finding exact repetitions is not sufficient, due to the presence of errors. In this paper, we use a new approach for handling errors in coverable phenomena and define the approximate cover problem (ACP), in which we are given a text that is a sequence of some cover repetitions with possible mismatch errors, and we seek a string that covers the text with the minimum number of errors. We first show that the ACP is NP -hard, by studying the cover-length relaxation of the ACP, in which the requested length of the approximate cover is also given with the input string. We show that this relaxation is already NP -hard. We also study another two relaxations of the ACP, which we call the partial-tiling relaxation of the ACP and the full-tiling relaxation of the ACP, in which a tiling of the requested cover is also given with the input string. A given full tiling retains all the occurrences of the cover before the errors, while in a partial tiling there can be additional occurrences of the cover that are not marked by the tiling. We show that the partial-tiling relaxation has a polynomial time complexity and give experimental evidence that the full-tiling also has polynomial time complexity. The study of these relaxations, besides shedding another light on the complexity of the ACP, also involves a deep understanding of the properties of covers, yielding some key lemmas and observations that may be helpful for a future study of regularities in the presence of errors.) <|cite_end|> who introduce \emph{approximate string covers}.
Let $w$ be a string over the alphabet $\Sigma$. We say that $w$ is periodic if it is a succession of non-overlapping repetitions of some proper substring $p$ of it, i.e., $w = p^n$ for some $n\in\mathbb{N}^*$. Note that for a given $w$ there may be multiple candidates. For example, $abaabaabaaba$ can be written both as $\left(abaaba\right)^2$ and as $\left(aba\right)^4$. The period of a string $w$ is the shortest candidate string $p$. For instance, the period of $abaabaabaaba$ is $aba$.
Let $w$ be a string over the alphabet $\Sigma$. We call $p$ a cover of $w$ if $p$ is shorter than $w$ and every character of $w$ belongs to some occurrence of $p$ in $w$. Equivalently, $w$ is covered by $p$ if $w$ is a succession of repetitions of $p$ that may or may not overlap. Note that a periodic string is always covered by its period, and also by any power of the period that is shorter than $w$; hence a string may admit multiple covers. As is the case for periods, we are only interested in the shortest cover. For instance, the shortest cover of $abaabaabaaba$ is $aba$.
Determining the shortest cover of a given string $w$ is called the Minimal \textbf{S}tring \textbf{C}over \textbf{P}roblem (\textbf{SCP} for short) and is solvable in linear time <|cite_start|> (Reference: Optimal Superprimitivity Testing for Strings: ) <|cite_end|>.
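To make these definitions concrete, the following Python sketch is purely illustrative: it is brute force (roughly cubic in $\lvert w\rvert$ overall), not the linear-time algorithm of Apostolico et al.\ cited above. It uses the fact that any cover must cover the first character of $w$ and therefore must be a prefix of $w$; all names are ours.
\begin{verbatim}
def occurrences(p, w):
    """Start positions of the (exact) occurrences of p in w."""
    m = len(p)
    return [i for i in range(len(w) - m + 1) if w[i:i + m] == p]

def covers(p, w):
    """True iff p is shorter than w and every position of w lies in some occurrence of p."""
    if not p or len(p) >= len(w):
        return False
    covered = [False] * len(w)
    for i in occurrences(p, w):
        for j in range(i, i + len(p)):
            covered[j] = True
    return all(covered)

def shortest_cover(w):
    """Brute force over prefixes: any cover of w is necessarily a prefix of w."""
    for m in range(1, len(w)):
        if covers(w[:m], w):
            return w[:m]
    return None   # w is superprimitive: it admits no proper cover

assert shortest_cover("abaabaabaaba") == "aba"
assert shortest_cover("abaabaababa") == "aba"
assert shortest_cover("abaabaabcba") is None
\end{verbatim}
The last assertion shows that a single corrupted letter can destroy coverability altogether, which is exactly what motivates the approximate notion introduced next.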
Let $w$ be a string over the alphabet $\Sigma$. We call $p$ an \emph{approximate cover} of $w$ if $p$ is a cover of an ``approximation'' $w^\prime$ of $w$. The approximation error is the distance between $w$ and $w^\prime$ with respect to some metric. By abuse of notation, we say that $p$ is the approximate string cover of $w$ if it is the shortest cover of the closest approximation $w^\prime$ of $w$ that admits a cover. Note that if $w$ itself admits a cover, then its approximate string cover is its own shortest cover and the approximation error is zero with respect to any metric. For example, the approximate cover of $abaabaababa$ is $aba$.
Determining the approximate cover of a given string $w$ is called the \textbf{A}pproximate String \textbf{C}over \textbf{P}roblem (\textbf{ACP} for short). Amir et al. prove that \textbf{ACP} is NP-hard with respect to the Hamming distance <|cite_start|> (Reference: Approximate cover of strings: Abstract Regularities in strings arise in various areas of science, including coding and automata theory, formal language theory, combinatorics, molecular biology and many others. A common notion to describe regularity in a string T is a cover, which is a string C for which every letter of T lies within some occurrence of C. The alignment of the cover repetitions in the given text is called a tiling. In many applications finding exact repetitions is not sufficient, due to the presence of errors. In this paper, we use a new approach for handling errors in coverable phenomena and define the approximate cover problem (ACP), in which we are given a text that is a sequence of some cover repetitions with possible mismatch errors, and we seek a string that covers the text with the minimum number of errors. We first show that the ACP is NP -hard, by studying the cover-length relaxation of the ACP, in which the requested length of the approximate cover is also given with the input string. We show that this relaxation is already NP -hard. We also study another two relaxations of the ACP, which we call the partial-tiling relaxation of the ACP and the full-tiling relaxation of the ACP, in which a tiling of the requested cover is also given with the input string. A given full tiling retains all the occurrences of the cover before the errors, while in a partial tiling there can be additional occurrences of the cover that are not marked by the tiling. We show that the partial-tiling relaxation has a polynomial time complexity and give experimental evidence that the full-tiling also has polynomial time complexity. The study of these relaxations, besides shedding another light on the complexity of the ACP, also involves a deep understanding of the properties of covers, yielding some key lemmas and observations that may be helpful for a future study of regularities in the presence of errors.) <|cite_end|>.
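For completeness, the definition can be spelled out as an exhaustive search; it reuses the \texttt{shortest\_cover} helper from the previous sketch, enumerates every string $w'$ of length $\lvert w\rvert$ over the alphabet, and is therefore exponential, usable only on toy inputs, which is consistent with the NP-hardness result just cited.
\begin{verbatim}
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def approximate_cover(w, alphabet):
    """Exhaustive search: closest coverable string to w and its shortest cover."""
    best = None                                  # (distance, w', shortest cover of w')
    for cand in product(alphabet, repeat=len(w)):
        wp = ''.join(cand)
        d = hamming(w, wp)
        if best is not None and d >= best[0]:
            continue                             # cannot improve, skip the cover test
        c = shortest_cover(wp)                   # helper from the previous sketch
        if c is not None:
            best = (d, wp, c)
    return best

# the example from the text: a single substitution suffices (takes a few seconds)
print(approximate_cover("abaabaabcba", "abc"))   # (1, 'abaabaababa', 'aba')
\end{verbatim}
Even at eleven characters this already enumerates $3^{11}$ candidate strings, a small but concrete illustration of why the exact problem does not scale.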
Let $w$ be a string over the alphabet $\Sigma$. We call $p$ a seed of $w$ if $\lvert p\rvert < \lvert w\rvert$ and there exists a superstring $w^\prime$ of $w$ such that $p$ is a cover of $w^\prime$. When the error tolerance is small, we can find in polynomial time <|cite_start|> (Reference: Can We Recover the Cover?: ) <|cite_end|>, with a small degree of uncertainty, a small set of candidates that contains either the approximate cover $p$ of $w$ or a seed of $p$.
\paragraph*{\sc Our results.}
In this paper we follow up on the work of Amir et al. <|cite_start|> (Reference: Approximate cover of strings: Abstract Regularities in strings arise in various areas of science, including coding and automata theory, formal language theory, combinatorics, molecular biology and many others. A common notion to describe regularity in a string T is a cover, which is a string C for which every letter of T lies within some occurrence of C. The alignment of the cover repetitions in the given text is called a tiling. In many applications finding exact repetitions is not sufficient, due to the presence of errors. In this paper, we use a new approach for handling errors in coverable phenomena and define the approximate cover problem (ACP), in which we are given a text that is a sequence of some cover repetitions with possible mismatch errors, and we seek a string that covers the text with the minimum number of errors. We first show that the ACP is NP -hard, by studying the cover-length relaxation of the ACP, in which the requested length of the approximate cover is also given with the input string. We show that this relaxation is already NP -hard. We also study another two relaxations of the ACP, which we call the partial-tiling relaxation of the ACP and the full-tiling relaxation of the ACP, in which a tiling of the requested cover is also given with the input string. A given full tiling retains all the occurrences of the cover before the errors, while in a partial tiling there can be additional occurrences of the cover that are not marked by the tiling. We show that the partial-tiling relaxation has a polynomial time complexity and give experimental evidence that the full-tiling also has polynomial time complexity. The study of these relaxations, besides shedding another light on the complexity of the ACP, also involves a deep understanding of the properties of covers, yielding some key lemmas and observations that may be helpful for a future study of regularities in the presence of errors.) <|cite_end|> <|cite_start|> (Reference: Can We Recover the Cover?: ) <|cite_end|> and investigate the \textbf{ACP}.
In Section~\ref{sec:preliminaries} we introduce notation and formally define the \textbf{ACP} problem. In Section~\ref{sec:approx} we present a polynomial approximation algorithm for \textbf{ACP} that returns an approximate cover matching at least $\Omega\left(\sqrt{n}\right)$ characters of the given input $w$, where $n$ is the number of characters matched by the (best) approximate cover of $w$. Then, in Section~\ref{sec:fpt} we design a fixed-parameter (probabilistic) algorithm for solving the \textbf{ACP} for (super)additive metrics---the (super)additive metrics are also defined in Section~\ref{sec:fpt}.
In Section~\ref{sec:pseudo} we show that \textbf{ACP} naturally extends to pseudometrics and that for a family of pseudometrics, which we call \emph{homogenous additive} the complexity of \textbf{ACP} remains unchanged. Finally, in Section~\ref{sec:block} we show that the Hamming distance over any unbounded alphabet is equivalent to an extended metric on any fixed size alphabet. We call this metric a \emph{block variation} of the Hamming distance. We prove that \textbf{ACP} is just as hard with regard to an additive (extended) (pseudo)metric as it is with regard to its block variation. Due to space constraints some proofs are placed in the appendix. <|paper_end|> | [
"<|reference_start|> Quaternionic periodicity transform: an algebraic solution to the tandem repeat detection problem: MOTIVATION\nOne of the main tasks of DNA sequence analysis is identification of repetitive patterns. DNA symbol repetitions play a key role in a number of applications, including prediction of gene and exon locations, identification of diseases, reconstruction of human evolutionary history and DNA forensics.\n\n\nRESULTS\nA new approach towards identification of tandem repeats in DNA sequences is proposed. The approach is a refinement of previously considered method, based on the complex periodicity transform. The refinement is obtained, among others, by mapping of DNA symbols to pure quaternions. This mapping results in an enhanced, symbol-balanced sensitivity of the transform to DNA patterns, and an unambiguous threshold selection criterion. Computational efficiency of the transform is further improved, and coupling of the computation with the period value is removed, thereby facilitating parallel implementation of the algorithm. Additionally, a post-processing stage is inserted into the algorithm, enabling unambiguous display of results in a convenient graphical format. Comparison of the quaternionic periodicity transform with two well-known pattern detection techniques shows that the new approach is competitive with these two techniques in detection of exact and approximate repeats. <|reference_end|>",
"<|reference_start|> Introduction to the Modern Theory of Dynamical Systems: <|reference_end|>",
"<|reference_start|> Digital pulse processing in high resolution, high throughput, gamma-ray spectroscopy: A new method for processing signals produced by high resolution, large volume semiconductor detectors is described. These detectors, to be used in the next generation of spectrometer arrays for nuclear research (i.e., EUROBALL, etc.), present a set of problems, such as resolution degradation due to charge trapping and ballistic deficit effects, poor resolution at a high count rate, long term and temperature instability, etc. To solve these problems, a new approach based on digital moving window deconvolution (MWD) is developed. > <|reference_end|>",
"<|reference_start|> The complexity of the minimum k-cover problem: The k-coversproblem (kCP asks us to compute a minimum cardinality set of strings given length k>1 that covers a given string. It was shown in a recent paper, by reduction to 3 -SAT, that the k-covers problem is NP-complete. In this paper we introduce a new problem, that we call the Relaxed Vertex Cover Problem (RVCP), which we show is a special case of Set Cover (SCP). We show further the kCP is equivalent to RVCP restricted to certain classes GXk of graphs that represent all strings x. We discuss approximate solutions of kCP and we state a number of conjectures and open problems related to kCP and GXk. <|reference_end|>"
] | [
6,
7,
10,
25
] | {"<|multi_cite_1_1|>": "ss-1698329", "<|multi_cite_1_2|>": "ss-1965156", "<|cite_2|>": "ss-2425413", "<|multi_cite_3_1|>": "ss-2425414", "<|multi_cite_3_2|>": "ss-2425415", "<|cite_4|>": "ss-989090", "<|cite_5|>": "ss-2425416", "<|cite_6|>": "ss-1306089", "<|cite_7|>": "ss-2107420", "<|cite_8|>": "ss-1025954", "<|cite_9|>": "ss-2425417", "<|cite_10|>": "ss-1025954", "<|cite_11|>": "ss-1179538", "<|cite_12|>": "ss-1814090", "<|cite_13|>": "ss-1814090", "<|multi_cite_14_1|>": "ss-1674467", "<|multi_cite_14_2|>": "ss-1462844", "<|multi_cite_14_3|>": "ss-1179540", "<|multi_cite_14_4|>": "ss-1179539", "<|multi_cite_14_5|>": "ss-874253", "<|multi_cite_15_1|>": "ss-2425418", "<|multi_cite_15_2|>": "arxiv-54697", "<|multi_cite_15_3|>": "ss-2425419", "<|cite_16|>": "ss-1462837", "<|cite_17|>": "ss-1400662", "<|cite_18|>": "ss-2534512", "<|cite_19|>": "ss-1462837", "<|cite_20|>": "ss-2534514", "<|cite_21|>": "arxiv-54697", "<|cite_22|>": "ss-1932490", "<|cite_23|>": "ss-2469458", "<|cite_24|>": "ss-1487860", "<|multi_cite_25_1|>": "ss-1179541", "<|multi_cite_25_2|>": "ss-1179542", "<|cite_26|>": "ss-1814090", "<|cite_27|>": "ss-1179542", "<|cite_28|>": "ss-1179541", "<|multi_cite_29_1|>": "ss-1179542", "<|multi_cite_29_2|>": "ss-1179541"} |
2010.11746 | <|paper_start|> Title: Iterative Decomposition of Joint Chance Constraints in OPF
Abstract: Iterative Decomposition of Joint Chance Constraints in OPF: In chance-constrained OPF models, joint chance constraints (JCCs) offer a stronger guarantee on security compared to single chance constraints (SCCs). Using Boole's inequality or its improved versions to decompose JCCs into SCCs is popular, yet the conservativeness introduced is still significant. In this letter, a non-parametric iterative framework is proposed to achieve the decomposition of JCCs with negligible conservativeness. An adaptive risk allocation strategy is also proposed and embedded in the framework. Results on an IEEE test case show that the conservativeness using the framework is nearly eliminated, thereby reducing the generation cost considerably.
Introduction
\IEEEPARstart{A}{}wide variety of chance-constrained OPF (CC-OPF) models that account for single chance constraints (SCCs) have been developed <|cite_start|> (Reference: Chance-constrained economic dispatch with non-Gaussian correlated wind power uncertainty: Extending traditional deterministic economic dispatch to incorporate significant stochastic wind power is an important but challenging task in today's power system decision making. In this paper, this issue is formulated as a chance-constrained economic dispatch (CCED) problem. Usually, in the presence of non-Gaussian correlated random variables, both the objective function and constraints are difficult to handle. To address this issue, this paper provides a novel method dealing with non-Gaussian random variables. First, the Gaussian mixture model is adopted to represent the joint probability density function of power output for multiple wind farms. Then, analytical formulae are derived that can be used for fast computation of partial derivatives of the objective function and transformation of chance constraints into linear ones. Thereafter, the CCED can be solved as a deterministic linear convex optimization with a global optimal solution. The effectiveness and efficiency of the proposed methodology are validated via a case study with a modified IEEE 39-bus system.) <|cite_end|>, yet SCCs ignore the simultaneous violation situations in the system <|cite_start|> (Reference: Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow: ) <|cite_end|>. Instead, joint chance constraints (JCCs) offer a stronger guarantee on the overall system security <|cite_start|> (Reference: {DC Optimal Power Flow with Joint Chance Constraints: Managing uncertainty and variability in power injections has become a major concern for power system operators due to increasing levels of fluctuating renewable energy connected to the grid. This work addresses this uncertainty via a joint chance-constrained formulation of the DC optimal power flow (OPF) problem, which satisfies all the constraints jointly with a pre-determined probability. The few existing approaches for solving joint chance-constrained OPF problems are typically either computationally intractable for large-scale problems or give overly conservative solutions that satisfy the constraints far more often than required, resulting in excessively costly operation. This paper proposes an algorithm for solving joint chance-constrained DC OPF problems by adopting an S$\ell _1$QP-type trust-region algorithm. This algorithm uses a sample-based approach that avoids making strong assumptions on the distribution of the uncertainties, scales favorably to large problems, and can be tuned to obtain less conservative results. We illustrate the performance of our method using several IEEE test cases. The results demonstrate the proposed algorithm's advantages in computational times and limited conservativeness of the solutions relative to other joint chance-constrained DC OPF algorithms.) <|cite_end|>, as all of the constraints are considered concurrently <|cite_start|> (Reference: Chance-constrained programming approach to stochastic congestion management considering system uncertainties: Considering system uncertainties in developing power system algorithms such as congestion management (CM) are a vital issue in power system analysis and studies. This study proposes a new model for network CM based on chance-constrained programming (CCP), accounting for the power system uncertainties. 
In the proposed approach, transmission constraints are taken into account by stochastic rather than deterministic models. The proposed approach considers network uncertainties with a specific level of probability in the optimisation process. Then, single and joint chance-constrained models are implemented on the stochastic CM. Finally, an analytical approach is used to derive the new model of the stochastic CM. In both models, the stochastic optimisation problem is transformed into an equivalent easy-to-solve deterministic problem. Effectiveness of the proposed approach is evaluated by applying the method to the IEEE 30-bus test system. The results show that the proposed CCP model outperforms the existing models as the analytical solving approach applies fewer approximations and moreover, may have less complexity and computational burden in some special situations.) <|cite_end|>.
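To fix ideas, and using generic notation rather than that of any of the cited papers, consider $m$ uncertain constraints $g_i(x,\xi)\le 0$ with decision variable $x$ and random vector $\xi$. The two formulations read
\begin{align*}
\text{JCC:}\quad & \Pr\{\, g_i(x,\xi)\le 0,\ i=1,\dots,m \,\} \ \ge\ 1-\epsilon, \\
\text{SCCs:}\quad & \Pr\{\, g_i(x,\xi)\le 0 \,\} \ \ge\ 1-\epsilon_i, \qquad i=1,\dots,m.
\end{align*}
The JCC bounds the probability that any constraint is violated, whereas the SCCs only bound each marginal violation probability; this is why the joint formulation provides the stronger system-wide guarantee discussed above.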
Dealing with JCCs is generally more challenging than dealing with SCCs, because the JCC is composed of both the marginal and the joint violating probabilities of the SCCs. Adopting Boole's inequality to separate the JCC into SCCs is a popular approach <|cite_start|> (Reference: Sequential convex approximations to joint chance constrained programs: A Monte Carlo approach: When there is parameter uncertainty in the constraints of a convex optimization problem, it is natural to formulate the problem as a joint chance constrained program (JCCP), which requires that all constraints be satisfied simultaneously with a given large probability. In this paper, we propose to solve the JCCP by a sequence of convex approximations. We show that the solutions of the sequence of approximations converge to a Karush-Kuhn-Tucker (KKT) point of the JCCP under a certain asymptotic regime. Furthermore, we propose to use a gradient-based Monte Carlo method to solve the sequence of convex approximations.) <|cite_end|>. Nevertheless, Boole's inequality introduces noticeable conservativeness no matter whether the uniform <|cite_start|> (Reference: Chance-Constrained Model Predictive Control for Drinking Water Networks: ) <|cite_end|> or optimal risk allocation is used for each SCC, as the joint violating probabilities of the SCCs are neglected in both cases. To reduce conservativeness, an improved version of Boole's inequality has been derived by estimating the joint violating probability of all SCCs in <|cite_start|> (Reference: Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow: ) <|cite_end|>. Subsequently, a bound-improving method has been developed to approximate the equivalent decomposition of JCCs <|cite_start|> (Reference: Joint Chance Constraints in AC Optimal Power Flow: Improving Bounds through Learning: This paper considers distribution systems with a high penetration of distributed, renewable generation and addresses the problem of incorporating the associated uncertainty into the optimal operation of these networks. Joint chance constraints, which satisfy multiple constraints simultaneously with a prescribed probability, are one way to incorporate uncertainty across sets of constraints, leading to a chance-constrained optimal power flow problem. Departing from the computationally heavy scenario-based approaches or approximations that transform the joint constraint into conservative deterministic constraints; this paper develops a scalable, data-driven approach which learns operational trends in a power network, eliminates zero-probability events (e.g., inactive constraints), and accurately and efficiently approximates bounds on the joint chance constraint iteratively. In particular, the proposed framework improves upon the classic methods based on the union bound (or Boole’s inequality) by generating a much less conservative set of single chance constraints that also guarantees the satisfaction of the original joint constraint. The proposed framework is evaluated numerically using the IEEE 37-node test feeder, focusing on the problem of voltage regulation in distribution grids.) <|cite_end|>. Briefly, this method first identifies binding constraints, then estimates the joint violating probability for any desired combination of binding SCCs, and finally decomposes the JCC into binding SCCs using the probability estimation. Consequently, the OPF solution is obtained. This method offers a fresh perspective on transforming the JCC into SCCs; however, it overlooks two aspects.
Namely, 1) it ignores the interdependence among the constraint classification, the probability estimation, and the OPF solution, meaning that the solution may be incompatible with the classification and estimation and thus fail to satisfy the original JCC; 2) it adopts uniform risk levels for all SCCs, which leads to noticeably conservative results.
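For reference, the Boole-based decomposition that the above approaches start from can be written, in the same generic notation, as
\begin{equation}
\mathrm{Pr}\{\exists\, i:\ g_i(x,\xi)>0\}\;\le\;\sum_{i=1}^{m}\mathrm{Pr}\{g_i(x,\xi)>0\}\;\le\;\sum_{i=1}^{m}\epsilon_i ,
\end{equation}
so enforcing the SCCs with risk levels satisfying $\sum_{i}\epsilon_i\le\epsilon$ (e.g., the uniform choice $\epsilon_i=\epsilon/m$) is sufficient for the JCC. The conservativeness stems from the fact that the union bound neglects the probability mass of joint violations, which is exactly the quantity the improved bounds above attempt to recover.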
This letter also focuses on solving the joint CC-OPF by decomposing JCCs into SCCs. The key contribution is a non-parametric iterative framework that realizes the decomposition with negligible conservativeness, thereby achieving a less costly solution. The implementation of the framework is simple and straightforward, as it does not require tuning parameters or running additional algorithms to search for appropriate parameter values. Compared to the related state-of-the-art methods <|cite_start|> (Reference: Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow: ) <|cite_end|> <|cite_start|> (Reference: Joint Chance Constraints in AC Optimal Power Flow: Improving Bounds through Learning: This paper considers distribution systems with a high penetration of distributed, renewable generation and addresses the problem of incorporating the associated uncertainty into the optimal operation of these networks. Joint chance constraints, which satisfy multiple constraints simultaneously with a prescribed probability, are one way to incorporate uncertainty across sets of constraints, leading to a chance-constrained optimal power flow problem. Departing from the computationally heavy scenario-based approaches or approximations that transform the joint constraint into conservative deterministic constraints; this paper develops a scalable, data-driven approach which learns operational trends in a power network, eliminates zero-probability events (e.g., inactive constraints), and accurately and efficiently approximates bounds on the joint chance constraint iteratively. In particular, the proposed framework improves upon the classic methods based on the union bound (or Boole’s inequality) by generating a much less conservative set of single chance constraints that also guarantees the satisfaction of the original joint constraint. The proposed framework is evaluated numerically using the IEEE 37-node test feeder, focusing on the problem of voltage regulation in distribution grids.) <|cite_end|>, the innovations of the framework are twofold. First, an iterative structure is developed to gradually reach a fixed point where the constraint classification, probability estimation, and OPF solution match each other. In the end, both the classification and the estimation are accurate given the current, stable optimal solution. Second, an adaptive risk allocation strategy is proposed for further relaxation. Accordingly, the overall conservativeness is almost entirely removed.
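As a toy numerical illustration of why adaptive risk allocation pays off relative to the uniform Boole split (this sketch is not the procedure of this letter, whose framework is developed in Section II; the constraint model, the scenario data, and the crude update rule below are all illustrative assumptions), consider the following scenario-based Python example:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: maximize x subject to m uncertain constraints a_i*x + xi_i <= b_i,
# where the xi_i are correlated and observed only through scenarios.
m, n_scen, eps = 3, 20000, 0.05
a = np.array([1.0, 0.8, 1.2])
b = np.array([10.0, 9.0, 12.0])
cov = 0.5 * np.ones((m, m)) + 0.5 * np.eye(m)          # correlated uncertainty
xi = rng.multivariate_normal(np.zeros(m), cov, size=n_scen)

def solve(eps_i):
    # "OPF" stand-in: largest x meeting each single chance constraint at level eps_i
    q = np.array([np.quantile(xi[:, i], 1.0 - eps_i[i]) for i in range(m)])
    return np.min((b - q) / a)

def joint_violation(x):
    # empirical probability that at least one constraint is violated at x
    return np.mean(np.any(a * x + xi > b, axis=1))

eps_i = np.full(m, eps / m)      # uniform (Boole) allocation as a starting point
x = solve(eps_i)
print("Boole:    x = %.3f, joint violation = %.4f" % (x, joint_violation(x)))

for _ in range(30):              # crude adaptive reallocation, for illustration only
    pj = joint_violation(x)
    if abs(pj - eps) < 0.002:
        break
    scale = min(max(eps / max(pj, 1e-9), 0.5), 1.5)   # grow risks while the JCC is slack
    eps_i = np.clip(eps_i * scale, 1e-6, 0.45)
    x = solve(eps_i)
print("Adaptive: x = %.3f, joint violation = %.4f (target %.2f)"
      % (x, joint_violation(x), eps))
\end{verbatim}
In this toy example the uniform split leaves the joint violation probability well below the target because the uncertainties are correlated, while the adaptive loop relaxes the individual risk levels until the joint constraint is nearly tight, yielding a less conservative decision.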
In the following, Section II proposes the non-parametric iterative framework. Section III performs case studies. Section IV concludes this letter and gives an outlook on future work.
\vspace{-0.3cm} <|paper_end|> | [
"<|reference_start|> {DC Optimal Power Flow with Joint Chance Constraints: Managing uncertainty and variability in power injections has become a major concern for power system operators due to increasing levels of fluctuating renewable energy connected to the grid. This work addresses this uncertainty via a joint chance-constrained formulation of the DC optimal power flow (OPF) problem, which satisfies all the constraints jointly with a pre-determined probability. The few existing approaches for solving joint chance-constrained OPF problems are typically either computationally intractable for large-scale problems or give overly conservative solutions that satisfy the constraints far more often than required, resulting in excessively costly operation. This paper proposes an algorithm for solving joint chance-constrained DC OPF problems by adopting an S$\\ell _1$QP-type trust-region algorithm. This algorithm uses a sample-based approach that avoids making strong assumptions on the distribution of the uncertainties, scales favorably to large problems, and can be tuned to obtain less conservative results. We illustrate the performance of our method using several IEEE test cases. The results demonstrate the proposed algorithm's advantages in computational times and limited conservativeness of the solutions relative to other joint chance-constrained DC OPF algorithms. <|reference_end|>",
"<|reference_start|> Chance-Constrained Model Predictive Control for Drinking Water Networks: <|reference_end|>",
"<|reference_start|> Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow: <|reference_end|>",
"<|reference_start|> Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow: <|reference_end|>"
] | [
2,
5,
6,
8
] | {"<|cite_1|>": "ss-1642452", "<|cite_2|>": "ss-1666658", "<|cite_3|>": "ss-1527989", "<|cite_4|>": "ss-1963158", "<|cite_5|>": "ss-1521289", "<|cite_6|>": "ss-1963159", "<|cite_8|>": "ss-1666658", "<|cite_9|>": "ss-853953", "<|multi_cite_10_1|>": "ss-1666658", "<|multi_cite_10_2|>": "ss-853953"} |
1702.01238 | <|paper_start|> Title: Large-scale Image Geo-Localization Using Dominant Sets
Abstract: Large-scale Image Geo-Localization Using Dominant Sets: This paper presents a new approach for the challenging problem of geo-locating an image using image matching in a structured database of city-wide reference images with known GPS coordinates. We cast geo-localization as a clustering problem on local image features. Akin to existing approaches to the problem, our framework builds on low-level features which allow partial matching between images. For each local feature in the query image, we find its approximate nearest neighbors in the reference set. Next, we cluster the features from reference images using Dominant Set clustering, which affords several advantages over existing approaches. First, it permits a variable number of nodes in the cluster, which we use to dynamically select the number of nearest neighbors (typically coming from multiple reference images) for each query feature based on its discrimination value. Second, as we also quantify in our experiments, this approach is several orders of magnitude faster than existing approaches. Thus, we obtain multiple clusters (different local maximizers) and derive a robust final solution to the problem from these multiple weak solutions through constrained Dominant Set clustering on global image features, where we enforce the constraint that the query image must be included in the cluster. This second level of clustering also bypasses heuristic approaches to voting and selecting the reference image that matches the query. We evaluate the proposed framework on an existing dataset of 102k street view images as well as a new dataset of 300k images, and show that it outperforms the state-of-the-art by 20% and 7%, respectively, on the two datasets.
Introduction
\IEEEPARstart{I}{mage} geo-localization, the problem of determining the location of an image using just visual information, is remarkably difficult. Nonetheless, images often contain informative visual and contextual cues which allow us to determine the location of an image with variable confidence. The foremost of these cues are landmarks, architectural details, building textures and colors, in addition to road markings and surrounding vegetation.
Recently, an approach to geo-localization through image matching was proposed in <|cite_start|> (Reference: Accurate Image Localization Based on Google Maps Street View: ) <|cite_end|>. In <|cite_start|> (Reference: Accurate Image Localization Based on Google Maps Street View: ) <|cite_end|>, the authors find the first nearest neighbor (NN) for each local feature in the query image, prune outliers and use a heuristic voting scheme for selecting the matched reference image. The follow-up work relaxed the restriction of using only the first NN and proposed a Generalized Minimum Clique Problem (GMCP) formulation for solving this problem. However, the GMCP formulation can only handle a fixed number of nearest neighbors for each query feature. The authors used 5 NNs and found that increasing the number of NNs degrades performance. Additionally, the GMCP formulation selects exactly one NN per query feature. This makes the optimization sensitive to outliers, since it is possible that none of the 5 NNs is correct. Once the best NN is selected for each query feature, a very simple voting scheme is used to select the best match. Effectively, each query feature votes for a single reference image, from which the NN was selected for that particular query feature. This often results in an identical number of votes for several images from the reference set. Then, both <|cite_start|> (Reference: Accurate Image Localization Based on Google Maps Street View: ) <|cite_end|> proceed with randomly selecting one reference image as the correct match to infer the GPS location of the query image. Furthermore, the GMCP is a binary-variable NP-hard problem, and due to the high computational cost, only a single local-minimum solution is computed in.
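To make the baseline concrete, the single-NN voting scheme described above can be sketched as follows (a schematic illustration only; the array layout, the distance metric, and the random tie-breaking rule are our assumptions rather than details taken from the cited implementation):
\begin{verbatim}
import numpy as np

def vote_for_reference(query_desc, ref_desc, ref_image_id, rng=None):
    # Each query feature votes for the reference image that owns its single
    # nearest-neighbor descriptor; ties in the final tally are broken randomly.
    rng = rng or np.random.default_rng()
    votes = {}
    for q in query_desc:                          # q: one local descriptor
        d = np.linalg.norm(ref_desc - q, axis=1)  # distances to all reference descriptors
        img = ref_image_id[int(np.argmin(d))]     # image owning the 1-NN
        votes[img] = votes.get(img, 0) + 1
    best = max(votes.values())
    winners = [im for im, v in votes.items() if v == best]
    return rng.choice(winners), votes             # random choice among tied images
\end{verbatim}
The random tie-break in the last line is exactly the step that can send the estimate to a wrong, but equally voted, reference image.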
In this paper, we propose an approach to image geo-localization by robustly finding a matching reference image for a given query image. This is done by finding correspondences between local features of the query and reference images. We first introduce automatic NN selection into our framework by exploiting the discriminative power of each NN feature and employing a different number of NNs for each query feature. That is, if the distances between a query feature and its reference NNs are similar, then we use several NNs, since they are ambiguous, and the optimization is afforded more choices from which to select the correct match. On the other hand, if a query feature has very few low-distance reference NNs, then we use fewer NNs to save computation cost. Thus, in some cases we use fewer NNs, while in others we use more, requiring on average approximately the same amount of computation but nonetheless improving performance. This also bypasses the manual tuning of the number of NNs to be considered, which can vary between datasets and is not straightforward.
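One plausible way to realize such a dynamic neighbor selection is a distance-ratio rule, sketched below; the specific threshold and cap are illustrative assumptions, not necessarily the exact criterion used in this work:
\begin{verbatim}
import numpy as np

def dynamic_neighbours(nn_dists, k_min=1, k_max=10, ratio=1.2):
    # nn_dists: distances from one query feature to its reference NNs,
    # sorted in ascending order. Keep more neighbors when they are ambiguous
    # (similar distances) and fewer when the closest match clearly dominates.
    nn_dists = np.asarray(nn_dists)
    keep = k_min
    for k in range(k_min, min(k_max, len(nn_dists))):
        if nn_dists[k] <= ratio * nn_dists[0]:   # still close to the best match
            keep = k + 1
        else:
            break
    return keep                                   # number of NNs to retain
\end{verbatim}
Under such a rule, discriminative query features contribute only their strongest matches, while ambiguous ones pass several candidates on to the clustering stage.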
Our approach to image geo-localization is based on \textit{Dominant Set clustering} (DSC), a well-known generalization of the maximal clique problem to edge-weighted graphs, where the goal is to extract the most compact and coherent set. Its intriguing connections to evolutionary game theory allow us to use efficient game dynamics, such as replicator dynamics and infection-immunization dynamics (InImDyn). InImDyn has been shown to have linear time/space complexity for solving standard quadratic programs (StQPs), programs which deal with finding the extrema of a quadratic polynomial over the standard simplex <|cite_start|> (Reference: Infection and immunization: A new class of evolutionary game dynamics: ) <|cite_end|> <|cite_start|> (Reference: Graph-based quadratic optimization: A fast evolutionary approach: ) <|cite_end|>. The proposed approach is on average 200 times faster and yields an improvement of 20\% in geo-localization accuracy compared to <|cite_start|> (Reference: Accurate Image Localization Based on Google Maps Street View: ) <|cite_end|>. This is made possible, in addition to the efficient dynamics, by the flexibility inherent in DSC, which, unlike the GMCP formulation, avoids any hard constraints on memberships. This naturally handles outliers, since their membership scores are lower than those of the inliers in the cluster. Furthermore, our solution uses a linear relaxation of the binary variables, which in the absence of hard constraints is solved through an iterative algorithm, resulting in a massive speed-up.
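For completeness, extracting a dominant set amounts to finding a local maximizer of the StQP $\max_{\mathbf{x}\in\Delta}\ \mathbf{x}^{T}A\mathbf{x}$ over the standard simplex $\Delta$, which the classical replicator dynamics do as follows (a standard textbook formulation, shown here only for illustration; InImDyn follows a different and faster update schedule):
\begin{verbatim}
import numpy as np

def dominant_set(A, tol=1e-8, max_iter=1000):
    # Replicator dynamics for the StQP: maximize x^T A x over the standard simplex.
    # A is assumed to be a symmetric, non-negative affinity matrix.
    n = A.shape[0]
    x = np.full(n, 1.0 / n)              # start from the simplex barycenter
    for _ in range(max_iter):
        Ax = A @ x
        denom = x @ Ax
        if denom <= 0:                    # degenerate affinities: nothing to extract
            break
        x_new = x * Ax / denom           # multiplicative (replicator) update
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return x                              # the support of x gives the cluster members
\end{verbatim}
Each iteration is a simple matrix-vector product, which is what makes repeated extraction of multiple clusters affordable.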
Since the dynamics and the linear relaxation of the binary variables make our method extremely fast, we run it multiple times to obtain several local maxima as solutions. Next, we use a query-based variation of DSC to combine those solutions into a final robust solution. The query-based DSC uses the soft constraint that the query, or a group of queries, must always become part of the cluster, thus ensuring their membership in the solution. We use a fusion of several global features to compute the cost between the query and the reference images selected in the previous step. The members of the cluster from the reference set are used to find the geo-location of the query image. Note that the GPS location of the matching reference image is also used as a cost, in addition to the visual features, to ensure both visual similarity and geographical proximity.
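A common way to enforce such a query constraint in the dominant-set framework is to penalize the vertices outside the query set on the diagonal of the affinity matrix, so that any extracted cluster has to involve the query. The sketch below follows this idea; the choice of the penalty and the uniform shift are illustrative assumptions rather than the exact settings of our method:
\begin{verbatim}
import numpy as np

def constrained_dominant_set(A, query_idx, alpha=None, tol=1e-8, max_iter=1000):
    # A: symmetric, non-negative affinity matrix; query_idx: indices that must
    # participate in the extracted cluster (the query image(s)).
    n = A.shape[0]
    outside = np.setdiff1d(np.arange(n), np.atleast_1d(query_idx))
    if alpha is None and outside.size:
        # one sufficient choice used here for illustration: exceed the largest
        # eigenvalue of the sub-affinity matrix restricted to non-query vertices
        alpha = np.linalg.eigvalsh(A[np.ix_(outside, outside)]).max() + 1e-6
    B = A.astype(float).copy()
    B[outside, outside] -= alpha or 0.0   # diagonal penalty on non-query vertices
    B -= min(B.min(), 0.0)                # uniform shift; simplex maximizers unchanged
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):             # same replicator iteration as before
        Bx = B @ x
        denom = x @ Bx
        if denom <= 0:
            break
        x_new = x * Bx / denom
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return x                               # support contains the query and its cluster
\end{verbatim}
In this sketch the diagonal penalty makes it unprofitable for a cluster to exclude the query, which is the soft-constraint behavior described above.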
GPS-tagged reference image databases collected from user-uploaded images on Flickr have typically been used for the geo-localization task. The query images in our experiments were collected from Flickr; however, the reference images were collected from Google Street View. The data collected through Flickr and Google Street View differ in several important aspects: the images downloaded from Flickr are often redundant and repetitive, where images of a particular building, landmark or street are captured multiple times by different users. Typically, popular or tourist spots have relatively more images in the testing and reference sets compared to less interesting parts of the urban environment. An important constraint during evaluation is that the distribution of testing images should be similar to that of the reference images. In contrast, the Google Street View reference data used in this paper contains only a single sample of each location in the city. However, Street View does provide spherical $360^{\circ}$ panoramic views, approximately 12 meters apart, of most streets and roads. Thus, the images are uniformly distributed over different locations, independent of their popularity. The comprehensiveness of the data ensures that a correct match exists; nonetheless, the sparsity, or uniform distribution, of the data makes geo-localization difficult, since every location is captured in only a few of the reference images. The difficulty is compounded by the distorted, low-quality nature of the images as well.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We present a robust and computationally efficient approach for the problem of large-scale image geo-localization by locating images in a structured database of city-wide reference images with known GPS coordinates.
\item We formulate the geo-localization problem in terms of a more generalized form of the dominant sets framework, which incorporates weights on the nodes in addition to the edges.
\item We take a two-step approach to solve the problem. The first step uses local features to find a putative set of reference images (and is therefore faster), whereas the second step uses global features and a constrained variation of dominant sets to refine the results of the first step, thereby significantly boosting geo-localization performance.
\item We have collected a new and more challenging high-resolution reference dataset (the \textit{\textbf{WorldCities}} dataset) of 300K Google Street View images.
\end{itemize}
The rest of the paper is structured as follows. We present literature relevant to our problem in Sec. \ref{secRelatedWork}, followed by technical details of the proposed approach in Sec. \ref{secFramework}, while the constrained dominant set based post-processing step is discussed in Sec. \ref{post-processing}. This is followed by the dataset description in Sec. \ref{Dataset_discription}. Finally, we provide results of our extensive evaluation in Sec. \ref{secExperiments} and conclude in Sec. \ref{secConclusion}.
Related Work
\label{secRelatedWork}
The computer vision literature on the problem of geo-localization can be divided into three categories depending on the scale of the datasets used: landmarks or buildings <|cite_start|> (Reference: Retrieving landmark and non-landmark images from community photo collections: State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.) <|cite_end|> <|cite_start|> (Reference: {City-scale Landmark Identification on Mobile Devices: With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data — facade-aligned and viewpoint-aligned — and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. given by noisy GPS readings or network cells), which previous approaches often ignore. Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.) <|cite_end|> <|cite_start|> (Reference: World-Scale Mining of Objects and Events from Community Photo Collections: In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. 
We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.) <|cite_end|> <|cite_start|> (Reference: Tour the world: building a web-scale landmark recognition engine: Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) ~20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.) <|cite_end|>, city-scale including streetview data <|cite_start|> (Reference: Predicting good features for image geo-localization using per-bundle vlad: We address the problem of recognizing a place depicted in a query image by using a large database of geo-tagged images at a city-scale. In particular, we discover features that are useful for recognizing a place in a data-driven manner, and use this knowledge to predict useful features in a query image prior to the geo-localization process. This allows us to achieve better performance while reducing the number of features. Also, for both learning to predict features and retrieving geo-tagged images from the database, we propose per-bundle vector of locally aggregated descriptors (PBVLAD), where each maximally stable region is described by a vector of locally aggregated descriptors (VLAD) on multiple scale-invariant features detected within the region. Experimental results show the proposed approach achieves a significant improvement over other baseline methods.) <|cite_end|>, and worldwide <|cite_start|> (Reference: IM2GPS: Estimating Geographic Information from a Single Image: Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earthpsilas surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). 
We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.) <|cite_end|> <|cite_start|> (Reference: Large-Scale Image Geolocalization: ) <|cite_end|> <|cite_start|> (Reference: PlaNet - Photo Geolocation with Convolutional Neural Networks: ) <|cite_end|>. Landmark recognition is typically formulated as an image retrieval problem <|cite_start|> (Reference: Retrieving landmark and non-landmark images from community photo collections: State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.) <|cite_end|> <|cite_start|> (Reference: World-Scale Mining of Objects and Events from Community Photo Collections: In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.) <|cite_end|> <|cite_start|> (Reference: Tour the world: building a web-scale landmark recognition engine: Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. 
First, a comprehensive list of landmarks is mined from two sources: (1) ~20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.) <|cite_end|> <|cite_start|> (Reference: I know what you did last summer: Object-level auto-annotation of holiday snaps: The state-of-the art in visual object retrieval from large databases allows to search millions of images on the object level. Recently, complementary works have proposed systems to crawl large object databases from community photo collections on the Internet. We combine these two lines of work to a large-scale system for auto-annotation of holiday snaps. The resulting method allows for automatic labeling objects such as landmark buildings, scenes, pieces of art etc. at the object level in a fully automatic manner. The labeling is multi-modal and consists of textual tags, geographic location, and related content on the Internet. Furthermore, the efficiency of the retrieval process is optimized by creating more compact and precise indices for visual vocabularies using background information obtained in the crawling stage of the system. We demonstrate the scalability and precision of the proposed method by conducting experiments on millions of images downloaded from community photo collections on the Internet.) <|cite_end|> <|cite_start|> (Reference: From Images to Scenes: Compressing an Image Cluster into a Single Scene Model for Place Recognition: The recognition of a place depicted in an image typically adopts methods from image retrieval in large-scale databases. First, a query image is described as a “bag-of-features” and compared to every image in the database. Second, the most similar images are passed to a geometric verification stage. However, this is an inefficient approach when considering that some database images may be almost identical, and many image features may not repeatedly occur. We address this issue by clustering similar database images to represent distinct scenes, and tracking local features that are consistently detected to form a set of real-world landmarks. Query images are then matched to landmarks rather than features, and a probabilistic model of landmark properties is learned from the cluster to appropriately verify or reject putative feature matches. We present novelties in both a bag-of-features retrieval and geometric verification stage based on this concept. Results on a database of 200K images of popular tourist destinations show improvements in both recognition performance and efficiency compared to traditional image retrieval methods.) <|cite_end|>. For geo-localization of landmarks and buildings, Crandall \etal <|cite_start|> (Reference: Mapping the World's Photos: We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. 
Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale.) <|cite_end|> perform structural analysis in the form of spatial distribution of millions of geo-tagged photos. This is used in conjunction with visual and meta data from images to geo-locate them. The datasets for this category contain many images near prominent landmarks or images. Therefore, in many works <|cite_start|> (Reference: Retrieving landmark and non-landmark images from community photo collections: State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.) <|cite_end|> <|cite_start|> (Reference: World-Scale Mining of Objects and Events from Community Photo Collections: In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.) 
<|cite_end|>, similar-looking images belonging to the same landmarks are often grouped before geo-localization is undertaken.
For citywide geo-localization of query images, Zamir and Shah <|cite_start|> (Reference: Accurate Image Localization Based on Google Maps Street View: ) <|cite_end|> performed matching using SIFT features, where each feature votes for a reference image. The vote map is then smoothed geo-spatially and the peak in the vote map is selected as the location of the query image. They also compute 'confidence of localization' using the Kurtosis measure as it quantifies the peakiness of vote map distribution. The extension of this work in formulates the geo-localization as a clique-finding problem where the authors relax the constraint of using only one nearest neighbor per query feature. The best match for each query feature is then solved using Generalized Minimum Clique Graphs, so that a simultaneous solution is obtained for all query features in contrast to their previous work <|cite_start|> (Reference: Accurate Image Localization Based on Google Maps Street View: ) <|cite_end|>. In similar vein, Schindler \etal <|cite_start|> (Reference: City-Scale Location Recognition: We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3times104 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.) <|cite_end|> used a dataset of 30,000 images corresponding to 20 kilometers of street-side data captured through a vehicle using vocabulary tree. Sattler \etal <|cite_start|> (Reference: Large-Scale Location Recognition And The Geometric Burstiness Problem: Visual location recognition is the task of determining the place depicted in a query image from a given database of geo-tagged images. Location recognition is often cast as an image retrieval problem and recent research has almost exclusively focused on improving the chance that a relevant database image is ranked high enough after retrieval. The implicit assumption is that the number of inliers found by spatial verification can be used to distinguish between a related and an unrelated database photo with high precision. In this paper, we show that this assumption does not hold for large datasets due to the appearance of geometric bursts, i.e., sets of visual elements appearing in similar geometric configurations in unrelated database photos. We propose algorithms for detecting and handling geometric bursts. Although conceptually simple, using the proposed weighting schemes dramatically improves the recall that can be achieved when high precision is required compared to the standard re-ranking based on the inlier count. Our approach is easy to implement and can easily be integrated into existing location recognition systems.) <|cite_end|> investigated ways to explicitly handle geometric bursts by analyzing the geometric relations between the different database images retrieved by a query. 
Arandjelović \etal <|cite_start|> (Reference: NetVLAD: CNN architecture for weakly supervised place recognition: We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.) <|cite_end|> developed a convolutional neural network architecture for place recognition that aggregates mid-level (conv5) convolutional features extracted from the entire image into a compact single vector representation amenable to efficient indexing.
Torii \etal <|cite_start|> (Reference: Visual Place Recognition with Repetitive Structures: Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.) <|cite_end|> exploited repetitive structures for visual place recognition by robustly detecting repeated image structures and applying a simple modification of the weights in the bag-of-visual-words model. Zeisl \etal <|cite_start|> (Reference: Camera pose voting for large-scale image-based localization: Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus facilitates to consider much more matches than previous approaches - whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state-of-the-art. At the same time, we show that using more matches does not automatically lead to a better performance.) <|cite_end|> proposed a voting-based pose estimation strategy that exhibits linear complexity in the number of matches and thus makes it possible to consider many more matches.
For geo-localization at the global scale, Hays and Efros <|cite_start|> (Reference: IM2GPS: Estimating Geographic Information from a Single Image: Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earthpsilas surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.) <|cite_end|> were the first to extract coarse geographical location of query images using Flickr collected across the world. Recently, Weyand \etal <|cite_start|> (Reference: PlaNet - Photo Geolocation with Convolutional Neural Networks: ) <|cite_end|> pose the problem of geo-locating images in terms of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geo-tagged images. In the regions where the coverage of photos is dense, structure-from-motion reconstruction is used for matching query images <|cite_start|> (Reference: {Building Rome in a day: We present a system that can match and reconstruct 3D scenes from extremely large collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo sharing sites. Our system uses a collection of novel parallel distributed matching and reconstruction algorithms, designed to maximize parallelism at each stage in the pipeline and minimize serialization bottlenecks. It is designed to scale gracefully with both the size of the problem and the amount of available computation. We have experimented with a variety of alternative algorithms at each stage of the pipeline and report on which ones work best in a parallel computing environment. Our experimental results demonstrate that it is now possible to reconstruct cities consisting of 150K images in less than a day on a cluster with 500 compute cores.) <|cite_end|> <|cite_start|> (Reference: Worldwide Pose Estimation Using 3D Point Clouds: ) <|cite_end|> <|cite_start|> (Reference: Fast Image-based Localization using Direct 2D-to-3D Matching: Recently developed Structure from Motion (SfM) reconstruction approaches enable the creation of large scale 3D models of urban scenes. These compact scene representations can then be used for accurate image-based localization, creating the need for localization approaches that are able to efficiently handle such large amounts of data. An important bottleneck is the computation of 2D-to-3D correspondences required for pose estimation. Current stateof- the-art approaches use indirect matching techniques to accelerate this search. In this paper we demonstrate that direct 2D-to-3D matching methods have a considerable potential for improving registration performance. 
We derive a direct matching framework based on visual vocabulary quantization and a prioritized correspondence search. Through extensive experiments, we show that our framework efficiently handles large datasets and outperforms current state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Improving Image-Based Localization by Active Correspondence Search: ) <|cite_end|>. Since the difficulty of the problem increases as we move from landmarks to city-scale and finally to worldwide, the performance also drops.
There are many interesting variations to the geo-localization problem as well. Sequential information such as chronological order of photos was used by <|cite_start|> (Reference: Image Sequence Geolocation with Human Travel Priors: This paper presents a method for estimating geographic location for sequences of time-stamped photographs. A prior distribution over travel describes the likelihood of traveling from one location to another during a given time interval. This distribution is based on a training database of 6 million photographs from Flickr.com. An image likelihood for each location is defined by matching a test photograph against the training database. Inferring location for images in a test sequence is then performed using the Forward-Backward algorithm, and the model can be adapted to individual users as well. Using temporal constraints allows our method to geolocate images without recognizable landmarks, and images with no geographic cues whatsoever. This method achieves a substantial performance improvement over the best-available baseline, and geolocates some users' images with near-perfect accuracy.) <|cite_end|> to geo-locate photos. Similarly, there are methods to find trajectory of a moving camera by geo-locating video frames using Bayesian Smoothing <|cite_start|> (Reference: City scale geo-spatial trajectory estimation of a moving camera: This paper presents a novel method for estimating the geospatial trajectory of a moving camera with unknown intrinsic parameters, in a city-scale urban environment. The proposed method is based on a three step process that includes: 1) finding the best visual matches of individual images to a dataset of geo-referenced street view images, 2) Bayesian tracking to estimate the frame localization and its temporal evolution, and 3) a trajectory reconstruction algorithm to eliminate inconsistent estimations. As a result of matching features in query image with the features in the reference geo-taged images, in the first step, we obtain a distribution of geolocated votes of matching features which is interpreted as the likelihood of the location (latitude and longitude) given the current observation. In the second step, Bayesian tracking framework is used to estimate the temporal evolution of frame geolocalization based on the previous state probabilities and current likelihood. Finally, once a trajectory is estimated, we perform a Minimum Spanning Trees (MST) based trajectory reconstruction algorithm to eliminate trajectory loops or noisy estimations. The proposed method was tested on sixty minutes of video, which included footage downloaded from YouTube and footage captured by random users in Orlando and Pittsburgh.) <|cite_end|> or geometric constraints <|cite_start|> (Reference: Estimating geospatial trajectory of a moving camera: This paper proposes a novel method for estimating the geospatial trajectory of a moving camera. The proposed method uses a set of reference images with known GPS (global positioning system) locations to recover the trajectory of a moving camera using geometric constraints. The proposed method has three main steps. First, scale invariant features transform (SIFT) are detected and matched between the reference images and the video frames to calculate a weighted adjacency matrix (WAM) based on the number of SIFT matches. 
Second, using the estimated WAM, the maximum matching reference image is selected for the current video frame, which is then used to estimate the relative position (rotation and translation) of the video frame using the fundamental matrix constraint. The relative position is recovered up to a scale factor and a triangulation among the video frame and two reference images is performed to resolve the scale ambiguity. Third, an outlier rejection and trajectory smoothing (using b-spline) post processing step is employed. This is because the estimated camera locations may be noisy due to bad point correspondence or degenerate estimates of fundamental matrices. Results of recovering camera trajectory are reported for real sequences) <|cite_end|>. Chen and Grauman <|cite_start|> (Reference: Clues from the beaten path: Location estimation with bursty sequences of tourist photos: Image-based location estimation methods typically recognize every photo independently, and their resulting reliance on strong visual feature matches makes them most suited for distinctive landmark scenes. We observe that when touring a city, people tend to follow common travel patterns — for example, a stroll down Wall Street might be followed by a ferry ride, then a visit to the Statue of Liberty. We propose an approach that learns these trends directly from online image data, and then leverages them within a Hidden Markov Model to robustly estimate locations for novel sequences of tourist photos. We further devise a set-to-set matching-based likelihood that treats each “burst” of photos from the same camera as a single observation, thereby better accommodating images that may not contain particularly distinctive scenes. Our experiments with two large datasets of major tourist cities clearly demonstrate the approach's advantages over methods that recognize each photo individually, as well as a simpler HMM baseline that lacks the proposed burst-based observation model.) <|cite_end|> present Hidden Markov Model approach to match sets of images with sets in the database for location estimation. Lin \etal <|cite_start|> (Reference: Cross-{{View Image Geolocalization: The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km2 region containing a variety of scenes and land cover types. 
For each query, our algorithm produces a probability density over the region of interest.) <|cite_end|> use aerial imagery in conjunction with ground images for geo-localization. Others <|cite_start|> (Reference: {Learning deep representations for ground-to-aerial geolocalization: The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.) <|cite_end|> <|cite_start|> (Reference: Wide-Area Image Geolocalization with Aerial Reference Imagery: We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.) <|cite_end|> approach the problem by matching ground images against a database of aerial images. Jacob \etal <|cite_start|> (Reference: Geolocating static cameras: A key problem in widely distributed camera networks is locating the cameras. This paper considers three scenarios for camera localization: localizing a camera in an unknown environment, adding a new camera in a region with many other cameras, and localizing a camera by finding correlations with satellite imagery. We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene. We present results from a database of images from 538 cameras collected over the course of a year. 
We find that for cameras that remain stationary and for which we have accurate image times- tamps, we can localize most cameras to within 50 miles of the known location. In addition, we demonstrate the use of a distributed camera network in the construction a map of weather conditions.) <|cite_end|> geo-localize a webcam by correlating its video-stream with satellite weather maps over the same time period. Skyline2GPS <|cite_start|> (Reference: Skyline2GPS: Localization in urban canyons using omni-skylines: This paper investigates the problem of geo-localization in GPS challenged urban canyons using only skylines. Our proposed solution takes a sequence of upward facing omnidirectional images and coarse 3D models of cities to compute the geo-trajectory. The camera is oriented upwards to capture images of the immediate skyline, which is generally unique and serves as a fingerprint for a specific location in a city. Our goal is to estimate global position by matching skylines extracted from omni-directional images to skyline segments from coarse 3D city models. Under day-time and clear sky conditions, we propose a sky-segmentation algorithm using graph cuts for estimating the geo-location. In cases where the skyline gets affected by partial fog, night-time and occlusions from trees, we propose a shortest path algorithm that computes the location without prior sky detection. We show compelling experimental results for hundreds of images taken in New York, Boston and Tokyo under various weather and lighting conditions (daytime, foggy dawn and night-time).) <|cite_end|> uses street view data and segments the skyline in an image captured by an upward-facing camera by matching it against a 3D model of the city.
Feature discriminativity has been explored by <|cite_start|> (Reference: DisLocation: Scalable Descriptor Distinctiveness for Location Recognition: ) <|cite_end|>, who use local density of descriptor space as a measure of descriptor distinctiveness, i.e. descriptors which are in a densely populated region of the descriptor space are deemed to be less distinctive. Similarly, Bergamo \etal <|cite_start|> (Reference: Leveraging structure from motion to learn discriminative codebooks for scalable landmark classification: In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during code-book learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable.) <|cite_end|> leverage Structure from Motion to learn discriminative codebooks for recognition of landmarks. In contrast, Cao and Snavely <|cite_start|> (Reference: Graph-Based Discriminative Learning for Location Recognition: ) <|cite_end|> build a graph over the image database, and learn local discriminative models over the graph which are used for ranking database images according to the query. Similarly, Gronat \etal <|cite_start|> (Reference: Learning and Calibrating Per-Location Classifiers for Visual Place Recognition: ) <|cite_end|> train discriminative classifier for each landmark and calibrate them afterwards using statistical significance measures. Instead of exploiting discriminativity, some works use similarity of features to detect repetitive structures to find locations of images. For instance, Torii \etal <|cite_start|> (Reference: Visual Place Recognition with Repetitive Structures: Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. 
Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.) <|cite_end|> consider a similar idea and find repetitive patterns among features for place recognition. Similarly, Hao \etal <|cite_start|> (Reference: 3d visual phrases for landmark recognition: In this paper, we study the problem of landmark recognition and propose to leverage 3D visual phrases to improve the performance. A 3D visual phrase is a triangular facet on the surface of a reconstructed 3D landmark model. In contrast to existing 2D visual phrases which are mainly based on co-occurrence statistics in 2D image planes, such 3D visual phrases explicitly characterize the spatial structure of a 3D object (landmark), and are highly robust to projective transformations due to viewpoint changes. We present an effective solution to discover, describe, and detect 3D visual phrases. The experiments on 10 landmarks have achieved promising results, which demonstrate that our approach provides a good balance between precision and recall of landmark recognition while reducing the dependence on post-verification to reject false positives.) <|cite_end|> incorporate geometry between low-level features, termed 'visual phrases', to improve the performance on landmark recognition.
Our work is situated in the middle category: given a database of images from a city or a group of cities, we aim to find the location from which a test image was taken. Unlike landmark recognition methods, the query image may or may not contain landmarks or prominent buildings. Likewise, in contrast to methods employing reference images from around the globe, the street view data contains almost exclusively man-made structures and only rarely natural scenes such as mountains, waterfalls or beaches.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth ,trim=0cm 1.8cm 0cm 2.3cm, clip]{New_Overview.pdf}
\caption{Overview of the proposed method.}
\label{Overview}
\end{figure*} <|paper_end|> | [
"<|reference_start|> World-Scale Mining of Objects and Events from Community Photo Collections: In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date. <|reference_end|>",
"<|reference_start|> Large-Scale Location Recognition And The Geometric Burstiness Problem: Visual location recognition is the task of determining the place depicted in a query image from a given database of geo-tagged images. Location recognition is often cast as an image retrieval problem and recent research has almost exclusively focused on improving the chance that a relevant database image is ranked high enough after retrieval. The implicit assumption is that the number of inliers found by spatial verification can be used to distinguish between a related and an unrelated database photo with high precision. In this paper, we show that this assumption does not hold for large datasets due to the appearance of geometric bursts, i.e., sets of visual elements appearing in similar geometric configurations in unrelated database photos. We propose algorithms for detecting and handling geometric bursts. Although conceptually simple, using the proposed weighting schemes dramatically improves the recall that can be achieved when high precision is required compared to the standard re-ranking based on the inlier count. Our approach is easy to implement and can easily be integrated into existing location recognition systems. <|reference_end|>",
"<|reference_start|> Improving Image-Based Localization by Active Correspondence Search: <|reference_end|>",
"<|reference_start|> Geolocating static cameras: A key problem in widely distributed camera networks is locating the cameras. This paper considers three scenarios for camera localization: localizing a camera in an unknown environment, adding a new camera in a region with many other cameras, and localizing a camera by finding correlations with satellite imagery. We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene. We present results from a database of images from 538 cameras collected over the course of a year. We find that for cameras that remain stationary and for which we have accurate image times- tamps, we can localize most cameras to within 50 miles of the known location. In addition, we demonstrate the use of a distributed camera network in the construction a map of weather conditions. <|reference_end|>"
] | [
21,
25,
34,
42
] | {"<|multi_cite_1_1|>": "ss-1092531", "<|cite_2|>": "ss-1092531", "<|multi_cite_4_1|>": "ss-1092531", "<|multi_cite_6_1|>": "ss-2279157", "<|multi_cite_6_2|>": "ss-989483", "<|multi_cite_7_1|>": "ss-1092531", "<|multi_cite_8_1|>": "ss-1394748", "<|multi_cite_8_2|>": "ss-1393501", "<|multi_cite_8_3|>": "ss-1713988", "<|multi_cite_8_4|>": "ss-1167161", "<|cite_9|>": "ss-997536", "<|multi_cite_10_1|>": "ss-800561", "<|multi_cite_10_2|>": "ss-1234317", "<|multi_cite_10_3|>": "ss-1263321", "<|multi_cite_11_1|>": "ss-1394748", "<|multi_cite_11_2|>": "ss-1713988", "<|multi_cite_11_3|>": "ss-1167161", "<|multi_cite_11_4|>": "ss-864151", "<|multi_cite_11_5|>": "ss-1008449", "<|cite_12|>": "ss-800560", "<|multi_cite_13_1|>": "ss-1394748", "<|multi_cite_13_2|>": "ss-1713988", "<|cite_14|>": "ss-1092531", "<|cite_16|>": "ss-1092531", "<|cite_17|>": "ss-705601", "<|cite_18|>": "ss-1080937", "<|cite_19|>": "arxiv-87852", "<|cite_20|>": "ss-1268379", "<|cite_21|>": "ss-1513134", "<|cite_22|>": "ss-800561", "<|cite_23|>": "ss-1263321", "<|multi_cite_24_1|>": "ss-1093304", "<|multi_cite_24_2|>": "ss-1347631", "<|multi_cite_24_3|>": "ss-1062717", "<|multi_cite_24_4|>": "ss-1279605", "<|cite_25|>": "ss-1713989", "<|cite_26|>": "ss-997537", "<|cite_27|>": "ss-997538", "<|cite_28|>": "ss-997539", "<|cite_29|>": "ss-879520", "<|multi_cite_30_1|>": "ss-1039334", "<|multi_cite_30_2|>": "arxiv-85483", "<|cite_31|>": "ss-1381783", "<|cite_32|>": "ss-1142110", "<|cite_33|>": "ss-997540", "<|cite_34|>": "ss-997541", "<|cite_35|>": "ss-1374458", "<|cite_36|>": "ss-1381786", "<|cite_37|>": "ss-1268379", "<|cite_38|>": "ss-997542"} |
1002.5034 | <|paper_start|> Title: Threshold rules for online sample selection
Abstract: Threshold rules for online sample selection: We consider the following sample selection problem. We observe in an online fashion a sequence of samples, each endowed by a quality. Our goal is to either select or reject each sample, so as to maximize the aggregate quality of the subsample selected so far. There is a natural trade-off here between the rate of selection and the aggregate quality of the subsample. We show that for a number of such problems extremely simple and oblivious "threshold rules" for selection achieve optimal tradeoffs between rate of selection and aggregate quality in a probabilistic sense. In some cases we show that the same threshold rule is optimal for a large class of quality distributions and is thus oblivious in a strong sense.
Introduction
Imagine a heterogeneous sequence of samples from an array of sensors,
having different utilities reflecting their accuracy, quality, or
applicability to the task at hand. We wish to discard all but the
most relevant or useful samples. Further suppose that selection is
performed online --- every time we receive a new sample we must make
an irrevocable decision to keep it or discard it. What rules can we
use for sample selection? There is a tradeoff here: while we want to
retain only the most useful samples, we may not want to be overly
selective and discard a large fraction. So we could either fix a rate
of selection (the number of examples we want to retain as a function
of the number we see) and ask for the best quality subsample, or fix a
desirable level of quality as a function of the size of the subsample
and ask to achieve this with the fewest samples rejected.
An example of online sample selection is the following ``hiring''
process that has been studied previously. Imagine that a company
wishing to grow interviews candidates to observe their qualifications,
work ethic, compatibility with the existing workforce, etc. How should
the company make hiring decisions so as to obtain the highest quality
workforce possible? As for the sensor problem, there is no single
correct answer here. Rather, a good hiring strategy depends on the rate
at which the company plans to grow---again there is a trade-off
between being overly selective and growing fast. Broder et
al. <|cite_start|> (Reference: The hiring problem and lake wobegon strategies: We introduce the hiring problem, in which a growing company continuously interviews and decides whether to hire applicants. This problem is similar in spirit but quite different from the well-studied secretary problem. Like the secretary problem, it captures fundamental aspects of decision making under uncertainty and has many possible applications. We analyze natural strategies of hiring above the current average, considering both the mean and the median averages; we call these Lake Wobegon strategies. Like the hiring problem itself, our strategies are intuitive, simple to describe, and amenable to mathematically and economically significant modifications. We demonstrate several intriguing behaviors of the two strategies. Specifically, we show dramatic differences between hiring above the mean and above the median. We also show that both strategies are intrinsically connected to the lognormal distribution, leading to only very weak concentration results, and the marked importance of the first few hires on the overall outcome.) <|cite_end|> studied this hiring problem in a simple setting
where each candidate's quality is a one-dimensional random variable
and the company wants to maximize the average or median quality of its
workforce.
In general performing such selection tasks may require complicated
rules that depend on the samples seen so far. Our main contribution is
to show that in a number of settings an extremely simple class of
rules that we call ``threshold rules'' is close to optimal on average
(within constant factors).
Specifically, suppose that each sample is endowed with a ``quality'',
which is a random variable drawn from a known distribution. We are
interested in maximizing the aggregate quality of a set of samples,
which is a numerical function of the individual qualities.
Suppose that we want to select a subset of $n$ samples out of a total
of $T$ seen. Let $Q^{\OPT}_{T,n}$ denote the maximum aggregate
quality that can be achieved by picking the best $n$ out of the $T$
samples. Our goal is to design an online selection rule that
approximates $Q^{\OPT}_{T,n}$ in expectation over the $T$ samples. We
use two measures of approximation --- the ratio of the expected
quality achieved by the offline optimum to that achieved by the online
selection rule, $E[Q^{\OPT}_{T,n}]/E[Q_{T,n}]$, and the
expectation of the ratio of the qualities of the two rules,
$E[Q^{\OPT}_{T,n}/Q_{T,n}]$. Here the expectations are taken
over the distribution from which the sample is drawn. The
approximation ratios are always at least $1$ and our goal is to show
that they are bounded from above by a constant independent of $n$. In
this case we say that the corresponding selection rule is optimal.
To put this in context, consider the setting studied by Broder et
al. <|cite_start|> (Reference: The hiring problem and lake wobegon strategies: We introduce the hiring problem, in which a growing company continuously interviews and decides whether to hire applicants. This problem is similar in spirit but quite different from the well-studied secretary problem. Like the secretary problem, it captures fundamental aspects of decision making under uncertainty and has many possible applications. We analyze natural strategies of hiring above the current average, considering both the mean and the median averages; we call these Lake Wobegon strategies. Like the hiring problem itself, our strategies are intuitive, simple to describe, and amenable to mathematically and economically significant modifications. We demonstrate several intriguing behaviors of the two strategies. Specifically, we show dramatic differences between hiring above the mean and above the median. We also show that both strategies are intrinsically connected to the lognormal distribution, leading to only very weak concentration results, and the marked importance of the first few hires on the overall outcome.) <|cite_end|>. Each sample is associated with a quality in the
range $[0,1]$, and the goal is to maximize the average quality of the
subsample we pick. Broder et al. show (implicitly) that if the quality
is distributed uniformly in $[0,1]$, a natural \emph{select above the
mean} rule is optimal to within constant factors with respect to the
optimal offline algorithm that has the same selection rate as the
rule. The same observation holds also for the
\emph{select above the median} rule. Both of these rules are adaptive
in the sense that the next selection decision depends on the samples
seen so far. In more general settings, adaptive rules of this kind can
require unbounded space to store information about samples seen
previously. For example, consider the following 2-dimensional skyline
problem: each sample is a point in a unit square; the quality of a
single point $(x,y)$ is the area of its ``shadow'' $[0,x]\times[0,y]$,
and the quality of a set of points is the area of the collective
shadows of all the points; the goal is to pick a subsample with the
largest shadow. In this case, a natural selection rule is to select a
sample if it falls out of the shadow of the previously seen
points. However, implementing this rule requires remembering on average
$O(\log n)$ samples out of $n$ samples seen <|cite_start|> (Reference: On the Average Number of Maxima in a Set of Vectors and Applications: A maximal vector of a set is one which is not less than any other vector in all components. We derive a recurrence relation for computing the average number of maximal vectors in a set of n vectors in d-space under the assumption that all (n!)^d relative orderings are equally probable. Solving the recurrence shows that the average number of maxima is O((ln n)^(d-1)) for fixed d. We use this result to construct an algorithm for finding all the maxima that has expected running time linear in n (for sets of vectors drawn under our assumptions). We then use the result to find an upper bound on the expected number of convex hull points in a random point set) <|cite_end|>. We
therefore study non-adaptive selection rules.
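To illustrate the bookkeeping that this adaptive rule entails (the sketch below is an illustration under the uniform distribution on the unit square, not code from the paper), one can maintain the set of currently maximal points, accept a point exactly when it lies outside their collective shadow, and evaluate the area of the shadow of the selection; the number of maximal points that must be remembered grows roughly logarithmically, as noted above.

\begin{verbatim}
import random

def in_shadow(p, maxima):
    # True if p = (x, y) is dominated by some currently maximal point.
    return any(x >= p[0] and y >= p[1] for (x, y) in maxima)

def update_maxima(p, maxima):
    # Drop points dominated by p, then add p to the maximal set.
    kept = [(x, y) for (x, y) in maxima if not (p[0] >= x and p[1] >= y)]
    kept.append(p)
    return kept

def shadow_area(maxima):
    # Area of the union of [0,x] x [0,y] over the maximal points
    # (a staircase sweep from right to left).
    area, prev_y = 0.0, 0.0
    for x, y in sorted(maxima, key=lambda p: -p[0]):
        if y > prev_y:
            area += x * (y - prev_y)
            prev_y = y
    return area

if __name__ == "__main__":
    random.seed(0)
    maxima, selected = [], 0
    for _ in range(1000):
        p = (random.random(), random.random())
        if not in_shadow(p, maxima):        # the "out of the shadow" rule
            selected += 1
            maxima = update_maxima(p, maxima)
    # The shadow of everything selected equals the shadow of the maxima.
    print(selected, len(maxima), round(shadow_area(maxima), 4))
\end{verbatim}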
We focus in particular on so-called ``threshold rules'' for
selection. A threshold rule specifies a criterion or ``threshold''
that a candidate must satisfy to get selected. Most crucially, the
threshold is determined \textit{a priori} given a desired selection
rate; it depends only on the number of samples picked so far and is
otherwise independent of the samples seen or picked. Threshold rules
are extremely simple oblivious rules and can, in particular, be
``hard-wired'' into the selection process. This suggests the following
natural questions. When are threshold rules optimal for online
selection problems? Does the answer depend on the desired rate of
selection? We answer these questions in three different settings in
this paper.
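Such a rule can be hard-wired as a tiny selector whose only state is the number of samples accepted so far; the particular schedule $f$ below is an arbitrary placeholder for illustration and not one of the rules analyzed later.

\begin{verbatim}
class ThresholdRule:
    # Oblivious selector: accept the next sample whose quality exceeds f(i),
    # where i is the number of samples accepted so far. Nothing else about
    # previously seen or accepted samples is stored.
    def __init__(self, f):
        self.f = f
        self.accepted = 0

    def offer(self, quality):
        # Irrevocably accept or reject a single incoming sample.
        if quality > self.f(self.accepted):
            self.accepted += 1
            return True
        return False

def as_batch_rule(f):
    # Adapter so the selector can be plugged into a simulation harness that
    # passes the whole stream at once (such as the ratio estimator above).
    def run(qualities):
        rule, kept = ThresholdRule(f), []
        for q in qualities:
            if rule.offer(q):
                kept.append(q)
        return kept
    return run

# Example: thresholds that approach 1 as more samples get accepted.
demo = as_batch_rule(lambda i: 1.0 - 1.0 / (i + 2))
print(len(demo([0.3, 0.8, 0.95, 0.7, 0.99, 0.999])))
\end{verbatim}

Since $f$ never looks at the data, the schedule can be fixed before any sample arrives, which is exactly the sense in which these rules are oblivious.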
The first setting we study is a single-dimensional-quality setting
similar to Broder et al.'s model. In this setting, we study threshold
rules of the form {\em ``Pick the next sample whose quality exceeds
$f(i)$''} where $i$ is the number of samples picked so far. We show
that for a large class of functions $f$ these rules give constant
factor approximations. Interestingly, our threshold rules are optimal
in an almost distribution-independent way. In particular, every rule
$f$ in the aforementioned class is simultaneously constant-factor
optimal with respect to any ``power law'' distribution, and the
approximation factor is independent of the parameters of the
distribution. In contrast, Broder et al.'s results hold only for the
uniform distribution\footnote{While Broder et al.'s result can be
extended to any arbitrary distribution via a standard transformation
from one space to another, the resulting selection rule becomes
distribution dependent, e.g., ``select above the mean'' is no longer
``select above the mean'' w.r.t. the other distribution upon applying
the transformation.}.
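The following sketch is again only illustrative (the Pareto-type distribution and the particular growing threshold schedule are assumptions for the example and are not the class of rules or distributions characterized in this paper), but it shows the kind of empirical comparison the above claims refer to: a fixed schedule $f$ run against the offline optimum at the same selection rate under a power-law quality distribution.

\begin{verbatim}
import random

def pareto(alpha=2.0):
    # Quality with a power-law tail: Pr[q > x] = x^(-alpha) for x >= 1.
    u = 1.0 - random.random()              # uniform in (0, 1]
    return u ** (-1.0 / alpha)

def threshold_select(qualities, f):
    kept = []
    for q in qualities:
        if q > f(len(kept)):
            kept.append(q)
    return kept

def compare(T=5000, trials=100, f=lambda i: (i + 1) ** 0.5):
    on = off = 0.0
    for _ in range(trials):
        qs = [pareto() for _ in range(T)]
        picked = threshold_select(qs, f)
        n = max(len(picked), 1)
        best = sorted(qs, reverse=True)[:n]
        on += sum(picked) / n if picked else 0.0
        off += sum(best) / n
    print("ratio of expected average qualities:", off / on)

if __name__ == "__main__":
    compare()
\end{verbatim}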
In the second setting, samples are nodes in a rooted infinite-depth
tree. Each node is said to cover all the nodes on the unique path from
the root to itself. The quality of a collection of nodes is the total
number of distinct nodes that they collectively cover. This is
different from the first setting in that the quality defines only a
partial order over the samples. Once again, we study threshold rules
of the form {\em ``Pick the next sample whose quality exceeds
$f(i)$''} and show that they are constant factor optimal.
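A minimal sketch of this setting is given below; the infinite binary tree, the way nodes are sampled (a random depth followed by a uniformly random root-to-node path), the threshold schedule, and the simple deepest-first offline baseline are all assumptions made only for the illustration. A node is encoded by its path from the root, its individual quality is the number of nodes it covers (its depth plus one), and the quality of a selection is the number of distinct nodes covered.

\begin{verbatim}
import random

def random_node(max_depth=20):
    # A node of an infinite binary tree, encoded by the tuple of left/right
    # choices on the path from the root (assumed sampling model).
    depth = random.randint(0, max_depth)
    return tuple(random.randint(0, 1) for _ in range(depth))

def covered(node):
    # All nodes on the path from the root to `node`, inclusive.
    return {node[:k] for k in range(len(node) + 1)}

def coverage(nodes):
    # Number of distinct nodes covered by the whole selection.
    seen = set()
    for v in nodes:
        seen |= covered(v)
    return len(seen)

def threshold_select(stream, f):
    # Pick the next node whose individual quality (depth + 1) exceeds f(i).
    kept = []
    for v in stream:
        if len(v) + 1 > f(len(kept)):
            kept.append(v)
    return kept

if __name__ == "__main__":
    random.seed(1)
    stream = [random_node() for _ in range(5000)]
    picked = threshold_select(stream, f=lambda i: 5 + i ** 0.5)
    deepest = sorted(stream, key=len, reverse=True)[:len(picked)]  # offline baseline
    print(len(picked), coverage(picked), coverage(deepest))
\end{verbatim}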
Our third setting is a generalization of the skyline problem described
previously. Specifically, consider a domain $X$ with a probability
measure $\mu$ and a partial ordering $\prec$ over it. For an element
$x\in X$, the ``shadow'' or ``downward closure'' of $x$ is the set of
all the points that it dominates in this partial ordering, $\calD(x)
= \{y: y\prec x\}$; likewise the shadow of a subset $S\subseteq X$ is
$\calD(S) = \cup_{x\in S} \calD(x)$. Once again, as in the second
setting, we can define the coverage of a single sample to be the
measure of all the points in its shadow. However, unlike the tree
setting, here it is usually easy to obtain a constant factor
approximation to coverage---the maximum coverage achievable is $1$
(i.e. the measure of the entire universe), whereas in many cases
(e.g. for the uniform distribution over the unit square) a single
random sample can in expectation obtain constant coverage. We
therefore measure the quality of a subsample $S\subset X$ by its
``gap'',
$\Gap(S)= 1 - \mu(\calD(S))$. In this setting, rules that place a
threshold on the quality of the next sample to be selected are not
constant-factor optimal. Instead, we study threshold rules of the form
{\em ``Pick the next sample $x$ for which $\mu(\calU(x))$ is at most
$f(i)$''}, where $\calU(x) = \{y: x\prec y\}$ is the set of all
elements that dominate $x$, or the ``upward closure'' of $x$, and show
that these rules obtain constant factor approximations.
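For the uniform measure on the unit square this rule is easy to state explicitly, and the sketch below (an illustrative special case; the schedule $f$ is an arbitrary placeholder) makes it concrete: for a point $(a,b)$ the measure of its upward closure is $(1-a)(1-b)$, the point is accepted when this measure is at most $f(i)$, and the gap of the selection is one minus the area of its collective shadow.

\begin{verbatim}
import random

def upward_measure(p):
    # mu(U(x)) for x = (a, b) under the uniform measure on the unit square.
    return (1.0 - p[0]) * (1.0 - p[1])

def select_by_upward_closure(stream, f):
    # Pick the next sample whose upward-closure measure is at most f(i).
    kept = []
    for p in stream:
        if upward_measure(p) <= f(len(kept)):
            kept.append(p)
    return kept

def shadow_area(points):
    # Area of the union of [0,x] x [0,y] over the selected points.
    area, prev_y = 0.0, 0.0
    for x, y in sorted(points, key=lambda p: -p[0]):
        if y > prev_y:
            area += x * (y - prev_y)
            prev_y = y
    return area

if __name__ == "__main__":
    random.seed(0)
    stream = [(random.random(), random.random()) for _ in range(20000)]
    picked = select_by_upward_closure(stream, f=lambda i: 1.0 / (i + 2))
    print("selected:", len(picked), "gap:", round(1.0 - shadow_area(picked), 6))
\end{verbatim}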
\subsection{Related work}
As mentioned earlier, our work is inspired by and extends the work of
Broder et al. <|cite_start|> (Reference: The hiring problem and lake wobegon strategies: We introduce the hiring problem, in which a growing company continuously interviews and decides whether to hire applicants. This problem is similar in spirit but quite different from the well-studied secretary problem. Like the secretary problem, it captures fundamental aspects of decision making under uncertainty and has many possible applications. We analyze natural strategies of hiring above the current average, considering both the mean and the median averages; we call these Lake Wobegon strategies. Like the hiring problem itself, our strategies are intuitive, simple to describe, and amenable to mathematically and economically significant modifications. We demonstrate several intriguing behaviors of the two strategies. Specifically, we show dramatic differences between hiring above the mean and above the median. We also show that both strategies are intrinsically connected to the lognormal distribution, leading to only very weak concentration results, and the marked importance of the first few hires on the overall outcome.) <|cite_end|>. Broder et al. consider a special case of
the one-dimensional selection problem described above. They assume
that the quality of a sample is distributed uniformly over the
interval $(0,1)$; this assumption is not without loss of generality.
They analyze two adaptive selection rules---\emph{select above the
mean}, and \emph{select above the median}---and show that both are
constant-factor optimal, although they lead to different growth
rates. These rules are adaptive in the sense that the next selection
decision depends on the quality of the samples accepted so far. Note
that the \emph{select above the median} rule requires the algorithm to
remember all of the samples accepted so far, and is therefore a
computationally intensive rule. Even the relatively simpler
\emph{select above the mean} rule requires remembering the current
mean and number so far accepted. In contrast we show
(Section~\ref{sec:one-dim}) that there exists a class of simple
non-adaptive selection strategies that also achieves optimality and
includes rules with selection rates equal to those of the ones studied
by Broder et al. These strategies make decisions based only on the
number hired so far. Furthermore we extend these results to more
general coverage problems.
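The contrast in state between the two families of rules can be made explicit with a small sketch (illustrative only; the initial seed quality for the mean-based rule and the particular threshold schedule are assumptions): the adaptive rule has to carry the running mean of everything it has accepted, whereas the non-adaptive rule only carries a counter.

\begin{verbatim}
import random

class SelectAboveMean:
    # Adaptive rule: accept a sample iff its quality exceeds the current
    # mean quality of the samples accepted so far (seeded with one sample).
    def __init__(self, first=0.5):
        self.total, self.count = first, 1

    def offer(self, q):
        if q > self.total / self.count:
            self.total += q
            self.count += 1
            return True
        return False

class CountOnlyThreshold:
    # Non-adaptive rule: accept iff quality exceeds f(i), i = number accepted.
    def __init__(self, f):
        self.f, self.count = f, 0

    def offer(self, q):
        if q > self.f(self.count):
            self.count += 1
            return True
        return False

if __name__ == "__main__":
    random.seed(0)
    adaptive = SelectAboveMean()
    oblivious = CountOnlyThreshold(lambda i: 1.0 - 1.0 / (i + 2))
    stream = [random.random() for _ in range(10000)]
    print(sum(adaptive.offer(q) for q in stream),
          sum(oblivious.offer(q) for q in stream))
\end{verbatim}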
Our third setting is closely related to the skyline problem that has
been studied extensively in online settings by the database community
(see, for example, <|cite_start|> (Reference: Computing all skyline probabilities for uncertain data: Skyline computation is widely used in multi-criteria decision making. As research in uncertain databases draws increasing attention, skyline queries with uncertain data have also been studied, e.g. probabilistic skylines. The previous work requires "thresholding" for its efficiency -- the efficiency relies on the assumption that points with skyline probabilities below a certain threshold can be ignored. But there are situations where "thresholding" is not desirable -- low probability events cannot be ignored when their consequences are significant. In such cases it is necessary to compute skyline probabilities of all data items. We provide the first algorithm for this problem whose worst-case time complexity is sub-quadratic. The techniques we use are interesting in their own right, as they rely on a space partitioning technique combined with using the existing dominance counting algorithm. The effectiveness of our algorithm is experimentally verified.) <|cite_end|> and references
therein). Kung et al. <|cite_start|> (Reference: On Finding the Maxima of a Set of Vectors: H. T. KUNG Carnegie-Mellon University, Pittsburgh, Pennsylvania; F. LUCCIO Università di Pisa, Pisa, Italy; F. P. PREPARATA University of Illinois, Urbana, Illinois. ABSTRACT. Let U1, U2, ..., Ud be totally ordered sets and let V be a set of n d-dimensional vectors in U1 x U2 x ... x Ud. A partial ordering is defined on V in a natural way. The problem of finding all maximal elements of V with respect to the partial ordering is considered. The computational complexity of the problem is defined to be the number of required comparisons of two components and is denoted by Cd(n). It is trivial that C1(n) = n - 1 and Cd(n) ≥ ⌈log2 n!⌉ for d ≥ 2) <|cite_end|> gave an offline
divide-and-conquer algorithm that finds the skyline of a given set of
vectors in $d$-dimensional space. Their algorithm uses $O(n \log_2 n)$
comparisons of vector components when $d = 2,3$ and $O(n (\log_2
n)^{d-2})$ when $d \geq 4$. The implementation of a \emph{Skyline}
query for database systems was recently introduced by <|cite_start|> (Reference: The skyline operator: We propose to extend database systems by a Skyline operation. This operation filters out a set of interesting points from a potentially large set of data points. A point is interesting if it is not dominated by any other point. For example, a hotel might be interesting for somebody traveling to Nassau if no other hotel is both cheaper and closer to the beach. We show how SSL can be extended to pose Skyline queries, present and evaluate alternative algorithms to implement the Skyline operation, and show how this operation can be combined with other database operations, e.g., join.) <|cite_end|>. The closest in spirit to our work is <|cite_start|> (Reference: Probabilistic Skyline Operator over Sliding Windows: ) <|cite_end|>. They considered a stream of uncertain
objects to model uncertainty in measurement. Each object has an
associated set of possible instances and they are interested in the
objects whose probability of being dominated by another object is at
most some $q$ supplied by the database user.
Online sample selection is closely related to secretary problems,
however there are some key differences. In secretary problems (see,
e.g., <|cite_start|> (Reference: Who solved the secretary problem.: In Martin Gardner's Mathematical Games column in the February 1960 issue of Scientific American, there appeared a simple problem that has come to be known today as the Secretary Problem, or the Marriage Problem. It has since been taken up and developed by many eminent probabilists and statisticians and has been extended and generalized in many different directions so that now one can say that it constitutes a "field" within mathematics-probability-optimization. The object of this article is partly historical (to give a fresh view of the origins of the problem, touching upon Cayley and Kepler), partly review of the field (listing the subfields of recent interest), partly serious (to answer the question posed in the title), and partly entertainment. The contents of this paper were first given as the Allen T. Craig lecture at the University of Iowa, 1988.) <|cite_end|> <|cite_start|> (Reference: The secretary problem and its extensions: a review: Summary The development of what has come to be known as the secretary problem is traced from its origins in the early 1960's. All published work to date on the problem and its extensions is reviewed.) <|cite_end|> <|cite_start|> (Reference: Matroid Secretary Problems: We define a generalization of the classical secretary problem called the matroid secretary problem. In this problem, the elements of a matroid are presented to an online algorithm in uniformly random order. When an element arrives, the algorithm observes its value and must make an irrevocable decision whether or not to accept it. The accepted elements must form an independent set, and the objective is to maximize the combined value of these elements. We present an O(log k)-competitive algorithm for general matroids (where k is the rank of the matroid), and constant-competitive algorithms for several special cases including graphic matroids, truncated partition matroids, and bounded degree transversal matroids. We leave as an open question the existence of constant-competitive algorithms for general matroids. Our results have applications in welfare-maximizing online mechanism design for domains in which the sets of simultaneously satisfiable agents form a matroid.) <|cite_end|>) there is typically a fixed bound on
the desired number of hires. In our setting the selection process is
ongoing and we must pick more and more samples as time passes. This
makes the tradeoff between the rate of hiring and the rate of
improvement of quality interesting.
Finally, while our goal is to analyze a class of online algorithms in
comparison to the optimal offline algorithms, our approach is
different from the competitive analysis of online algorithms <|cite_start|> (Reference: Online computation and competitive analysis: Preface 1. Introduction to competitive analysis: the list accessing problem 2. Introduction to randomized algorithms: the list accessing problem 3. Paging: deterministic algorithms 4. Paging: randomized algorithms 5. Alternative models for paging: beyond pure competitive analysis 6. Game theoretic foundations 7. Request - answer games 8. Competitive analysis and zero-sum games 9. Metrical task systems 10. The k-server problem 11. Randomized k-server algorithms 12. Load-balancing 13. Call admission and circuit-routing 14. Search, trading and portfolio selection 15. Competitive analysis and decision making under uncertainty Appendices Bibliography Index.) <|cite_end|>. In competitive analysis the goal is to perform nearly as
well as the optimal offline algorithm for {\em any arbitrary} sequence
of input. In contrast, we bound the {\em expected} competitive ratio
of the rules we study. Furthermore, a crucial aspect of the strategies
that we study is that not only are they online, but they are also
non-adaptive or oblivious. That is, the current acceptance threshold
does not depend on the samples seen by the algorithm so far. In this
sense, our model is closer in spirit to work on oblivious algorithms
(see, e.g., <|cite_start|> (Reference: Universal Approximations for TSP, Steiner Tree, and Set Cover: We introduce a notion of universality in the context of optimization problems with partial information. Universality is a framework for dealing with uncertainty by guaranteeing a certain quality of goodness for all possible completions of the partial information set. Universal variants of optimization problems can be defined that are both natural and well-motivated. We consider universal versions of three classical problems: TSP, Steiner Tree and Set Cover.We present a polynomial-time algorithm to find a universal tour on a given metric space over n vertices such that for any subset of the vertices, the sub-tour induced by the subset is within O(log4n/log log n) of an optimal tour for the subset. Similarly, we show that given a metric space over n vertices and a root vertex, we can find a universal spanning tree such that for any subset of vertices containing the root, the sub-tree induced by the subset is within O(log4n/log log n) of an optimal Steiner tree for the subset. Our algorithms rely on a new notion of sparse partitions, that may be of independent interest. For the special case of doubling metrics, which includes both constant-dimensional Euclidean and growth-restricted metrics, our algorithms achieve an O(log n) upper bound. We complement our results for the universal Steiner tree problem with a lower bound of Ω(log n/log log n) that holds even for n vertices on the plane. We also show that a slight generalization of the universal Steiner Tree problem is coNP-hard and present nearly tight upper and lower bounds for a universal version of Set Cover.) <|cite_end|> <|cite_start|> (Reference: Optimal Oblivious Routing in Polynomial Time: A recent seminal result of Racke is that for any network there is an oblivious routing algorithm with a polylog competitive ratio with respect to congestion. Unfortunately, Racke's construction is not polynomial time. We give a polynomial time construction that guarantee's Racke's bounds, and more generally gives the true optimal ratio for any network.) <|cite_end|> <|cite_start|> (Reference: Oblivious network design: Consider the following network design problem: given a network <i>G = (V, E)</i>, source-sink pairs {<i>s</i><inf><i>i</i></inf>, <i>t</i><inf><i>i</i></inf>} arrive and desire to send a unit of flow between themselves. The cost of the routing is this: if edge <i>e</i> carries a total of <i>f</i><inf><i>e</i></inf> flow (from all the terminal pairs), the cost is given by Σ <inf><i>e</i></inf><i>l</i>(<i>f</i><inf><i>e</i></inf>), where <i>l</i> is some concave cost function; the goal is to minimize the total cost incurred. However, we want the routing to be <i>oblivious</i>: when terminal pair {<i>s</i><inf><i>i</i></inf>, <i>t</i><inf><i>i</i></inf>} makes its routing decisions, it does not know the current flow on the edges of the network, nor the identity of the other pairs in the system. Moreover, it does not even know the identity of the function <i>l</i>, merely knowing that <i>l</i> is a concave function of the total flow on the edge. How should it (obliviously) route its one unit of flow? Can we get competitive algorithms for this problem?In this paper, we develop a framework to model <i>oblivious network design</i> problems (of which the above problem is a special case), and give algorithms with poly-logarithmic competitive ratio for problems in this framework (and hence for this problem). 
Abstractly, given a problem like the one above, the solution is a multicommodity flow producing a "load" on each edge of <i>L</i><inf><i>e</i></inf> = <i>l</i>(<i>f</i><inf>1</inf>(<i>e</i>),<i>f</i><inf>2</inf>(<i>e</i>), ..., <i>f</i><inf><i>k</i></inf>(<i>e</i>)), and the total cost is given by an "aggregation function" agg (<i>L</i><inf><i>e</i>1</inf>,...,<i>L</i><inf><i>em</i></inf>) of the loads of all edges. Our goal is to develop oblivious algorithms that approximately minimize the total cost of the routing, knowing the aggregation function agg, but merely knowing that <i>l</i> lies in some class C, and having no other information about the current state of the network. Hence we want algorithms that are simultaneously "function-oblivious" as well as "traffic-oblivious".The aggregation functions we consider are the max and σ objective functions, which correspond to the well-known measures of <i>congestion</i> and <i>total cost</i> of a network; in this paper, we prove the following:• If the aggregation function is Σ, we give an oblivious algorithm with <i>O</i>(log<sup>2</sup> <i>n</i>) competitive ratio whenever the load function <i>l</i> is in the class of <i>monotone sub-additive functions</i>. (Recall that our algorithm is also "function-oblivious"; it works whenever each edge has a load function <i>l</i> in the class.)• For the case when the aggregation function is max, we give an oblivious algorithm with <i>O</i>(log<sup>2</sup> <i>n</i> log log <i>n</i>) competitive ratio, when the load function <i>l</i> is a <i>norm;</i> we also show that such a competitive ratio is not possible for general sub-additive functions.These are the first such general results about oblivious algorithms for network design problems, and we hope the ideas and techniques will lead to more and improved results in this area.) <|cite_end|>). Oblivious
algorithms are highly desirable in practical settings because the
rules can be hard-wired into the selection process, making them
very easy
to implement. The caveat is, of course, that for many
optimization problems oblivious algorithms do not provide good
approximations. Surprisingly, we show that in many scenarios related
to sample selection, obliviousness has only a small cost. <|paper_end|> | [
"<|reference_start|> Probabilistic Skyline Operator over Sliding Windows: <|reference_end|>",
"<|reference_start|> Who solved the secretary problem.: In Martin Gardner's Mathematical Games column in the February 1960 issue of Scientific American, there appeared a simple problem that has come to be known today as the Secretary Problem, or the Marriage Problem. It has since been taken up and developed by many eminent probabilists and statisticians and has been extended and generalized in many different directions so that now one can say that it constitutes a \"field\" within mathematics-probability-optimization. The object of this article is partly historical (to give a fresh view of the origins of the problem, touching upon Cayley and Kepler), partly review of the field (listing the subfields of recent interest), partly serious (to answer the question posed in the title), and partly entertainment. The contents of this paper were first given as the Allen T. Craig lecture at the University of Iowa, 1988. <|reference_end|>",
"<|reference_start|> Universal Approximations for TSP, Steiner Tree, and Set Cover: We introduce a notion of universality in the context of optimization problems with partial information. Universality is a framework for dealing with uncertainty by guaranteeing a certain quality of goodness for all possible completions of the partial information set. Universal variants of optimization problems can be defined that are both natural and well-motivated. We consider universal versions of three classical problems: TSP, Steiner Tree and Set Cover.We present a polynomial-time algorithm to find a universal tour on a given metric space over n vertices such that for any subset of the vertices, the sub-tour induced by the subset is within O(log4n/log log n) of an optimal tour for the subset. Similarly, we show that given a metric space over n vertices and a root vertex, we can find a universal spanning tree such that for any subset of vertices containing the root, the sub-tree induced by the subset is within O(log4n/log log n) of an optimal Steiner tree for the subset. Our algorithms rely on a new notion of sparse partitions, that may be of independent interest. For the special case of doubling metrics, which includes both constant-dimensional Euclidean and growth-restricted metrics, our algorithms achieve an O(log n) upper bound. We complement our results for the universal Steiner tree problem with a lower bound of Ω(log n/log log n) that holds even for n vertices on the plane. We also show that a slight generalization of the universal Steiner Tree problem is coNP-hard and present nearly tight upper and lower bounds for a universal version of Set Cover. <|reference_end|>",
"<|reference_start|> Optimal Oblivious Routing in Polynomial Time: A recent seminal result of Racke is that for any network there is an oblivious routing algorithm with a polylog competitive ratio with respect to congestion. Unfortunately, Racke's construction is not polynomial time. We give a polynomial time construction that guarantee's Racke's bounds, and more generally gives the true optimal ratio for any network. <|reference_end|>"
] | [
7,
8,
12,
13
] | {"<|cite_1|>": "ss-1933497", "<|cite_2|>": "ss-1933497", "<|cite_3|>": "ss-833473", "<|cite_4|>": "ss-1933497", "<|cite_5|>": "ss-922371", "<|cite_6|>": "ss-1130359", "<|cite_7|>": "ss-1285567", "<|cite_8|>": "ss-1418231", "<|multi_cite_9_1|>": "ss-825167", "<|multi_cite_9_2|>": "ss-899089", "<|multi_cite_9_3|>": "ss-1296859", "<|cite_10|>": "ss-917488", "<|multi_cite_11_1|>": "ss-778233", "<|multi_cite_11_2|>": "ss-1365151", "<|multi_cite_11_3|>": "ss-1185189"} |
2003.13045 | <|paper_start|> Title: Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation
Abstract: Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation: Unsupervised learning of optical flow, which leverages the supervision from view synthesis, has emerged as a promising alternative to supervised methods. However, the objective of unsupervised learning is likely to be unreliable in challenging scenes. In this work, we present a framework that uses more reliable supervision from transformations. It simply twists the general unsupervised learning pipeline by running another forward pass with transformed data from augmentation, and uses the transformed predictions of the original data as the self-supervision signal. Besides, we further introduce a lightweight multi-frame network with a highly shared flow decoder. Our method consistently achieves a leap in performance on several benchmarks, with the best accuracy among deep unsupervised methods. Also, our method achieves results competitive with recent fully supervised methods while using far fewer parameters.
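As a rough illustration of the idea sketched in the abstract (not the authors' actual architecture, loss, or training code), the snippet below uses a horizontal flip as the transformation and a trivial stand-in for the flow network: the flipped prediction on the original pair serves as a fixed target for the prediction on the flipped pair.

\begin{verbatim}
import numpy as np

def predict_flow(frame1, frame2):
    # Hypothetical stand-in for a flow network's forward pass; a real
    # implementation would be a CNN. Returns an (H, W, 2) flow field.
    diff = (frame2 - frame1).mean(axis=2)
    return np.stack([diff, np.zeros_like(diff)], axis=-1)

def flip_image(img):
    # Horizontal flip, used here as the augmentation transformation.
    return img[:, ::-1].copy()

def flip_flow(flow):
    # The same flip applied to a flow field: mirror it and negate the
    # horizontal component so vectors stay consistent with flipped frames.
    out = flow[:, ::-1].copy()
    out[..., 0] *= -1.0
    return out

def self_supervision_loss(frame1, frame2):
    # The transformed prediction on the original pair is treated as a fixed
    # target (no gradient in practice) for the prediction on the
    # transformed pair.
    target = flip_flow(predict_flow(frame1, frame2))
    student = predict_flow(flip_image(frame1), flip_image(frame2))
    return float(np.mean(np.abs(student - target)))

if __name__ == "__main__":
    f1 = np.random.rand(32, 48, 3).astype(np.float32)
    f2 = np.random.rand(32, 48, 3).astype(np.float32)
    print("self-supervision loss:", self_supervision_loss(f1, f2))
\end{verbatim}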
Introduction
\label{sec:1}
Optical flow, as a motion description of images, has been widely used in high-level video tasks <|cite_start|> (Reference: Deep Flow-Guided Video Inpainting: Video inpainting, which aims at filling in missing regions of a video, remains challenging due to the difficulty of preserving the precise spatial and temporal coherence of video contents. In this work we propose a novel flow-guided video inpainting approach. Rather than filling in the RGB pixels of each frame directly, we consider video inpainting as a pixel propagation problem. We first synthesize a spatially and temporally coherent optical flow field across video frames using a newly designed Deep Flow Completion network. Then the synthesized flow field is used to guide the propagation of pixels to fill up the missing regions in the video. Specifically, the Deep Flow Completion network follows a coarse-to-fine refinement to complete the flow fields, while their quality is further improved by hard flow example mining. Following the guide of the completed flow, the missing video regions can be filled up precisely. Our method is evaluated on DAVIS and YouTube-VOS datasets qualitatively and quantitatively, achieving the state-of-the-art performance in terms of inpainting quality and speed.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Moving Object Detection via Contextual Information Separation: We propose an adversarial contextual model for detecting moving objects in images. A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible. The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning. Although our method requires no supervision whatsoever, it outperforms several methods that are pre-trained on large annotated datasets. Our model can be thought of as a generalization of classical variational generative region-based segmentation, but in a way that avoids explicit regularization or solution of partial differential equations at run-time.) <|cite_end|> <|cite_start|> (Reference: Deep Feature Flow for Video Recognition: Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition.) <|cite_end|> <|cite_start|> (Reference: SegFlow: Joint Learning for Video Object Segmentation and Optical Flow: This paper proposes an end-to-end trainable network, SegFlow, for simultaneously predicting pixel-wise object segmentation and optical flow in videos. The proposed SegFlow has two branches where useful information of object segmentation and optical flow is propagated bidirectionally in a unified framework. 
The segmentation branch is based on a fully convolutional network, which has been proved effective in image segmentation task, and the optical flow branch takes advantage of the FlowNet model. The unified framework is trained iteratively offline to learn a generic notion, and fine-tuned online for specific objects. Extensive experiments on both the video object segmentation and optical flow datasets demonstrate that introducing optical flow improves the performance of segmentation and vice versa, against the state-of-the-art algorithms.) <|cite_end|> <|cite_start|> (Reference: Coherent Online Video Style Transfer: Training a feed-forward network for fast neural style transfer of images is proven to be successful. However, the naive extension to process video frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real-time. Two key ideas include an efficient network by incorporating short-term coherence, and propagating short-term coherence to long-term, which ensures the consistency over larger period of time. Our network can incorporate different image stylization networks. We show that the proposed method clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitudes faster in runtime.) <|cite_end|> <|cite_start|> (Reference: Semantic Video Segmentation by Gated Recurrent Flow Propagation: Semantic video segmentation is challenging due to the sheer amount of data that needs to be processed and labeled in order to construct accurate models. In this paper we present a deep, end-to-end trainable methodology to video segmentation that is capable of leveraging information present in unlabeled data in order to improve semantic estimates. Our model combines a convolutional architecture and a spatio-temporal transformer recurrent layer that are able to temporally propagate labeling information by means of optical flow, adaptively gated based on its locally estimated uncertainty. The flow, the recognition and the gated temporal propagation modules can be trained jointly, end-to-end. The temporal, gated recurrent flow propagation component of our model can be plugged into any static semantic segmentation architecture and turn it into a weakly supervised video processing one. Our extensive experiments in the challenging CityScapes and Camvid datasets, and based on multiple deep architectures, indicate that the resulting model can leverage unlabeled temporal frames, next to a labeled one, in order to improve both the video segmentation accuracy and the consistency of its temporal labeling, at no additional annotation cost and with little extra computation.) <|cite_end|>.
Benefitting from the growth of deep learning, learning-based optical flow methods <|cite_start|> (Reference: Models matter, so does training: An empirical study of cnns for optical flow estimation.: We investigate two crucial and closely-related aspects of CNNs for optical flow estimation: models and training. First, we design a compact but effective CNN model, called PWC-Net, according to simple and well-established principles: pyramidal processing, warping, and cost volume processing. PWC-Net is 17 times smaller in size, 2 times faster in inference, and 11 percent more accurate on Sintel final than the recent FlowNet2 model. It is the winning entry in the optical flow competition of the robust vision challenge. Next, we experimentally analyze the sources of our performance gains. In particular, we use the same training procedure for PWC-Net to retrain FlowNetC, a sub-network of FlowNet2. The retrained FlowNetC is 56 percent more accurate on Sintel final than the previously trained one and even 5 percent more accurate than the FlowNet2 model. We further improve the training procedure and increase the accuracy of PWC-Net on Sintel by 10 percent and on KITTI 2012 and 2015 by 20 percent. Our newly trained model parameters and training protocols are available on https://github.com/NVlabs/PWC-Net.) <|cite_end|> <|cite_start|> (Reference: Continual Occlusion and Optical Flow Estimation: ) <|cite_end|> with considerable accuracy and efficient inference are gradually replacing the classical variational-based approaches <|cite_start|> (Reference: A Fusion Approach for Multi-Frame Optical Flow Estimation: To date, top-performing optical flow estimation methods only take pairs of consecutive frames into account. While elegant and appealing, the idea of using more than two frames has not yet produced state-of-the-art results. We present a simple, yet effective fusion approach for multi-frame optical flow that benefits from longer-term temporal cues. Our method first warps the optical flow from previous frames to the current, thereby yielding multiple plausible estimates. It then fuses the complementary information carried by these estimates into a new optical flow field. At the time of writing, our method ranks first among published results in the MPI Sintel and KITTI 2015 benchmarks. Our models will be available on https://github.com/NVlabs/PWC-Net.) <|cite_end|> <|cite_start|> (Reference: ProFlow: Learning to Predict Optical Flow: Temporal coherence is a valuable source of information in the context of optical flow estimation. However, finding a suitable motion model to leverage this information is a non-trivial task. In this paper we propose an unsupervised online learning approach based on a convolutional neural network (CNN) that estimates such a motion model individually for each frame. By relating forward and backward motion these learned models not only allow to infer valuable motion information based on the backward flow, they also help to improve the performance at occlusions, where a reliable prediction is particularly useful. Moreover, our learned models are spatially variant and hence allow to estimate non-rigid motion per construction. This, in turns, allows to overcome the major limitation of recent rigidity-based approaches that seek to improve the estimation by incorporating additional stereo/SfM constraints. Experiments demonstrate the usefulness of our new approach. 
They not only show a consistent improvement of up to 27% for all major benchmarks (KITTI 2012, KITTI 2015, MPI Sintel) compared to a baseline without prediction, they also show top results for the MPI Sintel benchmark -- the one of the three benchmarks that contains the largest amount of non-rigid motion.) <|cite_end|> <|cite_start|> (Reference: Optical Flow in Mostly Rigid Scenes: The optical flow of natural scenes is a combination of the motion of the observer and the independent motion of objects. Existing algorithms typically focus on either recovering motion and structure under the assumption of a purely static world or optical flow for general unconstrained scenes. We combine these approaches in an optical flow algorithm that estimates an explicit segmentation of moving objects from appearance and physical constraints. In static regions we take advantage of strong constraints to jointly estimate the camera motion and the 3D structure of the scene over multiple frames. This allows us to also regularize the structure instead of the motion. Our formulation uses a Plane+Parallax framework, which works even under small baselines, and reduces the motion estimation to a one-dimensional search problem, resulting in more accurate estimation. In moving regions the flow is treated as unconstrained, and computed with an existing optical flow method. The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art results on both the MPI-Sintel and KITTI-2015 benchmarks.) <|cite_end|>. However, it is tough to collect the ground truth of dense optical flow in reality, which makes most supervised methods heavily dependent on the large-scale synthetic datasets <|cite_start|> (Reference: FlowNet: Learning Optical Flow with Convolutional Networks: Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.) <|cite_end|> <|cite_start|> (Reference: A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation: Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. 
By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.) <|cite_end|>, and the domain difference leads to a degradation in performance when the model is transferred to the real world.
In another point of view, many works proposed to learn optical flow in an unsupervised way <|cite_start|> (Reference: {Unsupervised deep learning for optical flow estimation: Recent work has shown that optical flow estimation can be formulated as a supervised learning problem. Moreover, convolutional networks have been successfully applied to this task. However, supervised flow learning is obfuscated by the shortage of labeled training data. As a consequence, existing methods have to turn to large synthetic datasets for easily computer generated ground truth. In this work, we explore if a deep network for flow estimation can be trained without supervision. Using image warping by the estimated flow, we devise a simple yet effective unsupervised method for learning optical flow, by directly minimizing photometric consistency. We demonstrate that a flow network can be trained from end-to-end using our unsupervised scheme. In some cases, our results come tantalizingly close to the performance of methods trained with full supervision.) <|cite_end|> <|cite_start|> (Reference: UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss: In the era of end-to-end deep learning, many advances in computer vision are driven by large amounts of labeled data. In the optical flow setting, however, obtaining dense per-pixel ground truth for real scenes is difficult and thus such data is rare. Therefore, recent end-to-end convolutional networks for optical flow rely on synthetic datasets for supervision, but the domain mismatch between training and test scenarios continues to be a challenge. Inspired by classical energy-based optical flow methods, we design an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow. On the KITTI benchmarks, our unsupervised approach outperforms previous unsupervised deep networks by a large margin, and is even more accurate than similar supervised methods trained on synthetic datasets alone. By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth.) <|cite_end|> <|cite_start|> (Reference: Occlusion Aware Unsupervised Learning of Optical Flow: It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.) <|cite_end|>, in which the ground truth is not necessary. These works aim to train networks with objective from view synthesis <|cite_start|> (Reference: Unsupervised Learning of Depth and Ego-Motion from Video: We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. 
We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.) <|cite_end|> <|cite_start|> (Reference: GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose: We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and ego-motion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.) <|cite_end|>, \ie optimizing the difference between reference images and the flow warped target images. This objective is based on the assumption of brightness constancy, which will be violated for challenging scenes, \eg with extreme brightness or partial occlusion. Hence, proper regularization such as occlusion handling <|cite_start|> (Reference: Occlusion Aware Unsupervised Learning of Optical Flow: It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Learning of Multi-Frame Optical Flow with Occlusions: ) <|cite_end|> or local smooth <|cite_start|> (Reference: UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss: In the era of end-to-end deep learning, many advances in computer vision are driven by large amounts of labeled data. In the optical flow setting, however, obtaining dense per-pixel ground truth for real scenes is difficult and thus such data is rare. Therefore, recent end-to-end convolutional networks for optical flow rely on synthetic datasets for supervision, but the domain mismatch between training and test scenarios continues to be a challenge. 
Inspired by classical energy-based optical flow methods, we design an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow. On the KITTI benchmarks, our unsupervised approach outperforms previous unsupervised deep networks by a large margin, and is even more accurate than similar supervised methods trained on synthetic datasets alone. By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth.) <|cite_end|> is required. Recent studies have focused on more complicate regularizations such as 3D geometry constraints <|cite_start|> (Reference: Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation: We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems.) <|cite_end|> <|cite_start|> (Reference: Supplementary Materials for UnOS: Unified Unsupervised Optical-flow and Stereo-depth Estimation by Watching Videos: The whole system was implemented using Tensorflow [1]. When minimizing Eq. 5 in the RDVO module, we used the 3D representation described as in the last line of Eq. 6. When calculating the rigid-aware flow consistency loss term in Eq. 9, the 2D representation in the first line of Eq. 6 was adopted. In the first stage of training, the smoothness loss of optical flow was applied across the entire image, i.e. Lfs = Ls(Ft→s, 1, 2). In the third stage of training, the smoothness loss of optical flow was only applied on the moving region, i.e. Lfs = Ls(Ft→s, 1−Mt, 2)) <|cite_end|> <|cite_start|> (Reference: Unsupervised Learning of Scene Flow Estimation Fusing with Local Rigidity: Scene flow estimation in the dynamic scene remains a challenging task. Computing scene flow by a combination of 2D optical flow and depth has shown to be considerably faster with acceptable performance. 
In this work, we present a unified framework for joint unsupervised learning of stereo depth and optical flow with explicit local rigidity to estimate scene flow. We estimate camera motion directly by a Perspective-n-Point method from the optical flow and depth predictions, with RANSAC outlier rejection scheme. In order to disambiguate the object motion and the camera motion in the scene, we distinguish the rigid region by the re-project error and the photometric similarity. By joint learning with the local rigidity, both depth and optical networks can be refined. This framework boosts all four tasks: depth, optical flow, camera motion estimation, and object motion segmentation. Through the evaluation on the KITTI benchmark, we show that the proposed framework achieves state-of-the-art results amongst unsupervised methods. Our models and code are available at https://github.com/lliuz/unrigidflow.) <|cite_end|> and global epipolar constraints <|cite_start|> (Reference: Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes: Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-net based methods rely on image brightness consistency and local smoothness constraint to train the networks. Their performance degrades at regions where repetitive textures or occlusions occur. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method which incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem encountered in dynamic scenes where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmarking datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep-learning methods.) <|cite_end|>. As shown in~\cref{fig:performance}, there is still a large gap between these works and supervised methods.
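To make the view-synthesis objective discussed above concrete, the unsupervised loss in these works is typically an occlusion-masked photometric term plus a smoothness regularizer. The following is only a schematic form (the robust penalty, the occlusion estimation, and the weighting vary across the cited works):
\begin{equation}
\mathcal{L}_{\text{photo}} = \frac{\sum_{\mathbf{x}} O(\mathbf{x})\,\rho\big(I_1(\mathbf{x}) - I_2(\mathbf{x} + \mathbf{f}_{1\rightarrow 2}(\mathbf{x}))\big)}{\sum_{\mathbf{x}} O(\mathbf{x})} + \lambda\,\mathcal{L}_{\text{smooth}},
\end{equation}
where $\mathbf{f}_{1\rightarrow 2}$ is the predicted forward flow, $O$ is a non-occlusion mask (\eg from a forward-backward consistency check), and $\rho$ is a robust penalty such as the Charbonnier function or a census-based distance.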
In this paper, we do not rely on geometrical regularizations; instead, we rethink the task itself to improve accuracy.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{pic/fig_performance_s.pdf}
\caption{{Timeline of average end-point error (AEPE) advances in deep optical flow.} Marker size indicates network size, and oversized markers have been adjusted. Our method outperforms all previous unsupervised methods and yields accuracy comparable to supervised methods while using fewer parameters. $^{\dagger}$ indicates models using more than two frames.}
\label{fig:performance}
\vspace{-10pt}
\end{figure}
Interestingly, we notice that almost all of the unsupervised works, such as <|cite_start|> (Reference: Occlusion Aware Unsupervised Learning of Optical Flow: It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.) <|cite_end|> <|cite_start|> (Reference: Supplementary Materials for UnOS: Unified Unsupervised Optical-flow and Stereo-depth Estimation by Watching Videos: The whole system was implemented using Tensorflow [1]. When minimizing Eq. 5 in the RDVO module, we used the 3D representation described as in the last line of Eq. 6. When calculating the rigid-aware flow consistency loss term in Eq. 9, the 2D representation in the first line of Eq. 6 was adopted. In the first stage of training, the smoothness loss of optical flow was applied across the entire image, i.e. Lfs = Ls(Ft→s, 1, 2). In the third stage of training, the smoothness loss of optical flow was only applied on the moving region, i.e. Lfs = Ls(Ft→s, 1−Mt, 2)) <|cite_end|>, avoid using a heavy combination of augmentations, even if it has been proven effective in supervised flow works <|cite_start|> (Reference: {FlowNET 2.0: Evolution of Optical Flow Estimation with Deep Networks: The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.) <|cite_end|> <|cite_start|> (Reference: PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume: We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the cur- rent optical flow estimate to warp the CNN features of the second image. 
It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024x436) images. Our models are available on https://github.com/NVlabs/PWC-Net.) <|cite_end|> <|cite_start|> (Reference: Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation: Deep learning approaches to optical flow estimation have seen rapid progress over the recent years. One common trait of many networks is that they refine an initial flow estimate either through multiple stages or across the levels of a coarse-to-fine representation. While leading to more accurate results, the downside of this is an increased number of parameters. Taking inspiration from both classical energy minimization approaches as well as residual networks, we propose an iterative residual refinement (IRR) scheme based on weight sharing that can be combined with several backbone networks. It reduces the number of parameters, improves the accuracy, or even achieves both. Moreover, we show that integrating occlusion prediction and bi-directional flow estimation into our IRR scheme can further boost the accuracy. Our full network achieves state-of-the-art results for both optical flow and occlusion estimation across several standard datasets.) <|cite_end|>.
We attribute this to two reasons: \emph{(i)} Data augmentation is essentially a trade-off between diversity and validity. It can improve the model by increasing the diversity of the data, but it also shifts the data distribution, which decreases accuracy. In unsupervised learning, the benefit of this added diversity is limited, since abundant training data is easy to access. \emph{(ii)}
Data augmentation generates challenging samples for which view synthesis is more likely to be unreliable, so the objective cannot guide the network toward a correct solution.
More recently, some works based on knowledge distillation alleviate the problem of an unreliable objective in occluded regions <|cite_start|> (Reference: DDFlow: Learning Optical Flow with Unlabeled Data Distillation: We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time.) <|cite_end|>. The training of these methods is split into two stages. In the first stage, a teacher model is trained to make predictions on the original data, and occluded samples are created offline by random cropping or masking. In the second stage, these artificial samples, labeled with the teacher's predictions, are used to update a student model. However, these methods were designed for the case of partial occlusion only. Hence we ask: \emph{Can we generalize the distillation of occlusion to other transformation cases?} Moreover, the distillation approach has a bottleneck due to the frozen teacher model. We thus ask: \emph{Can we jointly optimize the teacher and student models, or simply train a single network?}
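As a schematic illustration of this two-stage scheme (the notation here is ours and only approximates the cited work), the student is supervised by the teacher's prediction exactly where the artificial corruption makes the student's own photometric objective unreliable:
\begin{equation}
\mathcal{L}_{\text{distill}} = \sum_{\mathbf{x}} \big(1 - V_s(\mathbf{x})\big)\, V_t(\mathbf{x})\, \big\| \mathbf{f}_s(\mathbf{x}) - \mathbf{f}_t(\mathbf{x}) \big\|_1,
\end{equation}
where $\mathbf{f}_t$ and $\mathbf{f}_s$ are the frozen teacher's and the student's predictions, and $V_t$, $V_s$ indicate pixels that are non-occluded in the teacher's and student's inputs, respectively.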
In this work, we address the above two questions with a novel unsupervised learning framework for optical flow. Specifically, for the first question, diverse transformations are used to generate challenging scenes, such as low-light or overexposed conditions, large displacements, or partial occlusions. For the second question, instead of optimizing two models with distillation, we simply twist the training step of the regular learning framework: we run an additional forward pass on the transformed images, and the correspondingly transformed flow from the first forward pass is treated as reliable supervision.
Since this self-supervision from transformations prevents the unsupervised objective from becoming ambiguous in challenging scenes,
our framework allows the network to learn by analogy with the original samples and to gradually master the ability to handle challenging samples.
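Concretely, let $\mathcal{F}_\theta$ denote the flow network, $\mathbf{f} = \mathcal{F}_\theta(I_1, I_2)$ the flow from the first forward pass, and $T$ a sampled transformation (photometric, spatial, or occlusion-generating). A simplified sketch of the resulting self-supervision term is
\begin{equation}
\mathcal{L}_{\text{self}} = \sum_{\mathbf{x}} M_T(\mathbf{x})\,\rho\big( \mathcal{F}_\theta(T(I_1), T(I_2))(\mathbf{x}) - T(\mathbf{f})(\mathbf{x}) \big),
\end{equation}
where $T(\mathbf{f})$ is the correspondingly transformed flow used as a fixed pseudo label, and $M_T$ masks out pixels for which the transformed flow is undefined. The exact forms of $T(\mathbf{f})$ and $M_T$ depend on the transformation type; the equation is only meant to convey the single-network training step.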
In summary, our contributions are: \emph{(i)} We propose a novel way to make use of the self-supervision signal from abundant augmentations for unsupervised optical flow while training only a single network;
\emph{(ii)} We demonstrate the applicability of our method to various augmentations. In addition to occlusion, we develop a general form for more challenging transformations.
\emph{(iii)} Our method leads to a leap in performance among deep unsupervised methods. It also achieves comparable performance \wrt previous supervised methods, but with far fewer parameters and excellent cross-dataset generalization capability.
Related Work
\myparagraph{Supervised Optical Flow.}
Starting from FlowNet <|cite_start|> (Reference: FlowNet: Learning Optical Flow with Convolutional Networks: Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.) <|cite_end|>, various networks for optical flow with supervised learning have been proposed, \eg FlowNet2 <|cite_start|> (Reference: {FlowNET 2.0: Evolution of Optical Flow Estimation with Deep Networks: The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.) <|cite_end|>, PWC-Net <|cite_start|> (Reference: PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume: We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the cur- rent optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024x436) images. Our models are available on https://github.com/NVlabs/PWC-Net.) <|cite_end|>, IRR-PWC <|cite_start|> (Reference: Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation: Deep learning approaches to optical flow estimation have seen rapid progress over the recent years. 
One common trait of many networks is that they refine an initial flow estimate either through multiple stages or across the levels of a coarse-to-fine representation. While leading to more accurate results, the downside of this is an increased number of parameters. Taking inspiration from both classical energy minimization approaches as well as residual networks, we propose an iterative residual refinement (IRR) scheme based on weight sharing that can be combined with several backbone networks. It reduces the number of parameters, improves the accuracy, or even achieves both. Moreover, we show that integrating occlusion prediction and bi-directional flow estimation into our IRR scheme can further boost the accuracy. Our full network achieves state-of-the-art results for both optical flow and occlusion estimation across several standard datasets.) <|cite_end|>. These methods are comparable in accuracy to well-designed variational methods <|cite_start|> (Reference: A Fusion Approach for Multi-Frame Optical Flow Estimation: To date, top-performing optical flow estimation methods only take pairs of consecutive frames into account. While elegant and appealing, the idea of using more than two frames has not yet produced state-of-the-art results. We present a simple, yet effective fusion approach for multi-frame optical flow that benefits from longer-term temporal cues. Our method first warps the optical flow from previous frames to the current, thereby yielding multiple plausible estimates. It then fuses the complementary information carried by these estimates into a new optical flow field. At the time of writing, our method ranks first among published results in the MPI Sintel and KITTI 2015 benchmarks. Our models will be available on https://github.com/NVlabs/PWC-Net.) <|cite_end|> <|cite_start|> (Reference: ProFlow: Learning to Predict Optical Flow: Temporal coherence is a valuable source of information in the context of optical flow estimation. However, finding a suitable motion model to leverage this information is a non-trivial task. In this paper we propose an unsupervised online learning approach based on a convolutional neural network (CNN) that estimates such a motion model individually for each frame. By relating forward and backward motion these learned models not only allow to infer valuable motion information based on the backward flow, they also help to improve the performance at occlusions, where a reliable prediction is particularly useful. Moreover, our learned models are spatially variant and hence allow to estimate non-rigid motion per construction. This, in turns, allows to overcome the major limitation of recent rigidity-based approaches that seek to improve the estimation by incorporating additional stereo/SfM constraints. Experiments demonstrate the usefulness of our new approach. They not only show a consistent improvement of up to 27% for all major benchmarks (KITTI 2012, KITTI 2015, MPI Sintel) compared to a baseline without prediction, they also show top results for the MPI Sintel benchmark -- the one of the three benchmarks that contains the largest amount of non-rigid motion.) <|cite_end|>, and are more effective during inference. 
However, the success of supervised methods depends heavily on large-scale synthetic datasets <|cite_start|> (Reference: A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation: Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.) <|cite_end|> <|cite_start|> (Reference: FlowNet: Learning Optical Flow with Convolutional Networks: Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.) <|cite_end|>, which leads to a degradation when transferring to real-world applications. As an alternative, we investigate unsupervised methods to alleviate the need for ground truth of dense optical flow.
\myparagraph{Unsupervised Optical Flow.} Yu \etal <|cite_start|> (Reference: Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness: Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious la- beling. To bypass these challenges, we propose an unsuper- vised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow be- tween two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empiri- cally, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.) <|cite_end|> first introduced a method for learning optical flow with brightness constancy and motion smoothness, which is similar to the energy minimization in conventional methods. Further researches improve accuracy through occlusion reasoning <|cite_start|> (Reference: Occlusion Aware Unsupervised Learning of Optical Flow: It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.) <|cite_end|> <|cite_start|> (Reference: UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss: In the era of end-to-end deep learning, many advances in computer vision are driven by large amounts of labeled data. In the optical flow setting, however, obtaining dense per-pixel ground truth for real scenes is difficult and thus such data is rare. Therefore, recent end-to-end convolutional networks for optical flow rely on synthetic datasets for supervision, but the domain mismatch between training and test scenarios continues to be a challenge. Inspired by classical energy-based optical flow methods, we design an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow. On the KITTI benchmarks, our unsupervised approach outperforms previous unsupervised deep networks by a large margin, and is even more accurate than similar supervised methods trained on synthetic datasets alone. By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth.) 
<|cite_end|>, multi-frame extension <|cite_start|> (Reference: Unsupervised Learning of Multi-Frame Optical Flow with Occlusions: ) <|cite_end|> <|cite_start|> (Reference: Unsupervised Learning for Optical Flow Estimation Using Pyramid Convolution LSTM: Most of current Convolution Neural Network (CNN) based methods for optical flow estimation focus on learning optical flow on synthetic datasets with groundtruth, which is not practical. In this paper, we propose an unsupervised optical flow estimation framework named PCLNet. It uses pyramid Convolution LSTM (ConvLSTM) with the constraint of adjacent frame reconstruction, which allows flexibly estimating multi-frame optical flows from any video clip. Besides, by decoupling motion feature learning and optical flow representation, our method avoids complex short-cut connections used in existing frameworks while improving accuracy of optical flow estimation. Moreover, different from those methods using specialized CNN architectures for capturing motion, our framework directly learns optical flow from the features of generic CNNs and thus can be easily embedded in any CNN based frameworks for other tasks. Extensive experiments have verified that our method not only estimates optical flow effectively and accurately, but also obtains comparable performance on action recognition.) <|cite_end|>, epipolar constraint <|cite_start|> (Reference: Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes: Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-net based methods rely on image brightness consistency and local smoothness constraint to train the networks. Their performance degrades at regions where repetitive textures or occlusions occur. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method which incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem encountered in dynamic scenes where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmarking datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep-learning methods.) <|cite_end|>, 3D geometrical constraints with monocular depth <|cite_start|> (Reference: DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency: We present an unsupervised learning framework for simultaneously training single-view depth prediction and optical flow estimation models using unlabeled video sequences. Existing unsupervised methods often exploit brightness constancy and spatial smoothness priors to train depth or flow models. In this paper, we propose to leverage geometric consistency as additional supervisory signals. Our core idea is that for rigid regions we can use the predicted scene depth and camera motion to synthesize 2D optical flow by backprojecting the induced 3D scene flow. The discrepancy between the rigid flow (from depth prediction and camera motion) and the estimated flow (from optical flow model) allows us to impose a cross-task consistency loss. While all the networks are jointly optimized during training, they can be applied independently at test time. 
Extensive experiments demonstrate that our depth and flow models compare favorably with state-of-the-art unsupervised methods.) <|cite_end|> <|cite_start|> (Reference: GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose: We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and ego-motion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.) <|cite_end|> <|cite_start|> (Reference: Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation: We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems.) <|cite_end|> and stereo depth <|cite_start|> (Reference: Supplementary Materials for UnOS: Unified Unsupervised Optical-flow and Stereo-depth Estimation by Watching Videos: The whole system was implemented using Tensorflow [1]. When minimizing Eq. 5 in the RDVO module, we used the 3D representation described as in the last line of Eq. 6. When calculating the rigid-aware flow consistency loss term in Eq. 9, the 2D representation in the first line of Eq. 6 was adopted. In the first stage of training, the smoothness loss of optical flow was applied across the entire image, i.e. Lfs = Ls(Ft→s, 1, 2). In the third stage of training, the smoothness loss of optical flow was only applied on the moving region, i.e. 
Lfs = Ls(Ft→s, 1−Mt, 2)) <|cite_end|> <|cite_start|> (Reference: Unsupervised Learning of Scene Flow Estimation Fusing with Local Rigidity: Scene flow estimation in the dynamic scene remains a challenging task. Computing scene flow by a combination of 2D optical flow and depth has shown to be considerably faster with acceptable performance. In this work, we present a unified framework for joint unsupervised learning of stereo depth and optical flow with explicit local rigidity to estimate scene flow. We estimate camera motion directly by a Perspective-n-Point method from the optical flow and depth predictions, with RANSAC outlier rejection scheme. In order to disambiguate the object motion and the camera motion in the scene, we distinguish the rigid region by the re-project error and the photometric similarity. By joint learning with the local rigidity, both depth and optical networks can be refined. This framework boosts all four tasks: depth, optical flow, camera motion estimation, and object motion segmentation. Through the evaluation on the KITTI benchmark, we show that the proposed framework achieves state-of-the-art results amongst unsupervised methods. Our models and code are available at https://github.com/lliuz/unrigidflow.) <|cite_end|>. Although these methods have become complicated, there is still a large gap with state-of-the-art supervised methods. Recent works improve the performance by learning the flow of occluded pixels in a knowledge distillation manner <|cite_start|> (Reference: DDFlow: Learning Optical Flow with Unlabeled Data Distillation: We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time.) <|cite_end|>, while the two-stage training in these works is trivial. Instead of studying the complicated geometrical constraints, our approach focuses on the basic training strategy. It generalizes the case of occlusion distillation to more kinds of challenging scenes with a straightforward single-stage learning framework.
\myparagraph{Learning with Augmentation.} Data augmentation is one of the easiest ways to improve training. Recently, there has been something new about integrating augmentation into the learning frameworks. Mounsaveng~\etal <|cite_start|> (Reference: Adversarial Learning of General Transformations for Data Augmentation: Data augmentation (DA) is fundamental against overfitting in large convolutional neural networks, especially with a limited training dataset. In images, DA is usually based on heuristic transformations, like geometric or color transformations. Instead of using predefined transformations, our work learns data augmentation directly from the training data by learning to transform images with an encoder-decoder architecture combined with a spatial transformer network. The transformed images still belong to the same class but are new, more complex samples for the classifier. Our experiments show that our approach is better than previous generative data augmentation methods, and comparable to predefined transformation methods when training an image classifier.) <|cite_end|> and Xiao \etal <|cite_start|> (Reference: Spatially Transformed Adversarial Examples: Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the $\mathcal{L}_p$ distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of $\mathcal{L}_p$ distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.) <|cite_end|> suggested learning data augmentation with a spatial transformer network <|cite_start|> (Reference: Spatial Transformer Networks: Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. 
We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.) <|cite_end|> to generate more complex samples. Xie~\etal <|cite_start|> (Reference: Unsupervised Data Augmentation for Consistency Training: Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at https://github.com/google-research/uda.) <|cite_end|> proposed to use augmentation in the semi-supervised tasks by consistency training. Peng~\etal <|cite_start|> (Reference: Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation: Random data augmentation is a critical technique to avoid overfitting in training deep neural network models. However, data augmentation and network training are usually treated as two isolated processes, limiting the effectiveness of network training. Why not jointly optimize the two? We propose adversarial data augmentation to address this limitation. The main idea is to design an augmentation network (generator) that competes against a target network (discriminator) by generating `hard' augmentation operations online. The augmentation network explores the weaknesses of the target network, while the latter learns from `hard' augmentations to achieve better performance. We also design a reward/penalty strategy for effective joint training. We demonstrate our approach on the problem of human pose estimation and carry out a comprehensive experimental analysis, showing that our method can significantly improve state-of-the-art models without additional data efforts.) <|cite_end|> introduced to optimize data augmentation with the training of task-specific networks jointly. As a new trend in AutoML, several efforts to automatically search for the best policy of augmentations <|cite_start|> (Reference: AutoAugment: Learning Augmentation Policies from Data: Data augmentation is an effective technique for improving the accuracy of modern image classifiers. 
However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5% which is 0.4% better than the previous record of 83.1%. On CIFAR-10, we achieve an error rate of 1.5%, which is 0.6% better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars.) <|cite_end|> <|cite_start|> (Reference: Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules: A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46%, which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at https://github.com/arcelien/pba.) <|cite_end|> <|cite_start|> (Reference: Fast AutoAugment: Data augmentation is an essential technique for improving generalization ability of deep learning models. Recently, AutoAugment has been proposed as an algorithm to automatically search for augmentation policies from a dataset and has significantly enhanced performances on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieves comparable performances on image recognition tasks with various models and datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet.) <|cite_end|> are proposed. All these methods aimed at supervised or semi-supervised learning. In this work, we present a simple yet effective approach to integrate abundant augmentations with unsupervised optical flow. 
We propose to use reliable predictions of original samples as a self-supervision signal to guide the predictions of augmented samples. <|paper_end|> | [
"<|reference_start|> Continual Occlusion and Optical Flow Estimation: <|reference_end|>",
"<|reference_start|> UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss: In the era of end-to-end deep learning, many advances in computer vision are driven by large amounts of labeled data. In the optical flow setting, however, obtaining dense per-pixel ground truth for real scenes is difficult and thus such data is rare. Therefore, recent end-to-end convolutional networks for optical flow rely on synthetic datasets for supervision, but the domain mismatch between training and test scenarios continues to be a challenge. Inspired by classical energy-based optical flow methods, we design an unsupervised loss based on occlusion-aware bidirectional flow estimation and the robust census transform to circumvent the need for ground truth flow. On the KITTI benchmarks, our unsupervised approach outperforms previous unsupervised deep networks by a large margin, and is even more accurate than similar supervised methods trained on synthetic datasets alone. By optionally fine-tuning on the KITTI training data, our method achieves competitive optical flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling generic pre-training of supervised networks for datasets with limited amounts of ground truth. <|reference_end|>",
"<|reference_start|> PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume: We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the cur- rent optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024x436) images. Our models are available on https://github.com/NVlabs/PWC-Net. <|reference_end|>",
"<|reference_start|> A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation: Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. <|reference_end|>"
] | [
7,
20,
28,
37
] | {"<|multi_cite_1_1|>": "arxiv-203172", "<|multi_cite_1_2|>": "arxiv-187211", "<|multi_cite_1_3|>": "arxiv-110852", "<|multi_cite_1_4|>": "arxiv-135150", "<|multi_cite_1_5|>": "arxiv-120135", "<|multi_cite_1_6|>": "arxiv-113391", "<|multi_cite_2_1|>": "ss-1098333", "<|multi_cite_2_2|>": "ss-1101698", "<|multi_cite_3_1|>": "arxiv-177326", "<|multi_cite_3_2|>": "arxiv-161052", "<|multi_cite_3_3|>": "arxiv-123173", "<|multi_cite_4_1|>": "arxiv-76716", "<|multi_cite_4_2|>": "arxiv-88658", "<|multi_cite_5_1|>": "ss-1105362", "<|multi_cite_5_2|>": "arxiv-140808", "<|multi_cite_5_3|>": "arxiv-140258", "<|multi_cite_6_1|>": "arxiv-122565", "<|multi_cite_6_2|>": "arxiv-150700", "<|multi_cite_7_1|>": "arxiv-140258", "<|multi_cite_7_2|>": "ss-785201", "<|cite_8|>": "arxiv-140808", "<|multi_cite_9_1|>": "arxiv-159906", "<|multi_cite_9_2|>": "ss-1421362", "<|multi_cite_9_3|>": "ss-825743", "<|cite_10|>": "arxiv-198799", "<|multi_cite_11_1|>": "arxiv-140258", "<|multi_cite_11_3|>": "ss-1421362", "<|multi_cite_12_1|>": "ss-692072", "<|multi_cite_12_2|>": "arxiv-133926", "<|multi_cite_12_3|>": "arxiv-199335", "<|multi_cite_13_1|>": "arxiv-192729", "<|cite_14|>": "arxiv-76716", "<|cite_15|>": "ss-692072", "<|cite_16|>": "arxiv-133926", "<|cite_17|>": "arxiv-199335", "<|multi_cite_18_1|>": "arxiv-177326", "<|multi_cite_18_2|>": "arxiv-161052", "<|multi_cite_19_1|>": "arxiv-88658", "<|multi_cite_19_2|>": "arxiv-76716", "<|cite_20|>": "arxiv-104306", "<|multi_cite_21_1|>": "arxiv-140258", "<|multi_cite_21_2|>": "arxiv-140808", "<|multi_cite_22_1|>": "ss-785201", "<|multi_cite_22_2|>": "arxiv-216259", "<|cite_23|>": "arxiv-198799", "<|multi_cite_24_1|>": "arxiv-171497", "<|multi_cite_24_2|>": "arxiv-150700", "<|multi_cite_24_3|>": "arxiv-159906", "<|multi_cite_25_1|>": "ss-1421362", "<|multi_cite_25_2|>": "ss-825743", "<|multi_cite_26_1|>": "arxiv-192729", "<|cite_27|>": "arxiv-224897", "<|cite_28|>": "arxiv-144928", "<|cite_29|>": "arxiv-78899", "<|cite_30|>": "arxiv-202012", "<|cite_31|>": "arxiv-159876", "<|multi_cite_32_1|>": "arxiv-159818", "<|multi_cite_32_2|>": "arxiv-204047", "<|multi_cite_32_3|>": "arxiv-202322"} |
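The paper text in the preceding record closes by describing its core idea: using reliable predictions on the original frame pair as a self-supervision signal for predictions on augmented frame pairs. The sketch below is a minimal, illustrative PyTorch-style rendering of that idea only; `flow_net`, `augment_fn`, and `reliability` are hypothetical placeholders supplied by the caller, and the masking and loss form are our own assumptions rather than the cited paper's actual implementation.
\begin{verbatim}
import torch

def augmentation_consistency_loss(flow_net, img1, img2, reliability, augment_fn):
    # Illustrative only: all arguments are user-supplied placeholders.
    # 1) Predict flow on the original pair and freeze it as a pseudo-label.
    with torch.no_grad():
        flow_teacher = flow_net(img1, img2)

    # 2) Apply one spatial/appearance augmentation to the images and warp the
    #    pseudo-label flow and the reliability mask into the augmented geometry.
    img1_aug, img2_aug, flow_pseudo, mask_aug = augment_fn(
        img1, img2, flow_teacher, reliability)

    # 3) Predict flow on the augmented pair (gradients flow through this call).
    flow_student = flow_net(img1_aug, img2_aug)

    # 4) Penalize disagreement only where the original prediction is reliable
    #    (e.g., non-occluded) and still visible after augmentation.
    diff = (flow_student - flow_pseudo).abs().sum(dim=1, keepdim=True)
    return (mask_aug * diff).sum() / (mask_aug.sum() + 1e-8)
\end{verbatim}
In a training loop this term would typically be added to the usual photometric and smoothness losses, with the reliability mask derived from, for example, forward-backward consistency checks.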
2404.10078 | <|paper_start|> Title: Low-Light Image Enhancement Framework for Improved Object Detection in Fisheye Lens Datasets
Abstract: Low-Light Image Enhancement Framework for Improved Object Detection in Fisheye Lens Datasets: This study addresses the evolving challenges in urban traffic monitoring detection systems based on fisheye lens cameras by proposing a framework that improves the efficacy and accuracy of these systems. In the context of urban infrastructure and transportation management, advanced traffic monitoring systems have become critical for managing the complexities of urbanization and increasing vehicle density. Traditional monitoring methods, which rely on static cameras with narrow fields of view, are ineffective in dynamic urban environments, necessitating the installation of multiple cameras, which raises costs. Fisheye lenses, which were recently introduced, provide wide and omnidirectional coverage in a single frame, making them a transformative solution. However, issues such as distorted views and blurriness arise, preventing accurate object detection on these images. Motivated by these challenges, this study proposes a novel approach that combines a transformer-based image enhancement framework and an ensemble learning technique to address these issues and improve traffic monitoring accuracy, making significant contributions to the future of intelligent traffic management systems. Our proposed methodological framework won 5th place in the 2024 AI City Challenge, Track 4, with an F1 score of 0.5965 on experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system. Our code is publicly available at https://github.com/daitranskku/AIC2024-TRACK4-TEAM15.
Introduction
\label{sec:intro}
In the field of urban infrastructure and transportation management, the development of advanced traffic monitoring systems has become a crucial solution to the growing challenges posed by urbanization and increasing vehicular density <|cite_start|> (Reference: An improved YOLO-based road traffic monitoring system: ) <|cite_end|> <|cite_start|> (Reference: Artificial Intelligence-Enabled Traffic Monitoring System: Manual traffic surveillance can be a daunting task as Traffic Management Centers operate a myriad of cameras installed over a network. Injecting some level of automation could help lighten the workload of human operators performing manual surveillance and facilitate making proactive decisions which would reduce the impact of incidents and recurring congestion on roadways. This article presents a novel approach to automatically monitor real time traffic footage using deep convolutional neural networks and a stand-alone graphical user interface. The authors describe the results of research received in the process of developing models that serve as an integrated framework for an artificial intelligence enabled traffic monitoring system. The proposed system deploys several state-of-the-art deep learning algorithms to automate different traffic monitoring needs. Taking advantage of a large database of annotated video surveillance data, deep learning-based models are trained to detect queues, track stationary vehicles, and tabulate vehicle counts. A pixel-level segmentation approach is applied to detect traffic queues and predict severity. Real-time object detection algorithms coupled with different tracking systems are deployed to automatically detect stranded vehicles as well as perform vehicular counts. At each stage of development, interesting experimental results are presented to demonstrate the effectiveness of the proposed system. Overall, the results demonstrate that the proposed framework performs satisfactorily under varied conditions without being immensely impacted by environmental hazards such as blurry camera views, low illumination, rain, or snow.) <|cite_end|> <|cite_start|> (Reference: NAFSSR: Stereo Image Super-Resolution Using NAFNet: Stereo image super-resolution aims at enhancing the quality of super-resolution results by utilizing the complementary information provided by binocular systems. To obtain reasonable performance, most methods focus on finely designing modules, loss functions, and etc. to exploit information from another viewpoint. This has the side effect of increasing system complexity, making it difficult for researchers to evaluate new ideas and compare methods. This paper inherits a strong and simple image restoration model, NAFNet, for single-view feature extraction and extends it by adding cross attention modules to fuse features between views to adapt to binocular scenarios. The proposed baseline for stereo image super-resolution is noted as NAFSSR. Furthermore, training/testing strategies are proposed to fully exploit the performance of NAFSSR. Extensive experiments demonstrate the effectiveness of our method. In particular, NAFSSR outperforms the state-of-the-art methods on the KITTI 2012, KITTI 2015, Middlebury, and Flickr1024 datasets. With NAFSSR, we won 1st place in the NTIRE 2022 Stereo Image Super-resolution Challenge. Codes and models will be released at https://github.com/megvii-research/NAFNet.) 
<|cite_end|> <|cite_start|> (Reference: Multi-Purpose, Multi-Step Deep Learning Framework for Network-Level Traffic Flow Prediction: ) <|cite_end|>. These systems, utilizing state-of-the-art technologies such as computer vision, machine learning, and data analytics, are tasked with ensuring not only the smooth flow of traffic but also enhancing safety <|cite_start|> (Reference: A Vision-based System for Traffic Anomaly Detection using Deep Learning and Decision Trees: Any intelligent traffic monitoring system must be able to detect anomalies such as traffic accidents in real time. In this paper, we propose a Decision-Tree - enabled approach powered by Deep Learning for extracting anomalies from traffic cameras while accurately estimating the start and end time of the anomalous event. Our approach included creating a detection model, followed by anomaly detection and analysis. YOLOv5 served as the foundation for our detection model. The anomaly detection and analysis step entail traffic scene background estimation, road mask extraction, and adaptive thresholding. Candidate anomalies were passed through a decision tree to detect and analyze final anomalies. The proposed approach yielded an F1 score of 0.8571, and an S4 score of 0.5686, per the experimental validation.) <|cite_end|> <|cite_start|> (Reference: Forest-Fire Response System Using Deep-Learning-Based Approaches With CCTV Images and Weather Data: An effective forest-fire response is critical for minimizing the losses caused by forest fires. The purpose of this study is to construct a model for early fire detection and damage area estimation for response systems based on deep learning. First, a large-scale fire dataset with approximately 400,000 images is used to train and test object-detection models. The optimal backbone for the faster region-based convolutional neural network (Faster R-CNN) model is determined using a DetNAS-based architecture search algorithm. Then, the searched light-weight backbone is compared with well-known backbones, such as ResNet, VoVNet, and FBNetV3. In addition, data pertaining to six years of historical forest fire events are employed to estimate the damaged area. Subsequently, a weather API is used to match the recorded events. A Bayesian neural network (BNN) model is used as a regression model to estimate the damaged area. Additionally, the trained model is compared with other widely used regression models, such as decision trees and neural networks. The Faster R-CNN with a searched backbone achieves a mean average precision of 27.9 on 40,000 testing images, outperforming existing backbones. Compared with other regression models, the BNN estimates the damage area with less error and increased generalization. Thus, both proposed models demonstrate their robustness and suitability for implementation in real-world systems.) <|cite_end|> <|cite_start|> (Reference: Leveraging future trajectory prediction for multi-camera people tracking: Artificial intelligence-based surveillance system, one of the essential systems for smart cities, plays a critical role in ensuring the safety and well-being of individuals. In this paper, we propose a real-time, low-computation cost Multi-Camera Multi-Target (MCMT) tracking system for people, leveraging deep-learning-based trajectory prediction with spatial-temporal information and social information. 
By predicting people’s future trajectories, our algorithm effectively handles object occlusion problems and maintains accurate tracking while keeping computational costs low. Our approach addresses object occlusion without relying on computationally expensive re-identification, and improves MCMT tracking performance using graph-based tracklet representation, and spectral clustering. As a result, our proposed approach is tested on the 2023 AI City Challenge Track 1 test dataset, automatically generated on the NVIDIA Omiverse Platform, our method achieves an IDF1 score of 0.6171 and real-time performance at 27.6 FPS. Code and pre-trained models are publicly available at https://github.com/yuntaeJ/SCIT-MCMT-Tracking.) <|cite_end|> <|cite_start|> (Reference: Damage-map estimation using UAV images and deep learning algorithms for disaster management system: Estimating the damaged area after a forest fire is important for responding to this natural catastrophe. With the support of aerial remote sensing, typically with unmanned aerial vehicles (UAVs), the aerial imagery of forest-fire areas can be easily obtained; however, retrieving the burnt area from the image is still a challenge. We implemented a new approach for segmenting burnt areas from UAV images using deep learning algorithms. First, the data were collected from a forest fire in Andong, the Republic of Korea, in April 2020. Then, the proposed two-patch-level deep-learning models were implemented. A patch-level 1 network was trained using the UNet++ architecture. The output prediction of this network was used as a position input for the second network, which used UNet. It took the reference position from the first network as its input and refined the results. Finally, the final performance of our proposed method was compared with a state-of-the-art image-segmentation algorithm to prove its robustness. Comparative research on the loss functions was also performed. Our proposed approach demonstrated its effectiveness in extracting burnt areas from UAV images and can contribute to estimating maps showing the areas damaged by forest fires.) <|cite_end|> and efficiency on busy roadways <|cite_start|> (Reference: Development of an IoT based real-time traffic monitoring system for city governance: ) <|cite_end|>. As cities grow in size and population, there is an increasing need for advanced traffic monitoring and management solutions that surpass traditional strategies.
Traditional traffic monitoring methods, which rely on static cameras with limited fields of view (FoV) <|cite_start|> (Reference: Traffic flow estimation with data from a video surveillance camera: ) <|cite_end|> <|cite_start|> (Reference: Traffic Congestion Detection from Surveillance Videos using Deep Learning: Countless cameras, both public and private, have been installed in recent years for the objectives of surveillance, the monitoring of anomalous human activities, and traffic surveillance. Numerous worrisome and aberrant actions, such as theft, aggression, and accidents, make it difficult to notice and recognise such behaviour in a real-world setting. The topic of this study is car wrecks as depicted in online videos of traffic. Modern traffic monitoring and surveillance rely heavily on video traffic surveillance cameras (VTSS). Consequences of a rapidly expanding human population include a higher frequency of accidental injuries. The VTSS is employed to identify unusual occurrences on various roads and highways, such as traffic congestion and car accidents. When accidents happen on lengthy roadways or in remote areas, victims are often powerless and some don't make it. The purpose of this study is to provide a method for automatically identifying incidents in surveillance footage. Convolutional-neural-networks (CNNs), a specific deep learning approach developed to cope with grid-like data, have been shown to be useful in image and video processing, according to a study of the relevant literature. This study use a rolling prediction method and convolutional neural networks (CNNs) to detect accidents in VTSS footage. A dataset of anomalous photographs, called the Vehicle Accident Image Dataset (VAID), was created and used in the training of the CNN model. The proposed method was put through its paces by analysing data gathered from running the trained CNN model on a number of different films. This study's findings demonstrate a 93% success rate in identifying traffic accident incidents in films from traffic surveillance systems.) <|cite_end|> <|cite_start|> (Reference: Video Anomaly Detection for Pedestrian Surveillance: ) <|cite_end|>, have proven insufficient in dealing with the dynamic and complex nature of modern urban environments. These cameras typically provide narrow perspectives of roadways and intersections, requiring the deployment of multiple cameras to achieve comprehensive coverage <|cite_start|> (Reference: FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection: With the advance of AI, road object detection has been a prominent topic in computer vision, mostly using perspective cameras. Fisheye lens provides omnidirectional wide coverage for using fewer cameras to monitor road intersections, however with view distortions. To our knowledge, there is no existing open dataset prepared for traffic surveillance on fisheye cameras. This paper introduces an open FishEye8K benchmark dataset for road object detection tasks, which comprises 157K bounding boxes across five classes (Pedestrian, Bike, Car, Bus, and Truck). In addition, we present benchmark results of State-of-The-Art (SoTA) models, including variations of YOLOv5, YOLOR, YOLO7, and YOLOv8. The dataset comprises 8,000 images recorded in 22 videos using 18 fisheye cameras for traffic monitoring in Hsinchu, Taiwan, at resolutions of 1080$\times$1080 and 1280$\times$1280. 
The data annotation and validation process were arduous and time-consuming, due to the ultra-wide panoramic and hemispherical fisheye camera images with large distortion and numerous road participants, particularly people riding scooters. To avoid bias, frames from a particular camera were assigned to either the training or test sets, maintaining a ratio of about 70:30 for both the number of images and bounding boxes in each class. Experimental results show that YOLOv8 and YOLOR outperform on input sizes 640$\times$640 and 1280$\times$1280, respectively. The dataset will be available on GitHub with PASCAL VOC, MS COCO, and YOLO annotation formats. The FishEye8K benchmark will provide significant contributions to the fisheye video analytics and smart city applications.) <|cite_end|>. This not only raises the cost and complexity of surveillance infrastructure but also creates blind spots and gaps in monitoring, particularly in areas with complex road layouts or high traffic volumes. The need for real-time insights, comprehensive coverage, and adaptive response mechanisms has highlighted the importance of more advanced and versatile surveillance techniques.
In recent years, the introduction of fisheye lenses has revolutionized surveillance and traffic monitoring systems due to their ability to provide natural, wide, and omnidirectional coverage <|cite_start|> (Reference: FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection: With the advance of AI, road object detection has been a prominent topic in computer vision, mostly using perspective cameras. Fisheye lens provides omnidirectional wide coverage for using fewer cameras to monitor road intersections, however with view distortions. To our knowledge, there is no existing open dataset prepared for traffic surveillance on fisheye cameras. This paper introduces an open FishEye8K benchmark dataset for road object detection tasks, which comprises 157K bounding boxes across five classes (Pedestrian, Bike, Car, Bus, and Truck). In addition, we present benchmark results of State-of-The-Art (SoTA) models, including variations of YOLOv5, YOLOR, YOLO7, and YOLOv8. The dataset comprises 8,000 images recorded in 22 videos using 18 fisheye cameras for traffic monitoring in Hsinchu, Taiwan, at resolutions of 1080$\times$1080 and 1280$\times$1280. The data annotation and validation process were arduous and time-consuming, due to the ultra-wide panoramic and hemispherical fisheye camera images with large distortion and numerous road participants, particularly people riding scooters. To avoid bias, frames from a particular camera were assigned to either the training or test sets, maintaining a ratio of about 70:30 for both the number of images and bounding boxes in each class. Experimental results show that YOLOv8 and YOLOR outperform on input sizes 640$\times$640 and 1280$\times$1280, respectively. The dataset will be available on GitHub with PASCAL VOC, MS COCO, and YOLO annotation formats. The FishEye8K benchmark will provide significant contributions to the fisheye video analytics and smart city applications.) <|cite_end|>. This unique feature addresses a significant limitation of traditional cameras with narrow fields of view (FoV), allowing for the capture of large scenes in a single frame—an accomplishment not possible with conventional counterparts. Fisheye lenses in traffic monitoring systems have proven particularly advantageous in reducing the number of required cameras, offering a cost-effective solution to cover broader views of streets and intersections <|cite_start|> (Reference: Fast Vehicle Detection and Tracking on Fisheye Traffic Monitoring Video using Motion Trail: We develop a vehicle detection and tracking scheme based on the concept of motion trails for fisheye traffic monitoring videos. The motion trail combines the moving object traces in several frames into one image. Because it collects information from multiple frames, the accuracy of detecting a trail is higher than a single-frame object detector. Essentially, it merges the detection and tracking processes into one process. In addition, a lightweight neural net is sufficient to detect the trail, which saves computing time and memory. After detecting the trails, we extract individual car locations at each frame using a multi-head trail extractor. Then, a multi-modal bidirectional LSTM can further improve detection accuracy. We adopt the public ICIP2020 VIP Cup dataset for training and testing. Our approach is 14 percentage points (pp) better than the state-of-the-art single-frame rotated object detector (R3Det) on the challenging nighttime video, and it is 5 FPS faster in inference speed. 
Our scheme achieves the AP50 accuracy comparable with the state-of-the-art video object detector (MEGA), but its speed is 3 times faster, and its model size is only 28% of that of MEGA.) <|cite_end|>. \textit{However, this innovation comes with its own set of challenges, as fisheye cameras inherently present distorted views, which require sophisticated design approaches for image undistortion and unwarping <|cite_start|> (Reference: FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection: With the advance of AI, road object detection has been a prominent topic in computer vision, mostly using perspective cameras. Fisheye lens provides omnidirectional wide coverage for using fewer cameras to monitor road intersections, however with view distortions. To our knowledge, there is no existing open dataset prepared for traffic surveillance on fisheye cameras. This paper introduces an open FishEye8K benchmark dataset for road object detection tasks, which comprises 157K bounding boxes across five classes (Pedestrian, Bike, Car, Bus, and Truck). In addition, we present benchmark results of State-of-The-Art (SoTA) models, including variations of YOLOv5, YOLOR, YOLO7, and YOLOv8. The dataset comprises 8,000 images recorded in 22 videos using 18 fisheye cameras for traffic monitoring in Hsinchu, Taiwan, at resolutions of 1080$\times$1080 and 1280$\times$1280. The data annotation and validation process were arduous and time-consuming, due to the ultra-wide panoramic and hemispherical fisheye camera images with large distortion and numerous road participants, particularly people riding scooters. To avoid bias, frames from a particular camera were assigned to either the training or test sets, maintaining a ratio of about 70:30 for both the number of images and bounding boxes in each class. Experimental results show that YOLOv8 and YOLOR outperform on input sizes 640$\times$640 and 1280$\times$1280, respectively. The dataset will be available on GitHub with PASCAL VOC, MS COCO, and YOLO annotation formats. The FishEye8K benchmark will provide significant contributions to the fisheye video analytics and smart city applications.) <|cite_end|>. Additionally, objects at the edges or far ends of the captured scenes appear small and blurry. This makes it difficult for object detection systems to accurately identify important elements such as cars, pedestrians, and road signs during traffic monitoring <|cite_start|> (Reference: FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection: With the advance of AI, road object detection has been a prominent topic in computer vision, mostly using perspective cameras. Fisheye lens provides omnidirectional wide coverage for using fewer cameras to monitor road intersections, however with view distortions. To our knowledge, there is no existing open dataset prepared for traffic surveillance on fisheye cameras. This paper introduces an open FishEye8K benchmark dataset for road object detection tasks, which comprises 157K bounding boxes across five classes (Pedestrian, Bike, Car, Bus, and Truck). In addition, we present benchmark results of State-of-The-Art (SoTA) models, including variations of YOLOv5, YOLOR, YOLO7, and YOLOv8. The dataset comprises 8,000 images recorded in 22 videos using 18 fisheye cameras for traffic monitoring in Hsinchu, Taiwan, at resolutions of 1080$\times$1080 and 1280$\times$1280.
The data annotation and validation process were arduous and time-consuming, due to the ultra-wide panoramic and hemispherical fisheye camera images with large distortion and numerous road participants, particularly people riding scooters. To avoid bias, frames from a particular camera were assigned to either the training or test sets, maintaining a ratio of about 70:30 for both the number of images and bounding boxes in each class. Experimental results show that YOLOv8 and YOLOR outperform on input sizes 640$\times$640 and 1280$\times$1280, respectively. The dataset will be available on GitHub with PASCAL VOC, MS COCO, and YOLO annotation formats. The FishEye8K benchmark will provide significant contributions to the fisheye video analytics and smart city applications.) <|cite_end|>}. These challenges underscore the need for dedicated strategies to address distortions and blurriness during image processing, and that is what this study seeks to do.
Inspired by these challenges, the overarching goal of this study is to develop a robust framework for traffic monitoring using data from fisheye lens cameras. To achieve this goal, we propose a \textbf{\textit{Low-Light Image Enhancement Framework}} to enhance image quality, resulting in improved object detection accuracy for fisheye images. The proposed image enhancement framework aims to \textit{improve image clarity and accuracy in object detection by addressing poor visibility at night and blurriness in video-generated images}. To achieve a robust objection detection model, the study incorporates the principle of \textbf{\textit{ensemble learning}}, drawing upon diverse state-of-the-art object detection models for this task. By using the ensemble learning technique, we mitigate the limitations associated with using individual models for object detection tasks. The models utilized in this study include collaborative detection transformer (Co-DETR), You Only Look Once (YOLOv8x), and YOLOv9.
To this end, the study's main contributions can be summarized as follows:
\begin{enumerate}
\item We propose a unique data preprocessing framework called the \textbf{\textit{Low-Light Image Enhancement Framework}}. This framework utilizes a transformer-based image enhancement technique, NAFNET <|cite_start|> (Reference: NAFSSR: Stereo Image Super-Resolution Using NAFNet: Stereo image super-resolution aims at enhancing the quality of super-resolution results by utilizing the complementary information provided by binocular systems. To obtain reasonable performance, most methods focus on finely designing modules, loss functions, and etc. to exploit information from another viewpoint. This has the side effect of increasing system complexity, making it difficult for researchers to evaluate new ideas and compare methods. This paper inherits a strong and simple image restoration model, NAFNet, for single-view feature extraction and extends it by adding cross attention modules to fuse features between views to adapt to binocular scenarios. The proposed baseline for stereo image super-resolution is noted as NAFSSR. Furthermore, training/testing strategies are proposed to fully exploit the performance of NAFSSR. Extensive experiments demonstrate the effectiveness of our method. In particular, NAFSSR outperforms the state-of-the-art methods on the KITTI 2012, KITTI 2015, Middlebury, and Flickr1024 datasets. With NAFSSR, we won 1st place in the NTIRE 2022 Stereo Image Super-resolution Challenge. Codes and models will be released at https://github.com/megvii-research/NAFNet.) <|cite_end|>, to improve image clarity by removing blurriness, and GSAD <|cite_start|> (Reference: Global Structure-Aware Diffusion Process for Low-Light Image Enhancement: This paper studies a diffusion-based framework to address the low-light image enhancement problem. To harness the capabilities of diffusion models, we delve into this intricate process and advocate for the regularization of its inherent ODE-trajectory. To be specific, inspired by the recent research that low curvature ODE-trajectory results in a stable and effective diffusion process, we formulate a curvature regularization term anchored in the intrinsic non-local structures of image data, i.e., global structure-aware regularization, which gradually facilitates the preservation of complicated details and the augmentation of contrast during the diffusion process. This incorporation mitigates the adverse effects of noise and artifacts resulting from the diffusion process, leading to a more precise and flexible enhancement. To additionally promote learning in challenging regions, we introduce an uncertainty-guided regularization technique, which wisely relaxes constraints on the most extreme regions of the image. Experimental evaluations reveal that the proposed diffusion-based framework, complemented by rank-informed regularization, attains distinguished performance in low-light enhancement. The outcomes indicate substantial advancements in image quality, noise suppression, and contrast amplification in comparison with state-of-the-art methods. We believe this innovative approach will stimulate further exploration and advancement in low-light image processing, with potential implications for other applications of diffusion models. The code is publicly available at https://github.com/jinnh/GSAD.) <|cite_end|> to convert nighttime images (low illumination) to daytime images (high illumination) to improve object detection accuracy in fisheye images during model training. 
To enhance object detection accuracy during inference, the study used a super-resolution postprocessing technique to increase image resolution, as well as an \textbf{\textit{ensemble model technique}} for robust detection; a minimal, illustrative sketch of this preprocessing-and-ensembling pipeline is shown after this list.
\item We performed a detailed comparative analysis of our proposed ensemble model against other state-of-the-art object detection models (Co-DETR, YOLOv8x, and YOLOv9e). By evaluating their performance in detecting objects from fisheye lens-captured images, we aimed to demonstrate the superiority of our proposed model over the current state-of-the-art models. In addition, we demonstrate that our pre- and post-processing techniques are effective in improving object detection.
\item Our proposed approach demonstrated its robustness in the 2024 AI City Challenge, Track 4, placing \textbf{5th out of 52 teams}.
\end{enumerate}
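The pipeline named in the first contribution above can be summarized, purely for illustration, by the sketch below. It assumes hypothetical callables for the enhancement stages (a GSAD-style low-light enhancer, a NAFNet-style deblurring wrapper, and a super-resolution upscaler) and for the three detectors; these names and the simple score-weighted box merge are our own placeholders, since the text does not specify the authors' exact interfaces or fusion rule, and a more elaborate scheme such as weighted boxes fusion could be substituted.
\begin{verbatim}
def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ensemble_detections(per_model_dets, iou_thr=0.6):
    """Greedy, score-weighted merge of (box, score, label) detections from
    several detectors; a simple stand-in for weighted boxes fusion."""
    dets = sorted((d for model_dets in per_model_dets for d in model_dets),
                  key=lambda d: d[1], reverse=True)
    fused = []
    for box, score, label in dets:
        for f in fused:
            if f["label"] == label and iou(f["box"], box) >= iou_thr:
                w = f["score"] + score
                f["box"] = [(fb * f["score"] + b * score) / w
                            for fb, b in zip(f["box"], box)]
                f["score"] = max(f["score"], score)
                break
        else:
            fused.append({"box": list(box), "score": score, "label": label})
    return fused

def detect_frame(frame, detectors, enhancers=()):
    """Assumed end-to-end flow: run the enhancement steps in order (e.g.,
    low-light enhancement, deblurring, super-resolution wrappers supplied by
    the caller), apply every detector, then fuse their outputs."""
    x = frame
    for enhance in enhancers:
        x = enhance(x)
    return ensemble_detections([detector(x) for detector in detectors])
\end{verbatim}
Under this sketch, Co-DETR, YOLOv8x, and YOLOv9 would each be wrapped as a callable that returns boxes, scores, and labels in a shared pixel coordinate frame before fusion.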
The experimental results of this study hold paramount importance in shaping the future of intelligent traffic monitoring systems, particularly those utilizing fisheye lens cameras. The proposed robust framework, anchored by the transformer-based image enhancement technique and enriched by ensemble learning, represents a significant stride towards overcoming the challenges posed by fisheye distortions in urban environments. These results offer valuable insights into the feasibility and real-world applicability of our approach, providing a tangible foundation for the advancement of traffic monitoring technology.
The remainder of the paper is structured as follows: In Section 2, we present a discussion of related work. Section 3 describes our methodological framework. In Section 4, we present our data and experimental findings, which demonstrate the efficacy of our proposed method in detecting objects in images captured by fisheye lens cameras. In Section 5, we discuss the implications of our findings and make suggestions for future research in this field.
Related Work
\label{sec:related}
Traffic surveillance has advanced significantly in recent years as a result of the convergence of computer vision, machine learning, and data analytics. The ability to accurately detect and track vehicles, pedestrians, and motorists in surveillance videos is critical for ensuring road safety, optimizing traffic flow, and improving overall transportation efficiency. Object detection, a core task in traffic surveillance systems, has progressed rapidly as new algorithms and techniques emerge. This literature review focuses on three families of object detection approaches: multiple/two-stage detectors, single-stage detectors, and transformer-based models.
\subsection{Multiple/Two-stage detectors}
A variety of studies have investigated the utilization of two-stage detection algorithms in transportation systems <|cite_start|> (Reference: Enhancing Traffic Safety with Parallel Dense Video Captioning for End-to-End Event Analysis: This paper introduces our solution for Track 2 in AI City Challenge 2024. The task aims to solve traffic safety description and analysis with the dataset of Woven Traffic Safety (WTS), a real-world Pedestrian-Centric Traffic Video Dataset for Fine-grained Spatial-Temporal Understanding. Our solution mainly focuses on the following points: 1) To solve dense video captioning, we leverage the framework of dense video captioning with parallel decoding (PDVC) to model visual-language sequences and generate dense caption by chapters for video. 2) Our work leverages CLIP to extract visual features to more efficiently perform cross-modality training between visual and textual representations. 3) We conduct domain-specific model adaptation to mitigate domain shift problem that poses recognition challenge in video understanding. 4) Moreover, we leverage BDD-5K captioned videos to conduct knowledge transfer for better understanding WTS videos and more accurate captioning. Our solution has yielded on the test set, achieving 6th place in the competition. The open source code will be available at https://github.com/UCF-SST-Lab/AICity2024CVPRW) <|cite_end|> <|cite_start|> (Reference: Traffic object detection and recognition based on the attentional visual field of drivers: Traffic object detection and recognition systems play an essential role in Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AV). In this research, we focus on four important classes of traffic objects: traffic signs, road vehicles, pedestrians, and traffic lights. We first review the major traditional machine learning and deep learning methods that have been used in the literature to detect and recognize these objects. We provide a vision-based framework that detects and recognizes traffic objects inside and outside the attentional visual area of drivers. This approach uses the driver 3D absolute coordinates of the gaze point obtained by the combined, cross-calibrated use of a front-view stereo imaging system and a non-contact 3D gaze tracker. A combination of multi-scale HOG-SVM and Faster R-CNN-based models are utilized in the detection stage. The recognition stage is performed with a ResNet-101 network to verify sets of generated hypotheses. We applied our approach on real data collected during drives in an urban environment with the RoadLAB instrumented vehicle. Our framework achieved 91% of correct object detections and provided promising results in the object recognition stage.) <|cite_end|> <|cite_start|> (Reference: A hybrid method of vehicle detection based on computer vision for intelligent transportation system: In this paper, a two-step approach for vehicles detection is proposed. The first step of approach is to approximate vehicles’ potential locations through searching for shadow area of vehicle low-part. In order to find these shadows, Haar-like feature with Adaboost was used to train a Haar detector offline and the relearning process with hard training samples is applied to increase detection rate. Based on the previous processing, ROI (Region of interest) + HOG + SVM algorithm is used for vehicle verification. At last, K-means approach is used to combine the similar detection results. 
The experimental results proved that our system could be used for real-time preceding vehicle detection robustly and accurately.) <|cite_end|>. Shirpour et al. developed a real-time traffic object detection system, achieving 91\% accuracy by employing a combination of multi-scale HOG-SVM and Faster R-CNN models <|cite_start|> (Reference: Traffic object detection and recognition based on the attentional visual field of drivers: Traffic object detection and recognition systems play an essential role in Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AV). In this research, we focus on four important classes of traffic objects: traffic signs, road vehicles, pedestrians, and traffic lights. We first review the major traditional machine learning and deep learning methods that have been used in the literature to detect and recognize these objects. We provide a vision-based framework that detects and recognizes traffic objects inside and outside the attentional visual area of drivers. This approach uses the driver 3D absolute coordinates of the gaze point obtained by the combined, cross-calibrated use of a front-view stereo imaging system and a non-contact 3D gaze tracker. A combination of multi-scale HOG-SVM and Faster R-CNN-based models are utilized in the detection stage. The recognition stage is performed with a ResNet-101 network to verify sets of generated hypotheses. We applied our approach on real data collected during drives in an urban environment with the RoadLAB instrumented vehicle. Our framework achieved 91% of correct object detections and provided promising results in the object recognition stage.) <|cite_end|>. Also, Nizar et al. utilized HOG and SVM for feature extraction and KLT for object counting, achieving an average accuracy of 95.15\% [18]. Wang \& Zhang proposed a hybrid method for vehicle detection, integrating shadow area search with ROI, HOG, and SVM algorithms, along with K-means clustering <|cite_start|> (Reference: A hybrid method of vehicle detection based on computer vision for intelligent transportation system: In this paper, a two-step approach for vehicles detection is proposed. The first step of approach is to approximate vehicles’ potential locations through searching for shadow area of vehicle low-part. In order to find these shadows, Haar-like feature with Adaboost was used to train a Haar detector offline and the relearning process with hard training samples is applied to increase detection rate. Based on the previous processing, ROI (Region of interest) + HOG + SVM algorithm is used for vehicle verification. At last, K-means approach is used to combine the similar detection results. The experimental results proved that our system could be used for real-time preceding vehicle detection robustly and accurately.) <|cite_end|>. Gavrila introduced a two-step approach for pedestrian detection, leveraging contour features and hierarchical template matching in the first step, and intensity features and pattern classification in the second step <|cite_start|> (Reference: Pedestrian Detection from a Moving Vehicle: ) <|cite_end|>. 
Additionally, Zhang proposed a vision-based method for vehicle detection, featuring an improved common region algorithm for background subtraction and a threshold segmentation method for object extraction, achieving enhanced accuracy and stability compared to existing algorithms <|cite_start|> (Reference: Boosting dense SIFT descriptors and shape contexts of face images for gender recognition: In this paper, we propose a novel face representation in which a face is represented in terms of dense Scale Invariant Feature Transform (d-SIFT) and shape contexts of the face image. The application of the representation in gender recognition has been investigated. There are four problems when applying the SIFT to facial gender recognition. (1) There may be only a few keypoints that can be found in a face image due to the missing texture and poorly illuminated faces; (2) The SIFT descriptors at the keypoints (we called it sparse SIFT) are distinctive whereas alternative descriptors at non-keypoints (e.g. grid) could cause negative impact on the accuracy; (3) Relatively larger image size is required to obtain sufficient keypoints support the matching and (4) The matching assumes that the faces are properly registered. This paper addresses these difficulties using a combination of SIFT descriptors and shape contexts of face images. Instead of extracting descriptors around interest points only, local feature descriptors are extracted at regular image grid points that allow for a dense description of the face images. In addition, the global shape contexts of the face images are fused with the dense SIFT to improve the accuracy. AdaBoost is adopted to select features and form a strong classifier. The proposed approach is then applied to solve the problem of gender recognition. The experimental results on a large set of faces showed that the proposed method can achieve high accuracies even for faces that are not aligned.) <|cite_end|>. Collectively, these studies underscore the potential of two-stage detection algorithms in enhancing the accuracy and robustness of object detection in transportation systems.
\subsection{Single-stage detectors}
Recent advancements in object detection have witnessed the emergence of single-stage detection algorithms, offering simpler and faster alternatives to traditional two-stage methods. Ye et al. introduced the feature-enhanced single-shot detector (FE-SSD) for railway traffic, significantly improving feature discrimination and robustness <|cite_start|> (Reference: Autonomous railway traffic object detection using feature-enhanced single-shot detector: With the high growth rates of railway transportation, it is extremely important to detect railway obstacles ahead of the train to ensure safety. Manual and traditional feature-extraction methods have been utilized in this scenario. There are also deep learning-based railway object detection approaches. However, in the case of a complex railway scene, these object detection approaches are either inefficient or have insufficient accuracy, particularly for small objects. To address this issue, we propose a feature-enhanced single-shot detector (FE-SSD). The proposed method inherits a prior detection module of RON and a feature transfer block of FB-Net. It also employs a novel receptive field-enhancement module. Through the integration of these three modules, the feature discrimination and robustness are significantly enhanced. Experimental results for a railway traffic dataset built by our team indicated that the proposed approach is superior to other SSD-derived models, particularly for small-object detection, while achieving real-time performance close to that of the SSD. The proposed method achieved a mean average precision of 0.895 and a frame rate of 38 frames per second on a railway traffic dataset with an input size of $320\times320$ pixels. The experimental results indicate that the proposed method can be used for real-world railway object detection.) <|cite_end|>. Alvarez et al. proposed a monocular target detection system for transport infrastructures, incorporating vanishing point extraction for automatic camera calibration and a background subtraction method for object segmentation [7]. Stuparu et al. presented a one-stage object detection model for vehicle detection in overhead satellite images, achieving high accuracy and low detection time <|cite_start|> (Reference: Vehicle detection in overhead satellite images using a one-stage object detection model: In order to improve the traffic in large cities and to avoid congestion, advanced methods of detecting and predicting vehicle behaviour are needed. Such methods require complex information regarding the number of vehicles on the roads, their positions, directions, etc. One way to obtain this information is by analyzing overhead images collected by satellites or drones, and extracting information from them through intelligent machine learning models. Thus, in this paper we propose and present a one-stage object detection model for finding vehicles in satellite images using the RetinaNet architecture and the Cars Overhead With Context dataset. By analyzing the results obtained by the proposed model, we show that it has a very good vehicle detection accuracy and a very low detection time, which shows that it can be employed to successfully extract data from real-time satellite or drone data.) <|cite_end|>.
Qiu et al., further enhanced vehicle detection in intelligent transportation systems with a deep learning-based algorithm, achieving a 99.82\% recognition rate in real traffic scenes <|cite_start|> (Reference: Deep learning-based algorithm for vehicle detection in intelligent transportation systems: ) <|cite_end|>. Redmon et al. introduced YOLO, a novel approach to object detection that significantly revolutionizes single-stage object detection frameworks <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|> <|cite_start|> (Reference: Real-time Multi-Class Helmet Violation Detection Using Few-Shot Data Sampling Technique and YOLOv8: Traffic safety is a major global concern. Helmet usage is a key factor in preventing head injuries and fatalities caused by motorcycle accidents. However, helmet usage violations continue to be a significant problem. To identify such violations, automatic helmet detection systems have been proposed and implemented using computer vision techniques. Real-time implementation of such systems is crucial for traffic surveillance and enforcement, however, most of these systems are not real-time. This study proposes a robust real-time helmet violation detection system. The proposed system utilizes a unique data processing strategy, referred to as few-shot data sampling, to develop a robust model with fewer annotations, and a single-stage object detection model, YOLOv8 (You Only Look Once Version 8), for detecting helmet violations in real-time from video frames. Our proposed method won 7th place in the 2023 AI City Challenge, Track 5, with an mAP score of 0.5861 on experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system.) <|cite_end|> <|cite_start|> (Reference: A Region-Based Deep Learning Approach to Automated Retail Checkout: Automating the product checkout process at conventional retail stores is a task poised to have large impacts on society generally speaking. Towards this end, reliable deep learning models that enable automated product counting for fast customer checkout can make this goal a reality. In this work, we propose a novel, region-based deep learning approach to automate product counting using a customized YOLOv5 object detection pipeline and the DeepSORT algorithm. 
Our results on challenging, real-world test videos demonstrate that our method can generalize its predictions to a sufficient level of accuracy and with a fast enough runtime to warrant deployment to real-world commercial settings. Our proposed method won 4th place in the 2022 AI City Challenge, Track 4, with an F1 score of 0.4400 on experimental validation data.) <|cite_end|> <|cite_start|> (Reference: Real-Time Helmet Violation Detection Using YOLOv5 and Ensemble Learning: The proper enforcement of motorcycle helmet regulations is crucial for ensuring the safety of motorbike passengers and riders, as roadway cyclists and passengers are not likely to abide by these regulations if no proper enforcement systems are instituted. This paper presents the development and evaluation of a real-time YOLOv5 Deep Learning (DL) model for detecting riders and passengers on motorbikes, identifying whether the detected person is wearing a helmet. We trained the model on 100 videos recorded at 10 fps, each for 20 seconds. Our study demonstrated the applicability of DL models to accurately detect helmet regulation violators even in challenging lighting and weather conditions. We employed several data augmentation techniques in the study to ensure the training data is diverse enough to help build a robust model. The proposed model was tested on 100 test videos and produced an mAP score of 0.5267, ranking 11th on the AI City Track 5 public leaderboard. The use of deep learning techniques for image classification tasks, such as identifying helmet-wearing riders, has enormous potential for improving road safety. The study shows the potential of deep learning models for application in smart cities and enforcing traffic regulations and can be deployed in real-time for city-wide monitoring.) <|cite_end|> <|cite_start|> (Reference: Real-Time Helmet Violation Detection in AI City Challenge 2023 with Genetic Algorithm-Enhanced YOLOv5: This research focuses on real-time surveillance systems as a means for tackling the issue of non-compliance with helmet regulations, a practice that considerably amplifies the risk for motorcycle drivers or riders. Despite the well-established advantages of helmet usage, achieving widespread compliance remains challenging due to diverse contributing factors. To effectively address this concern, real-time monitoring and enforcement of helmet laws have been proposed as a plausible solution. However, previous attempts at real-time helmet violation detection have been hindered by their limited ability to operate in real-time. To overcome this limitation, the current paper introduces a novel real-time helmet violation detection system that utilizes the YOLOv5 single-stage object detection model. This model is trained on the 2023 NVIDIA AI City Challenge 2023 Track 5 dataset. The optimal hyperparameters for training the model are determined using genetic algorithms. Additionally, data augmentation and various sampling techniques are implemented to enhance the model's performance. The efficacy of the models is evaluated using precision, recall, and mean Average Precision (mAP) metrics. The results demonstrate impressive precision, recall, and mAP scores of 0.848, 0.599, and 0.641, respectively for the training data. Furthermore, the model achieves notable mAP score of 0.6667 for the test datasets, leading to a commendable 4th place rank in the public leaderboard. 
This innovative approach represents a notable breakthrough in the field and holds immense potential to substantially enhance motorcycle safety. By enabling real-time monitoring and enforcement capabilities, this system has the capacity to contribute towards increased compliance with helmet laws, thereby effectively reducing the risks faced by motorcycle riders and passengers.) <|cite_end|> <|cite_start|> (Reference: DeepSegmenter: Temporal Action Localization for Detecting Anomalies in Untrimmed Naturalistic Driving Videos: Identifying unusual driving behaviors exhibited by drivers during driving is essential for understanding driver behavior and the underlying causes of crashes. Previous studies have primarily approached this problem as a classification task, assuming that naturalistic driving videos come discretized. However, both activity segmentation and classification are required for this task due to the continuous nature of naturalistic driving videos. The current study therefore departs from conventional approaches and introduces a novel methodological framework, DeepSegmenter, that simultaneously performs activity segmentation and classification in a single framework. The proposed framework consists of four major modules namely Data Module, Activity Segmentation Module, Classification Module and Postprocessing Module. Our proposed method won 8th place in the 2023 AI City Challenge, Track 3, with an activity overlap score of 0.5426 on experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system.) <|cite_end|>. YOLO utilizes a single neural network to directly forecast bounding boxes and class probabilities from complete images in one assessment, enabling end-to-end optimization for detection efficacy.
\subsection{Transformer-based detectors}
Transformers have recently emerged as a significant advancement in computer vision, particularly in the realm of object detection. These models have introduced end-to-end learning systems and have been integrated into various architectures to enhance detection performance. Recent advancements in object detection have witnessed the emergence of transformer-based algorithms, which have demonstrated promising results in enhancing both accuracy and convergence time <|cite_start|> (Reference: A study on transformer-based object detection: This paper focuses on transformers based end-to-end object detection methods. End to end object detection is a new paradigm that has got attention in recent times. It does not require complex hand-engineered components such as non-max suppression to detect objects inside an image. Various methods are proposed to date to enhance fully end-to-end object detectors, most of them are based on the attention mechanism. In this work, we analyze some algorithms which involve transformers for the purpose of object detection. We discuss end-to-end models in which we have focused on Adaptive clustering-based transformers which solve attention encoder redundancy, Deformable Detection Transformers in which the attention module attends a limited collection of key sampling points, Unsupervisedly pre-trained Detection Transformers which are pre-trained on random query patches from the given image to improve accuracy and finally the Transformer-based Set Prediction using FCOS. These enhanced models not only improve the mean average precision of the model but also improves the total convergence time.) <|cite_end|>. A comprehensive review of object detection algorithms, encompassing transformer-based detectors, has highlighted substantial progress in the field, particularly in the era of deep learning <|cite_start|> (Reference: An Extendable, Efficient and Effective Transformer-based Object Detector: Transformers have been widely used in numerous vision problems especially for visual recognition and detection. Detection transformers are the first fully end-to-end learning systems for object detection, while vision transformers are the first fully transformer-based architecture for image classification. In this paper, we integrate Vision and Detection Transformers (ViDT) to construct an effective and efficient object detector. ViDT introduces a reconfigured attention module to extend the recent Swin Transformer to be a standalone object detector, followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques essential to boost the detection performance without much increase in computational load. In addition, we extend it to ViDT+ to support joint-task learning for object detection and instance segmentation. Specifically, we attach an efficient multi-scale feature fusion layer and utilize two more auxiliary training losses, IoU-aware loss and token labeling loss. Extensive evaluation results on the Microsoft COCO benchmark dataset demonstrate that ViDT obtains the best AP and latency trade-off among existing fully transformer-based object detectors, and its extended ViDT+ achieves 53.2AP owing to its high scalability for large models. The source code and trained models are available at https://github.com/naver-ai/vidt.) <|cite_end|>. 
The integration of Vision and Detection Transformers (ViDT) has further enhanced the efficiency and effectiveness of object detection, with ViDT+ achieving high scalability for large models <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|>. Carion et al present a novel approach named DEtection TRansformer (DETR), which conceptualizes object detection as a direct set prediction problem. DETR simplifies the detection pipeline by eliminating the necessity for hand-designed components such as non-maximum suppression or anchor generation <|cite_start|> (Reference: Object Detection Using Scale Invariant Feature Transform: ) <|cite_end|>. Shou et al used the MS Transformer model to enhance object detection in medical images by addressing challenges such as low resolution, high noise, and small object size <|cite_start|> (Reference: Object detection in medical images based on hierarchical transformer and mask mechanism: The object detection task in the medical field is challenging in terms of classification and regression. Due to its crucial applications in computer-aided diagnosis and computer-aided detection techniques, an increasing number of researchers are transferring the object detection techniques to the medical field. However, in existing work on object detection, researchers do not consider the low resolution of medical images, the high amount of noise, and the small size of the objects to be detected. Based on this, this paper proposes a new algorithmic model called the MS Transformer, where a self-supervised learning approach is used to perform a random mask on the input image to reconstruct the input features, learn a richer feature vector, and filter out excessive noise. To focus the model on the small objects that are being detected, the hierarchical transformer model is introduced in this paper, and a sliding window with a local self-attention mechanism is used to give a higher attention score to the small objects to be detected. Finally, a single-stage object detection framework is used to predict the sequence of sets at the location of the bounding box and the class of objects to be detected. On the DeepLesion and BCDD benchmark dataset, the model proposed in this paper achieves better performance improvement on multiple evaluation metric categories.) <|cite_end|>. 
It surpasses existing methods on benchmark datasets like DeepLesion and BCDD, demonstrating superior performance in medical image analysis. The ViDT framework is further extended to ViDT+ to facilitate joint-task learning for object detection and instance segmentation <|cite_start|> (Reference: Object detection in medical images based on hierarchical transformer and mask mechanism: The object detection task in the medical field is challenging in terms of classification and regression. Due to its crucial applications in computer-aided diagnosis and computer-aided detection techniques, an increasing number of researchers are transferring the object detection techniques to the medical field. However, in existing work on object detection, researchers do not consider the low resolution of medical images, the high amount of noise, and the small size of the objects to be detected. Based on this, this paper proposes a new algorithmic model called the MS Transformer, where a self-supervised learning approach is used to perform a random mask on the input image to reconstruct the input features, learn a richer feature vector, and filter out excessive noise. To focus the model on the small objects that are being detected, the hierarchical transformer model is introduced in this paper, and a sliding window with a local self-attention mechanism is used to give a higher attention score to the small objects to be detected. Finally, a single-stage object detection framework is used to predict the sequence of sets at the location of the bounding box and the class of objects to be detected. On the DeepLesion and BCDD benchmark dataset, the model proposed in this paper achieves better performance improvement on multiple evaluation metric categories.) <|cite_end|>. <|paper_end|>
"<|reference_start|> Traffic Congestion Detection from Surveillance Videos using Deep Learning: Countless cameras, both public and private, have been installed in recent years for the objectives of surveillance, the monitoring of anomalous human activities, and traffic surveillance. Numerous worrisome and aberrant actions, such as theft, aggression, and accidents, make it difficult to notice and recognise such behaviour in a real-world setting. The topic of this study is car wrecks as depicted in online videos of traffic. Modern traffic monitoring and surveillance rely heavily on video traffic surveillance cameras (VTSS). Consequences of a rapidly expanding human population include a higher frequency of accidental injuries. The VTSS is employed to identify unusual occurrences on various roads and highways, such as traffic congestion and car accidents. When accidents happen on lengthy roadways or in remote areas, victims are often powerless and some don't make it. The purpose of this study is to provide a method for automatically identifying incidents in surveillance footage. Convolutional-neural-networks (CNNs), a specific deep learning approach developed to cope with grid-like data, have been shown to be useful in image and video processing, according to a study of the relevant literature. This study use a rolling prediction method and convolutional neural networks (CNNs) to detect accidents in VTSS footage. A dataset of anomalous photographs, called the Vehicle Accident Image Dataset (VAID), was created and used in the training of the CNN model. The proposed method was put through its paces by analysing data gathered from running the trained CNN model on a number of different films. This study's findings demonstrate a 93% success rate in identifying traffic accident incidents in films from traffic surveillance systems. <|reference_end|>",
"<|reference_start|> Video Anomaly Detection for Pedestrian Surveillance: <|reference_end|>",
"<|reference_start|> Traffic object detection and recognition based on the attentional visual field of drivers: Traffic object detection and recognition systems play an essential role in Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AV). In this research, we focus on four important classes of traffic objects: traffic signs, road vehicles, pedestrians, and traffic lights. We first review the major traditional machine learning and deep learning methods that have been used in the literature to detect and recognize these objects. We provide a vision-based framework that detects and recognizes traffic objects inside and outside the attentional visual area of drivers. This approach uses the driver 3D absolute coordinates of the gaze point obtained by the combined, cross-calibrated use of a front-view stereo imaging system and a non-contact 3D gaze tracker. A combination of multi-scale HOG-SVM and Faster R-CNN-based models are utilized in the detection stage. The recognition stage is performed with a ResNet-101 network to verify sets of generated hypotheses. We applied our approach on real data collected during drives in an urban environment with the RoadLAB instrumented vehicle. Our framework achieved 91% of correct object detections and provided promising results in the object recognition stage. <|reference_end|>",
"<|reference_start|> An Extendable, Efficient and Effective Transformer-based Object Detector: Transformers have been widely used in numerous vision problems especially for visual recognition and detection. Detection transformers are the first fully end-to-end learning systems for object detection, while vision transformers are the first fully transformer-based architecture for image classification. In this paper, we integrate Vision and Detection Transformers (ViDT) to construct an effective and efficient object detector. ViDT introduces a reconfigured attention module to extend the recent Swin Transformer to be a standalone object detector, followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques essential to boost the detection performance without much increase in computational load. In addition, we extend it to ViDT+ to support joint-task learning for object detection and instance segmentation. Specifically, we attach an efficient multi-scale feature fusion layer and utilize two more auxiliary training losses, IoU-aware loss and token labeling loss. Extensive evaluation results on the Microsoft COCO benchmark dataset demonstrate that ViDT obtains the best AP and latency trade-off among existing fully transformer-based object detectors, and its extended ViDT+ achieves 53.2AP owing to its high scalability for large models. The source code and trained models are available at https://github.com/naver-ai/vidt. <|reference_end|>"
] | [
10,
11,
22,
36
] | {"<|multi_cite_1_1|>": "ss-1862880", "<|multi_cite_1_2|>": "ss-1514297", "<|multi_cite_1_3|>": "arxiv-413975", "<|multi_cite_1_4|>": "ss-1862881", "<|multi_cite_2_1|>": "arxiv-334433", "<|multi_cite_2_2|>": "ss-1862882", "<|multi_cite_2_3|>": "ss-1862883", "<|multi_cite_2_4|>": "ss-687601", "<|cite_3|>": "ss-1862884", "<|multi_cite_4_1|>": "ss-1287059", "<|multi_cite_4_2|>": "ss-1862885", "<|multi_cite_4_3|>": "ss-1862886", "<|cite_5|>": "arxiv-509877", "<|cite_6|>": "arxiv-509877", "<|cite_7|>": "ss-1862887", "<|cite_8|>": "arxiv-509877", "<|cite_9|>": "arxiv-509877", "<|cite_10|>": "arxiv-413975", "<|cite_11|>": "arxiv-553032", "<|multi_cite_12_1|>": "arxiv-605718", "<|multi_cite_12_2|>": "ss-2514490", "<|multi_cite_12_3|>": "ss-1862888", "<|cite_13|>": "ss-2514490", "<|cite_14|>": "ss-1862888", "<|cite_15|>": "ss-2015144", "<|cite_16|>": "ss-1862889", "<|cite_17|>": "ss-1862890", "<|cite_18|>": "ss-1862891", "<|cite_19|>": "ss-2484895", "<|multi_cite_20_1|>": "arxiv-79041", "<|multi_cite_20_2|>": "arxiv-497613", "<|multi_cite_20_3|>": "arxiv-413915", "<|multi_cite_20_4|>": "arxiv-498044", "<|multi_cite_20_5|>": "arxiv-498046", "<|multi_cite_20_6|>": "arxiv-497616", "<|cite_21|>": "ss-1862892", "<|cite_22|>": "arxiv-413638", "<|cite_23|>": "arxiv-79041", "<|cite_24|>": "ss-1862893", "<|cite_25|>": "ss-743150", "<|cite_26|>": "ss-743150"} |
2307.09989 | <|paper_start|> Title: UniMatch: A Unified User-Item Matching Framework for the Multi-purpose Merchant Marketing
Abstract: UniMatch: A Unified User-Item Matching Framework for the Multi-purpose Merchant Marketing: When doing private domain marketing with cloud services, merchants usually have to purchase different machine learning models for their multiple marketing purposes, leading to a very high cost. We present a unified user-item matching framework to simultaneously conduct item recommendation and user targeting with just one model. We empirically demonstrate that this concurrent modeling is viable by modeling the user-item interaction matrix with the multinomial distribution, and propose a bidirectional bias-corrected NCE loss for the implementation. The proposed loss function guides the model to learn the user-item joint probability $p(u,i)$ instead of the conditional probability $p(i|u)$ or $p(u|i)$ by correcting both the users' and the items' biases caused by in-batch negative sampling. In addition, our framework is model-agnostic, enabling flexible adaptation of different model architectures. Extensive experiments demonstrate that our framework yields significant performance gains over state-of-the-art methods, with greatly reduced cost in computing resources and daily maintenance.
Introduction
\label{sec:intro}
Nowadays, merchants commonly sell their products through multiple channels: public platforms such as Amazon and Alibaba, and private channels such as their own websites, offline shops, and exclusive customer groups on social media platforms like WeChat.
Marketing on those public platforms, which are managed by the e-commerce companies, has reached a limit in recent years.
As a result, merchants are paying more attention to operating their businesses via the private channels, \ie, conducting private domain marketing.
In order to manage their businesses more effectively, merchants utilize cloud services such as Amazon Web Services and Alibaba Cloud to link all their private channels.
These cloud services not only manage data for merchants, but also provide machine learning techniques for intelligent marketing.
There are two common marketing directions for merchants: item recommendation (IR) <|cite_start|> (Reference: Introduction to Recommender Systems Handbook: ) <|cite_end|> and user targeting (UT).
To be more specific, merchants try to keep their high-value users active and loyal by periodically sending them messages or emails with recommended items.
Meanwhile, merchants are always eager to discover potential buyers for certain items, \eg, new releases or popular products. They can then send personalized promotion content to those targeted users.
Owing to these machine learning techniques, both item recommendation and user targeting contribute significantly to merchants' profits.
However, merchants have to purchase a handful of machine learning models for different marketing purposes.
First, item recommendation usually requires one model.
Then, user targeting usually requires more than one model, because practitioners need to create multiple targeting lists according to different promotion subjects, \eg, popular products or bundles of items.
It takes great effort to conduct feature engineering, model training and inference for each model.
These practices push up the cost dramatically.
This paper proposes a unified user-item matching framework, named \emph{UniMatch}, which serves both item recommendation and user targeting with only one model.
The previous recommendation algorithms utilize the conditional probability $p(i|u)$ as the modeling objective <|cite_start|> (Reference: Deep neural Networks for Youtube Recommendations: YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.) <|cite_end|> <|cite_start|> (Reference: Multi-Interest Network with Dynamic Routing for Recommendation at Tmall: Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items. The matching stage retrieves candidate items relevant to user interests, while the ranking stage sorts candidate items by user interests. Thus, the most critical ability is to model and represent user interests for either stage. Most of the existing deep learning-based models represent one user as a single vector which is insufficient to capture the varying nature of user's interests. In this paper, we approach this problem from a different view, to represent one user with multiple vectors encoding the different aspects of the user's interests. We propose the Multi-Interest Network with Dynamic routing (MIND) for dealing with user's diverse interests in the matching stage. Specifically, we design a multi-interest extractor layer based on capsule routing mechanism, which is applicable for clustering historical behaviors and extracting diverse interests. Furthermore, we develop a technique named label-aware attention to help learn a user representation with multiple vectors. Through extensive experiments on several public benchmarks and one large-scale industrial dataset from Tmall, we demonstrate that MIND can achieve superior performance than state-of-the-art methods for recommendation. Currently, MIND has been deployed for handling major online traffic at the homepage on Mobile Tmall App.) <|cite_end|>, while the user targeting models are commonly optimized via the objective $p(u|i)$.
In our UniMatch framework, the modeling objective is the joint probability $p(u,i)$, which is implemented with a bidirectional bias-corrected NCE loss, named \emph{bbcNCE}.
When applied to item recommendation, $p(u,i)=p(i|u)p(u)$ will produce an item list similar to that of $p(i|u)$ for a given user, since $p(u)$ is a constant with respect to the candidate items.
The same logic holds for the user targeting as well.
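To make this equivalence explicit, note that for any learned joint model $\hat{p}(u,i)$ and a fixed user $u$, the marginal $\hat{p}(u)$ is a positive constant with respect to the candidate items, so ranking items by the joint is identical to ranking them by the induced conditional (a short illustrative derivation, not a formal result stated in this paper):
\[
\arg\max_{i}\ \hat{p}(u,i) \;=\; \arg\max_{i}\ \hat{p}(i|u)\,\hat{p}(u) \;=\; \arg\max_{i}\ \hat{p}(i|u),
\]
and symmetrically, for a fixed item $i$, $\arg\max_{u}\hat{p}(u,i)=\arg\max_{u}\hat{p}(u|i)$. This is why a single model trained towards $p(u,i)$ can serve both retrieval directions.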
Thus, our framework is able to reduce the cost of computing resources and data storage, and relieve the burden of daily maintenance as well.
Different from online recommendation on e-commerce platforms, merchants usually apply these intelligent marketing models less frequently when doing private domain marketing.
For instance, they send promotion emails or personalized messages weekly or at even longer intervals.
To adapt to this specific scenario, both the potential user lists and the recommended item lists are produced under a next-$n$-day prediction setting.
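As a minimal sketch of how such next-$n$-day training labels can be derived from a raw interaction log (our own illustration; the pandas-based implementation and the column names \texttt{user\_id}, \texttt{item\_id} and \texttt{ts} are assumptions rather than the production pipeline):
\begin{verbatim}
import pandas as pd

def build_next_n_day_labels(interactions, cutoff, n_days=7):
    # interactions: DataFrame with columns user_id, item_id, ts (timestamps)
    cutoff = pd.Timestamp(cutoff)
    horizon = cutoff + pd.Timedelta(days=n_days)
    # user/item features are built from the history before the cutoff date
    history = interactions[interactions["ts"] < cutoff]
    # positive (u, i) pairs are interactions inside the next-n-day window
    window = interactions[(interactions["ts"] >= cutoff) &
                          (interactions["ts"] < horizon)]
    positives = window[["user_id", "item_id"]].drop_duplicates()
    return history, positives
\end{verbatim}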
Conventionally, both the item recommendation and user targeting tasks are solved by modeling the Bernoulli or multinomial distribution on the user-item interaction matrix.
In this paper, we first theoretically prove that modeling with the Bernoulli and the multinomial distributions is equivalent, since they converge to the same optima in practice.
Then, we uncover that modeling with the multinomial distribution has better efficiency in terms of data preparation and model convergence.
Following this discovery, we propose a bidirectional NCE loss with bias correction to model the user-item joint probability $p(u,i)$.
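As a minimal sketch of how such a bidirectional, bias-corrected in-batch loss can be implemented (our own PyTorch illustration; the exact definition of bbcNCE is given later in the paper, and using the log sampling probabilities of the in-batch items and users as the correction terms is an assumption of this sketch):
\begin{verbatim}
import torch
import torch.nn.functional as F

def bidirectional_bias_corrected_nce(user_emb, item_emb,
                                     log_q_item, log_q_user, tau=0.07):
    # user_emb, item_emb: [B, d] embeddings of the B positive (u, i) pairs
    # log_q_item, log_q_user: [B] log in-batch sampling probabilities
    logits = user_emb @ item_emb.t() / tau          # [B, B] in-batch pair scores
    labels = torch.arange(logits.size(0), device=logits.device)
    # u -> i direction: correct the item-side sampling bias
    loss_u2i = F.cross_entropy(logits - log_q_item.unsqueeze(0), labels)
    # i -> u direction: correct the user-side sampling bias
    loss_i2u = F.cross_entropy(logits.t() - log_q_user.unsqueeze(0), labels)
    return 0.5 * (loss_u2i + loss_i2u)
\end{verbatim}
Applying the correction in both directions, rather than only on the sampled items, is the intuition behind steering the learned scores towards the joint probability $p(u,i)$ instead of either conditional.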
Additionally, our framework adopts a classical two-tower architecture, which enables flexible utilization of different models (see the illustrative retrieval sketch after the contribution list below).
Our framework has been implemented in the Alibaba cloud product, \emph{QuickAudience(QA)}\footnote{\url{https://help.aliyun.com/document_detail/136924.html}}, for the intelligent marketing of merchants. Our contributions are summarized as follows:
\begin{itemize}
\item We present a unified user-item matching framework, \emph{UniMatch}, which trains only one model to serve both the item recommendation and user targeting simultaneously. To the best of our knowledge, this is the first work on the topic.
\item We theoretically prove the equivalence between modeling the user-item interaction matrix with the Bernoulli and multinomial distributions, and empirically demonstrate that modeling with the multinomial distribution yields more robust results with far fewer resources.
\item We propose a bidirectional bias-corrected NCE loss, \emph{bbcNCE}, which in theory makes the joint probability of $u$ and $i$ the learning objective of model training. We also empirically show that the bbcNCE loss guides the model to learn the joint distribution.
\item Extensive experiments on two public datasets and two real-world datasets demonstrate that the proposed framework consistently yields improved performance, in comparison with the state-of-the-art methods on both item recommendation and user targeting tasks. In addition, our framework saves up to 94\%+ of the total cost compared to previous practices.
\end{itemize}
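To illustrate how a single two-tower model serves both tasks at inference time, consider the following minimal retrieval sketch (our own illustration; in production, an approximate nearest-neighbour index would replace the brute-force score matrix, and $k$ is assumed not to exceed the number of users or items):
\begin{verbatim}
import torch

@torch.no_grad()
def retrieve(user_emb, item_emb, k=100):
    # user_emb: [U, d], item_emb: [I, d], produced by the same two-tower model
    scores = user_emb @ item_emb.t()                # [U, I] matching scores
    topk_items = scores.topk(k, dim=1).indices      # item recommendation: top-k items per user
    topk_users = scores.topk(k, dim=0).indices.t()  # user targeting: top-k users per item
    return topk_items, topk_users
\end{verbatim}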
Related Work
\label{sec:related}
\subsection{Item Recommendation}
Item recommendation (IR) has been studied in both academia and industry for decades. Collaborative filtering (CF) and its variants were widely adopted in the early years <|cite_start|> (Reference: A Survey of Collaborative Filtering Techniques: As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area.) <|cite_end|>.
Later, its descendant, matrix factorization (MF), was proposed to solve the problem more elegantly and with higher accuracy.
Then, the Probabilistic Matrix Factorization (PMF) <|cite_start|> (Reference: Probabilistic {Matrix} {Factorization}: Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system.) <|cite_end|> builds a solid theoretic foundation for the MF models based on the probability theory, \ie, PMF models $s$ with Gaussian distributions. Later, the Bernoulli distribution is shown to be superior in modeling $s$ <|cite_start|> (Reference: Neural Collaborative Filtering: In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.) <|cite_end|>.
In recent years, the neural networks have become a significant component for the recommendation algorithms <|cite_start|> (Reference: Neural Collaborative Filtering: In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.) <|cite_end|> <|cite_start|> (Reference: {Neural collaborative filtering vs. matrix factorization revisited: Embedding based models have been the state of the art in collaborative filtering for over a decade. Traditionally, the dot product or higher order equivalents have been used to combine two or more embeddings, e.g., most notably in matrix factorization. In recent years, it was suggested to replace the dot product with a learned similarity e.g. using a multilayer perceptron (MLP). This approach is often referred to as neural collaborative filtering (NCF). In this work, we revisit the experiments of the NCF paper that popularized learned similarities using MLPs. First, we show that with a proper hyperparameter selection, a simple dot product substantially outperforms the proposed learned similarities. Second, while a MLP can in theory approximate any function, we show that it is non-trivial to learn a dot product with an MLP. Finally, we discuss practical issues that arise when applying MLP based similarities and show that MLPs are too costly to use for item recommendation in production environments while dot products allow to apply very efficient retrieval algorithms. We conclude that MLPs should be used with care as embedding combiner and that dot products might be a better default choice.) <|cite_end|>, and contributed greatly for the recommender systems in industry.
There are two common stages in a large-scale industrial recommendation application, \ie, the candidate generation stage and the ranking stage.
The former stage is usually formulated as a multi-class classification problem to quickly select a small set of item candidates from a vast number of items <|cite_start|> (Reference: Deep neural Networks for Youtube Recommendations: YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.) <|cite_end|> <|cite_start|> (Reference: Multi-Interest Network with Dynamic Routing for Recommendation at Tmall: Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items. The matching stage retrieves candidate items relevant to user interests, while the ranking stage sorts candidate items by user interests. Thus, the most critical ability is to model and represent user interests for either stage. Most of the existing deep learning-based models represent one user as a single vector which is insufficient to capture the varying nature of user's interests. In this paper, we approach this problem from a different view, to represent one user with multiple vectors encoding the different aspects of the user's interests. We propose the Multi-Interest Network with Dynamic routing (MIND) for dealing with user's diverse interests in the matching stage. Specifically, we design a multi-interest extractor layer based on capsule routing mechanism, which is applicable for clustering historical behaviors and extracting diverse interests. Furthermore, we develop a technique named label-aware attention to help learn a user representation with multiple vectors. Through extensive experiments on several public benchmarks and one large-scale industrial dataset from Tmall, we demonstrate that MIND can achieve superior performance than state-of-the-art methods for recommendation. Currently, MIND has been deployed for handling major online traffic at the homepage on Mobile Tmall App.) <|cite_end|> <|cite_start|> (Reference: Controllable Multi-Interest Framework for Recommendation: Recently, neural networks have been widely used in e-commerce recommender systems, owing to the rapid development of deep learning. We formalize the recommender system as a sequential recommendation problem, intending to predict the next items that the user might be interacted with. Recent works usually give an overall embedding from a user's behavior sequence. However, a unified user embedding cannot reflect the user's multiple interests during a period. In this paper, we propose a novel controllable multi-interest framework for the sequential recommendation, called ComiRec. Our multi-interest module captures multiple interests from user behavior sequences, which can be exploited for retrieving candidate items from the large-scale item pool. These items are then fed into an aggregation module to obtain the overall recommendation. The aggregation module leverages a controllable factor to balance the recommendation accuracy and diversity. 
We conduct experiments for the sequential recommendation on two real-world datasets, Amazon and Taobao. Experimental results demonstrate that our framework achieves significant improvements over state-of-the-art models. Our framework has also been successfully deployed on the offline Alibaba distributed cloud platform.) <|cite_end|>.
In the ranking stage, the problem is formulated as a binary classification to rank all the selected candidates <|cite_start|> (Reference: Wide & Deep Learning for Recommender Systems: Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.) <|cite_end|> <|cite_start|> (Reference: Deep Interest Network for Click-Through Rate Prediction: Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.) <|cite_end|> <|cite_start|> (Reference: Perceive Your Users in Depth: Learning Universal User Representations from Multiple E-commerce Tasks: Tasks such as search and recommendation have become increas- ingly important for E-commerce to deal with the information over- load problem. 
To meet the diverse needs of different users, personalization plays an important role. In many large portals such as Taobao and Amazon, there are a bunch of different types of search and recommendation tasks operating simultaneously for personalization. However, most of current techniques address each task separately. This is suboptimal as no information about users shared across different tasks. In this work, we propose to learn universal user representations across multiple tasks for more effective personalization. In particular, user behavior sequences (e.g., click, bookmark or purchase of products) are modeled by LSTM and attention mechanism by integrating all the corresponding content, behavior and temporal information. User representations are shared and learned in an end-to-end setting across multiple tasks. Benefiting from better information utilization of multiple tasks, the user representations are more effective to reflect their interests and are more general to be transferred to new tasks. We refer this work as Deep User Perception Network (DUPN) and conduct an extensive set of offline and online experiments. Across all tested five different tasks, our DUPN consistently achieves better results by giving more effective user representations. Moreover, we deploy DUPN in large scale operational tasks in Taobao. Detailed implementations, e.g., incremental model updating, are also provided to address the practical issues for the real world applications.) <|cite_end|>.
Although not directly declared in many of these research, the underlying probability theory for the candidate generation stage is to model $\vs_r$ with the multinomial distribution <|cite_start|> (Reference: Variational Autoencoders for Collaborative Filtering: We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research.We introduce a generative model with multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.) <|cite_end|>, and is to model $s$ with the Bernoulli distribution for the ranking stage <|cite_start|> (Reference: Neural Collaborative Filtering: In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.) <|cite_end|>.
In the candidate generation stage, the huge number of items causes problems on calculating the partition function of the loss during the optimisation (as in Eq. \ref{eq:u2i-loss}). The sampled softmax (SSM) loss <|cite_start|> (Reference: On Using Very Large Target Vocabulary for Neural Machine Translation: Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method that allows us to use a very large target vocabulary without increasing training complexity, based on importance sampling. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English->German translation and almost as high performance as state-of-the-art English->French translation system.) <|cite_end|> is widely employed to solve the problem <|cite_start|> (Reference: Deep neural Networks for Youtube Recommendations: YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.) <|cite_end|> <|cite_start|> (Reference: Multi-Interest Network with Dynamic Routing for Recommendation at Tmall: Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items. The matching stage retrieves candidate items relevant to user interests, while the ranking stage sorts candidate items by user interests. Thus, the most critical ability is to model and represent user interests for either stage. Most of the existing deep learning-based models represent one user as a single vector which is insufficient to capture the varying nature of user's interests. In this paper, we approach this problem from a different view, to represent one user with multiple vectors encoding the different aspects of the user's interests. We propose the Multi-Interest Network with Dynamic routing (MIND) for dealing with user's diverse interests in the matching stage. Specifically, we design a multi-interest extractor layer based on capsule routing mechanism, which is applicable for clustering historical behaviors and extracting diverse interests. Furthermore, we develop a technique named label-aware attention to help learn a user representation with multiple vectors. 
Through extensive experiments on several public benchmarks and one large-scale industrial dataset from Tmall, we demonstrate that MIND can achieve superior performance than state-of-the-art methods for recommendation. Currently, MIND has been deployed for handling major online traffic at the homepage on Mobile Tmall App.) <|cite_end|>. Recently, the InfoNCE loss is exploited in item recommendation to suppress popular items during candidate generation <|cite_start|> (Reference: Contrastive Learning for Debiased Candidate Generation in Large-Scale Recommender Systems: Deep candidate generation (DCG) that narrows down the collection of relevant items from billions to hundreds via representation learning has become prevalent in industrial recommender systems. Standard approaches approximate maximum likelihood estimation (MLE) through sampling for better scalability and address the problem of DCG in a way similar to language modeling. However, live recommender systems face severe exposure bias and have a vocabulary several orders of magnitude larger than that of natural language, implying that MLE will preserve and even exacerbate the exposure bias in the long run in order to faithfully fit the observed samples. In this paper, we theoretically prove that a popular choice of contrastive loss is equivalent to reducing the exposure bias via inverse propensity weighting, which provides a new perspective for understanding the effectiveness of contrastive learning. Based on the theoretical discovery, we design CLRec, a contrastive learning method to improve DCG in terms of fairness, effectiveness and efficiency in recommender systems with extremely large candidate size. We further improve upon CLRec and propose Multi-CLRec, for accurate multi-intention aware bias reduction. Our methods have been successfully deployed in Taobao, where at least four-month online A/B tests and offline analyses demonstrate its substantial improvements, including a dramatic reduction in the Matthew effect.) <|cite_end|>.
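Schematically, the difficulty stems from the normalization over the full item vocabulary $\mathcal{I}$; writing a generic softmax form (shown only for illustration, not the exact loss referenced above),
\[
p(i|u) \;=\; \frac{\exp\big(s(u,i)\big)}{\sum_{j\in\mathcal{I}} \exp\big(s(u,j)\big)},
\]
the denominator requires scoring every item for every training example, which is what sampled softmax and InfoNCE-style losses avoid by summing over a small set of sampled or in-batch negatives.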
\subsection{User Targeting}
User targeting (UT) mines potential users for given items.
The \emph{item} could be anything that users can interact with, \eg, an insurance product <|cite_start|> (Reference: An intelligent system for customer targeting: a data mining approach: ) <|cite_end|>, a company/business <|cite_start|> (Reference: Democrats, republicans and starbucks afficionados: user classification in twitter: More and more technologies are taking advantage of the explosion of social media (Web search, content recommendation services, marketing, ad targeting, etc.). This paper focuses on the problem of automatically constructing user profiles, which can significantly benefit such technologies. We describe a general and robust machine learning framework for large-scale classification of social media users according to dimensions of interest. We report encouraging experimental results on 3 tasks with different characteristics: political affiliation detection, ethnicity identification and detecting affinity for a particular business.) <|cite_end|> <|cite_start|> (Reference: A Simple Integration of Social Relationship and Text Data for Identifying Potential Customers in Microblogging: ) <|cite_end|> <|cite_start|> (Reference: Ranking of high-value social audiences on Twitter: ) <|cite_end|>, a specific message (e.g., tweets) on social medias <|cite_start|> (Reference: Locating targets through mention in Twitter: ) <|cite_end|> <|cite_start|> (Reference: Mention recommendation in Twitter with cooperative multi-agent reinforcement learning: In Twitter-like social networking services, the "@'' symbol can be used with the tweet to mention users whom the user wants to alert regarding the message. An automatic suggestion to the user of a small list of candidate names can improve communication efficiency. Previous work usually used several most recent tweets or randomly select historical tweets to make an inference about this preferred list of names. However, because there are too many historical tweets by users and a wide variety of content types, the use of several tweets cannot guarantee the desired results. In this work, we propose the use of a novel cooperative multi-agent approach to mention recommendation, which incorporates dozens of more historical tweets than earlier approaches. The proposed method can effectively select a small set of historical tweets and cooperatively extract relevant indicator tweets from both the user and mentioned users. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods.) <|cite_end|> and even another user <|cite_start|> (Reference: People Recommendation on Social Media: ) <|cite_end|>, etc.
UT is usually formulated as a binary classification problem, and solved with models like SVM, LR and neural networks, etc <|cite_start|> (Reference: {Applied logistic regression: As a consultant, I am always on the lookout for new books that help me do my job better. Iwould recommend practitioners of regression, that is, probably most of us, to read and use this book. Anthony Atkinson and Marco Riani develop a novel methodology for examining the effect each observation has on the tted regression model. Robust tting procedures are combined with regression diagnostics, graphics, and a “forward” processing through the observations to provide a new way of identifying in uential and/or outlier observations while simultaneously determining the best tting model. The method is initially introduced for simple linear regression, but individual chapters are devoted to applying the methodology to nonlinear models in general and generalized linear models in particular. The role of data transformations to normality is also explored. Throughout the book, a large number of fully worked examples provide the reader real insight to the power of the methodology. Theory is kept to a minimum and matrix notation is used throughout. Chapter 1 presents some regression examples that illustrate the need for a methodology to identify and account for outliers in simple and multiple regression. Chapter 2 starts with an appropriately short introduction to least squares estimation and associated hypothesis testing. The authors next derive many of the more common in uence diagnostics and introduce a “mean shift outlier” model wherein a dummy variable is used in the linear model to assess the impact of a speci c observation on parameter estimation and model t. This tool for examining the effect of a single deletion is then placed in the context of a “forward search” algorithm. The forward search algorithm is made up of three steps. The rst step addresses the choice of an initial subset of the observations that will be used with a robust estimation procedure (least medians of squares or LMS) to provide an initial model t and a “good” estimate of the residual error. If the model contains p parameters, the initial subset will be the one subset out of the nCp potential subsets that minimizes the sum of squared residuals from the LMS t. If there are too many potential subsets, the choice will be made after examining a large sample of subsets of size p. In the second step, the size of the subset is incrementally increased; at each increase the subset with the smallest sum of squared residuals is kept. Typically this only requires one new observation to enter the tting set, but there may be cases where some observations drop out and are replaced by others not originally in the subset. Note that as the subset size increases, those observations outside the tting subset look less and less like outliers. What is important about this approach is that it starts with a subset that is assumed to be outlier free, or it contains unmasked outliers that will be replaced as the subset size increases. This second step is repeated until all observations are included in the tting subset. The third step of the algorithm is monitoring changes in t statistics, speci cally the residual mean square, parameter estimates, associated t-statistics, and even diagnostics, such as Cook’s distance, as observations are incrementally added to the tting subset. 
Typically the residual mean square will increase as subset size increases, but the increase will be smooth. Outliers, being added at the end of the Step 2 process, will tend to produce dramatic changes in these t statistics, hence making identi cation easy. For smaller samples it is even possible to monitor the value of all residuals from the t of the current model. Large residual values for observations not in the current model are indicative of potential outliers. The remainder of the book is devoted to the application and further development of this basic methodology. In Chapter 3, the algorithm is applied to a number of fairly common multiple regression problems to illustrate the power of the algorithm. Chapter 4 addresses the impact of normality transformations on outlier detection. Chapters 5 and 6 extend the methodology to nonlinear least squares and generalized linear models respectively. The authors, in conjunction with Kjell Konis of StatSci (now Insightful), have provided a web site (http://stat.econ.unipr.it/riani/ar) that contains S-Plus functions enabling the user to implement the methodology presented in the book. The functions do the analysis as claimed and are as easy to use as are most S-Plus functions. This is certainly a tool I plan to use extensively in the future.) <|cite_end|> <|cite_start|> (Reference: LibSVM: A library for support vector machines: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems theoretical convergence multiclass classification probability estimates and parameter selection are discussed in detail.) <|cite_end|> <|cite_start|> (Reference: XGBoost: A Scalable Tree Boosting System: Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.) <|cite_end|>.
In an e-commerce company, the \emph{item} could be a product, a brand, a product category, and a merchant, etc. The number of the \emph{items} ranges from thousands to hundreds of millions. It is impractical to model each item respectively, so we commonly model the items all together via binary classification like <|cite_start|> (Reference: {Neural collaborative filtering vs. matrix factorization revisited: Embedding based models have been the state of the art in collaborative filtering for over a decade. Traditionally, the dot product or higher order equivalents have been used to combine two or more embeddings, e.g., most notably in matrix factorization. In recent years, it was suggested to replace the dot product with a learned similarity e.g. using a multilayer perceptron (MLP). This approach is often referred to as neural collaborative filtering (NCF). In this work, we revisit the experiments of the NCF paper that popularized learned similarities using MLPs. First, we show that with a proper hyperparameter selection, a simple dot product substantially outperforms the proposed learned similarities. Second, while a MLP can in theory approximate any function, we show that it is non-trivial to learn a dot product with an MLP. Finally, we discuss practical issues that arise when applying MLP based similarities and show that MLPs are too costly to use for item recommendation in production environments while dot products allow to apply very efficient retrieval algorithms. We conclude that MLPs should be used with care as embedding combiner and that dot products might be a better default choice.) <|cite_end|>.
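A minimal sketch of this joint binary-classification formulation is given below (our own illustration; the popularity-proportional negative sampling follows the convention described next, and the function names are hypothetical):
\begin{verbatim}
import numpy as np
import torch
import torch.nn.functional as F

def sample_negative_items(item_counts, num_samples):
    # draw negative item indices with probability proportional to their frequency
    probs = item_counts / item_counts.sum()
    return np.random.choice(len(item_counts), size=num_samples, p=probs)

def binary_matching_loss(user_emb, pos_item_emb, neg_item_emb):
    # user_emb, pos_item_emb, neg_item_emb: [B, d] embeddings; the negative item
    # embeddings would be gathered with indices from sample_negative_items
    pos_logits = (user_emb * pos_item_emb).sum(-1)  # observed (u, i) pairs
    neg_logits = (user_emb * neg_item_emb).sum(-1)  # sampled negative pairs
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits),
                        torch.zeros_like(neg_logits)])
    return F.binary_cross_entropy_with_logits(logits, labels)
\end{verbatim}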
In the above applications, researchers implicitly model $s$ with the Bernoulli distribution, and the negative samples are generated with probability $p_n(u,i) \propto \ptrain(i)$. <|paper_end|> | [
"<|reference_start|> Wide & Deep Learning for Recommender Systems: Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow. <|reference_end|>",
"<|reference_start|> On Using Very Large Target Vocabulary for Neural Machine Translation: Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method that allows us to use a very large target vocabulary without increasing training complexity, based on importance sampling. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English->German translation and almost as high performance as state-of-the-art English->French translation system. <|reference_end|>",
"<|reference_start|> Locating targets through mention in Twitter: <|reference_end|>",
"<|reference_start|> Mention recommendation in Twitter with cooperative multi-agent reinforcement learning: In Twitter-like social networking services, the \"@'' symbol can be used with the tweet to mention users whom the user wants to alert regarding the message. An automatic suggestion to the user of a small list of candidate names can improve communication efficiency. Previous work usually used several most recent tweets or randomly select historical tweets to make an inference about this preferred list of names. However, because there are too many historical tweets by users and a wide variety of content types, the use of several tweets cannot guarantee the desired results. In this work, we propose the use of a novel cooperative multi-agent approach to mention recommendation, which incorporates dozens of more historical tweets than earlier approaches. The proposed method can effectively select a small set of historical tweets and cooperatively extract relevant indicator tweets from both the user and mentioned users. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods. <|reference_end|>"
] | [
11,
16,
24,
25
] | {"<|cite_1|>": "ss-692526", "<|multi_cite_2_1|>": "ss-1221553", "<|multi_cite_2_2|>": "arxiv-200299", "<|cite_3|>": "ss-1004353", "<|cite_5|>": "ss-1062039", "<|cite_6|>": "arxiv-132115", "<|multi_cite_7_1|>": "arxiv-132115", "<|multi_cite_7_2|>": "ss-1178725", "<|multi_cite_8_1|>": "ss-1221553", "<|multi_cite_8_2|>": "arxiv-200299", "<|multi_cite_8_3|>": "arxiv-266450", "<|multi_cite_9_1|>": "arxiv-100906", "<|multi_cite_9_2|>": "arxiv-127373", "<|multi_cite_9_3|>": "arxiv-160222", "<|cite_10|>": "arxiv-148561", "<|cite_11|>": "arxiv-132115", "<|cite_12|>": "arxiv-69743", "<|multi_cite_13_1|>": "ss-1221553", "<|multi_cite_13_2|>": "arxiv-200299", "<|cite_14|>": "arxiv-267770", "<|cite_15|>": "ss-1837870", "<|multi_cite_16_1|>": "ss-1952019", "<|multi_cite_16_2|>": "ss-1837871", "<|multi_cite_16_3|>": "ss-1837872", "<|multi_cite_17_1|>": "ss-1837873", "<|multi_cite_17_2|>": "ss-1837874", "<|cite_18|>": "ss-1837875", "<|multi_cite_19_1|>": "ss-730278", "<|multi_cite_19_2|>": "ss-705900", "<|multi_cite_19_3|>": "arxiv-93650", "<|cite_20|>": "ss-1178725"} |
1810.11143 | <|paper_start|> Title: Smell Pittsburgh: Community-Empowered Mobile Smell Reporting System
Abstract: Smell Pittsburgh: Community-Empowered Mobile Smell Reporting System: Urban air pollution has been linked to various human health considerations, including cardiopulmonary diseases. Communities who suffer from poor air quality often rely on experts to identify pollution sources due to the lack of accessible tools. Taking this into account, we developed Smell Pittsburgh, a system that enables community members to report odors and track where these odors are frequently concentrated. All smell report data are publicly accessible online. These reports are also sent to the local health department and visualized on a map along with air quality data from monitoring stations. This visualization provides a comprehensive overview of the local pollution landscape. Additionally, with these reports and air quality data, we developed a model to predict upcoming smell events and send push notifications to inform communities. Our evaluation of this system demonstrates that engaging residents in documenting their experiences with pollution odors can help identify local air pollution patterns, and can empower communities to advocate for better air quality.
Introduction
\begin{figure*}[t]
\centering
\includegraphics[width=2.1\columnwidth]{fig/UI_smell_pittsburgh}
\caption{The user interface of Smell Pittsburgh. The left image shows the submission console for describing smell characteristics, explaining symptoms, and providing notes for the local health department. The right image shows the visualization of smell reports, sensors, and wind directions.}
\label{fig:UI_smell_pittsburgh}
\end{figure*}
Air pollution has been associated with adverse impacts on human health, including respiratory and cardiovascular diseases <|cite_start|> (Reference: Human health effects of air pollution: Over the past three or four decades, there have been important advances in the understanding of the actions, exposure-response characteristics, and mechanisms of action of many common air pollutants. A multidisciplinary approach using epidemiology, animal toxicology, and controlled human exposure studies has contributed to the database. This review will emphasize studies of humans but will also draw on findings from the other disciplines. Air pollutants have been shown to cause responses ranging from reversible changes in respiratory symptoms and lung function, changes in airway reactivity and inflammation, structural remodeling of pulmonary airways, and impairment of pulmonary host defenses, to increased respiratory morbidity and mortality. Quantitative and qualitative understanding of the effects of a small group of air pollutants has advanced considerably, but the understanding is by no means complete, and the breadth of effects of all air pollutants is only partially understood.) <|cite_end|> <|cite_start|> (Reference: Health effects of fine particulate air pollution: lines that connect: Efforts to understand and mitigate the health effects of particulate matter (PM) air pollution have a rich and interesting history. This review focuses on six substantial lines of research that have been pursued since 1997 that have helped elucidate our understanding about the effects of PM on human health. There has been substantial progress in the evaluation of PM health effects at different time scales of exposure and in the exploration of the shape of the concentration-response function. There has also been emerging evidence of PM-related cardiovascular health effects and growing knowledge regarding interconnected general pathophysiological pathways that link PM exposure with cardiopulmonary morbidity and mortality. Despite important gaps in scientific knowledge and continued reasons for some skepticism, a comprehensive evaluation of the research findings provides persuasive evidence that exposure to fine particulate air pollution has adverse effects on cardiopulmonary health. Although much of this research has been motivated by environmental public health policy, these results have important scientific, medical, and public health implications that are broader than debates over legally mandated air quality standards.) <|cite_end|> <|cite_start|> (Reference: An Association between Air Pollution and Mortality in Six U.S. Cities: BACKGROUND
Recent studies have reported associations between particulate air pollution and daily mortality rates. Population-based, cross-sectional studies of metropolitan areas in the United States have also found associations between particulate air pollution and annual mortality rates, but these studies have been criticized, in part because they did not directly control for cigarette smoking and other health risks.
METHODS
In this prospective cohort study, we estimated the effects of air pollution on mortality, while controlling for individual risk factors. Survival analysis, including Cox proportional-hazards regression modeling, was conducted with data from a 14-to-16-year mortality follow-up of 8111 adults in six U.S. cities.
RESULTS
Mortality rates were most strongly associated with cigarette smoking. After adjusting for smoking and other risk factors, we observed statistically significant and robust associations between air pollution and mortality. The adjusted mortality-rate ratio for the most polluted of the cities as compared with the least polluted was 1.26 (95 percent confidence interval, 1.08 to 1.47). Air pollution was positively associated with death from lung cancer and cardiopulmonary disease but not with death from other causes considered together. Mortality was most strongly associated with air pollution with fine particulates, including sulfates.
CONCLUSIONS
Although the effects of other, unmeasured risk factors cannot be excluded with certainty, these results suggest that fine-particulate air pollution, or a more complex pollution mixture associated with fine particulate matter, contributes to excess mortality in certain U.S. cities.) <|cite_end|> <|cite_start|> (Reference: Preventing disease through healthy environments: a global assessment of the burden of disease from environmental risks: ) <|cite_end|>. Addressing air pollution often involves negotiations between corporations and regulators, who hold power to improve air quality. However, the communities, who are directly affected by the pollution, are rarely influential in policy-making. Their voices typically fail to persuade decision-makers because collecting and presenting reliable evidence to support their arguments is resource-intensive. Forming such evidence requires collecting and analyzing multiple sources of data over a large geographic area and an extended period. This task is challenging due to the requirements of financial resources, organizational networks, and access to technology. Due to the power imbalance and resource inequality, affected residents usually rely on experts in governmental agencies, academic institutions, or non-governmental organizations to analyze and track pollution sources.
A straightforward solution is to empower the affected communities directly. In this research, we demonstrate how citizen science can be used by communities to pool resources and efforts to gather evidence for advocacy. Data-driven evidence, especially when integrated with narratives, is essential for communities to make sense of local environmental issues and take action <|cite_start|> (Reference: Making sense of citizen science: Stories as a hermeneutic resource: ) <|cite_end|>. However, citizen-contributed data is often held in low regard because the information can be unreliable or contain errors introduced during data entry. Also, sufficient citizen participation and data transparency are required for the evidence to be influential. For instance, the city involved in this study, Pittsburgh, is one of the ten most polluted cities in the United States <|cite_start|> (Reference: State of the Air report shows mixed results for air pollution: For the second consecutive year, the American Lung Association’s air quality report named Fresno-Madera, California, as the most polluted area for particle pollution.) <|cite_end|>. Currently, Pittsburgh citizens report air quality problems to the local health department via its phone line or website.
Nevertheless, the quality of the gathered data is doubtful. Citizens may not remember the exact time and location at which pollution odors occurred. Asking citizens to submit complaints retrospectively makes it hard to capture accurate details and is prone to errors. Such errors can result in missing or incomplete data that can affect the outcome of statistical analyses used to identify pollution sources <|cite_start|> (Reference: Fundamentals of Spatial Data Quality (Geographical Information Systems series): ) <|cite_end|>. Furthermore, the reporting process is not transparent and does not encourage citizens to contribute data. There is no real-time feedback or way of sharing experiences to forge a sense of community. Without data that adequately represents the community, it is difficult to know whether an air pollution problem is at a neighborhood or city-wide scale. This approach is inadequate for data collection and hinders participation in bringing air quality issues to the attention of regulators and advocating for policy changes.
Because of these challenges, resident-reported smell data did not gain much attention as a critical tool for monitoring air pollution. However, literature has shown that the human olfactory can distinguish more than one trillion odors <|cite_start|> (Reference: Humans can discriminate more than 1 trillion olfactory stimuli: All the Smells of the World How many odorant stimuli can a normal human being discriminate? During psychophysical tests of odor mixture discrimination, Bushdid et al. (p. 1370) were surprised to find that humans can discriminate among more than a trillion different smells. Because the authors reduced the complexity by investigating only mixtures of 10, 20, or 30 components drawn from a collection of 128 odorous molecules, this astonishingly large number is probably the lower limit of the potential number of olfactory stimuli that humans can distinguish. The number of different odor mixtures people can distinguish is several orders of magnitude larger than anticipated. Humans can discriminate several million different colors and almost half a million different tones, but the number of discriminable olfactory stimuli remains unknown. The lay and scientific literature typically claims that humans can discriminate 10,000 odors, but this number has never been empirically validated. We determined the resolution of the human sense of smell by testing the capacity of humans to discriminate odor mixtures with varying numbers of shared components. On the basis of the results of psychophysical testing, we calculated that humans can discriminate at least 1 trillion olfactory stimuli. This is far more than previous estimates of distinguishable olfactory stimuli. It demonstrates that the human olfactory system, with its hundreds of different olfactory receptors, far outperforms the other senses in the number of physically different stimuli it can discriminate.) <|cite_end|> and outperform sensitive measuring equipment in odor detection tasks <|cite_start|> (Reference: The human sense of smell: are we better than we think?: Gordon Shepherd challenges the notion - based on genetic evidence - that olfaction is less well developed in humans as compared to other mammals) <|cite_end|>. Although there have been discussions about the potential of using smell to indicate pollution events and support decision making <|cite_start|> (Reference: Buckets of resistance: Standards and the effectiveness of citizen science: In light of arguments that citizen science has the potential to make environmental knowledge and policy more robust and democratic, this article inquires into the factors that shape the ability of citizen science to actually influence scientists and decision makers. Using the case of community-based air toxics monitoring with ‘‘buckets,’’ it argues that citizen science’s effectiveness is significantly influenced by standards and standardized practices. It demonstrates that, on one hand, standards serve a boundary-bridging function that affords bucket monitoring data a crucial measure of legitimacy among experts. On the other hand, standards simultaneously serve a boundary-policing function, allowing experts to dismiss bucket data as irrelevant to the central project of air quality assessment. The article thus calls attention to standard setting as an important site of intervention for citizen science-based efforts to democratize science and policy.) 
<|cite_end|> <|cite_start|> (Reference: Opportunities for odor: experiences with smell and implications for technology: Technologies for capturing and generating smell are emerging, and our ability to engineer such technologies and use them in HCI is rapidly developing. Our understanding of how these technologies match the experiences with smell that people have or want to have is surprisingly limited. We therefore investigated the experience of smell and the emotions that accompany it. We collected stories from 439 participants who described personally memorable smell experiences in an online questionnaire. Based on the stories we developed 10 categories of smell experience. We explored the implications of the categories for smell-enhanced technology design by (a) probing participants to envision technologies that match their smell story and (b) having HCI researchers brainstorm technologies using the categories as design stimuli. We discuss how our findings can benefit research on personal memories, momentary and first time experiences, and wellbeing.) <|cite_end|>, no prior work has collected long-term smell data at a city-wide scale or studied whether these data are useful for air pollution monitoring and community advocacy.
We propose a system, \highlightB{\highlightI{Smell Pittsburgh}} <|cite_start|> (Reference: Smell {Pittsburgh: Urban air pollution has been linked to various human health concerns, including cardiopulmonary diseases. Communities who suffer from poor air quality often rely on experts to identify pollution sources due to the lack of accessible tools. Taking this into account, we developed Smell Pittsburgh, a system that enables community members to report odors and track where these odors are frequently concentrated. All smell report data are publicly accessible online. These reports are also sent to the local health department and visualized on a map along with air quality data from monitoring stations. This visualization provides a comprehensive overview of the local pollution landscape. Additionally, with these reports and air quality data, we developed a model to predict upcoming smell events and send push notifications to inform communities. We also applied regression analysis to identify statistically significant effects of push notifications on user engagement. Our evaluation of this system demonstrates that engaging residents in documenting their experiences with pollution odors can help identify local air pollution patterns and can empower communities to advocate for better air quality. All citizen-contributed smell data are publicly accessible and can be downloaded from https://smellpgh.org.) <|cite_end|>, for citizens to report pollution odors to the local health department with accurate time and GPS location data via smartphones. The system visualizes odor complaints in real-time, which enables residents to confirm their experiences by viewing if others also share similar experiences. Additionally, we present a dataset of smell reports and air quality measurements from nearby monitoring stations over 21 months. We use the dataset to develop a model that predicts upcoming pollution odors and send push notifications to users. We also apply machine learning to identify relationships between smell reports and air quality measurements. Finally, we describe qualitative and quantitative studies for understanding changes in user engagement and motivation. To the best of our knowledge, Smell Pittsburgh is the first system of its kind that demonstrates the potential of collecting and using smell data to form evidence about air quality issues at a city-wide scale. Although stakeholders typically view odor experiences as subjective and noisy, our work shows that smell data is beneficial in identifying urban air pollution patterns and empowering communities to pursue a sustainable environment.
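To make the prediction pipeline concrete, the sketch below shows one way such a smell-event predictor could be assembled from hourly sensor readings and timestamped smell reports. It is a minimal illustration rather than the authors' implementation: the file names, column names, three-hour horizon, ten-report event threshold, and the choice of a random forest classifier are all assumptions made purely for the example.
\begin{verbatim}
# Minimal sketch (not the paper's model): predict whether a smell event
# occurs in the next few hours from recent air quality readings.
# All file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hourly sensor readings (e.g., PM2.5, H2S, SO2, wind) indexed by timestamp.
sensors = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
sensors = sensors.set_index("timestamp").sort_index()

# Smell reports with timestamps, counted per hour.
reports = pd.read_csv("smell_reports.csv", parse_dates=["timestamp"])
reports_per_hour = reports.set_index("timestamp").resample("1h").size()

# Label an hour 1 if the *next* 3 hours contain at least 10 reports.
future_reports = reports_per_hour.rolling(3).sum().shift(-3)
labels = (future_reports.reindex(sensors.index, fill_value=0) >= 10).astype(int)

# Features: current readings plus a simple 3-hour lag of every sensor column.
features = sensors.copy()
for col in sensors.columns:
    features[f"{col}_lag3h"] = sensors[col].shift(3)
features = features.dropna()
labels = labels.loc[features.index]

# Chronological split so no future data leaks into training.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, shuffle=False)

model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
\end{verbatim}
The horizon, the event threshold, and the feature lags above are arbitrary placeholders chosen only for illustration; any deployed predictor would need these choices validated against the actual report and sensor data.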
Related Work
This research is rooted in citizen science, which empowers amateurs and professionals to form partnerships and produce scientific knowledge <|cite_start|> (Reference: Next steps for citizen science: Strategic investments and coordination are needed for citizen science to reach its full potential. Around the globe, thousands of research projects are engaging millions of individuals—many of whom are not trained as scientists—in collecting, categorizing, transcribing, or analyzing scientific data. These projects, known as citizen science, cover a breadth of topics from microbiomes to native bees to water quality to galaxies. Most projects obtain or manage scientific information at scales or resolutions unattainable by individual researchers or research teams, whether enrolling thousands of individuals collecting data across several continents, enlisting small armies of participants in categorizing vast quantities of online data, or organizing small groups of volunteers to tackle local problems.) <|cite_end|> <|cite_start|> (Reference: Can citizen science enhance public understanding of science?: Over the past 20 years, thousands of citizen science projects engaging millions of participants in collecting and/or processing data have sprung up around the world. Here we review documented outcomes from four categories of citizen science projects which are defined by the nature of the activities in which their participants engage – Data Collection, Data Processing, Curriculum-based, and Community Science. We find strong evidence that scientific outcomes of citizen science are well documented, particularly for Data Collection and Data Processing projects. We find limited but growing evidence that citizen science projects achieve participant gains in knowledge about science knowledge and process, increase public awareness of the diversity of scientific research, and provide deeper meaning to participants’ hobbies. We also find some evidence that citizen science can contribute positively to social well-being by influencing the questions that are being addressed and by giving people a voice in local environmental decision making. While not all citizen science projects are intended to achieve a greater degree of public understanding of science, social change, or improved science -society relationships, those projects that do require effort and resources in four main categories: (1) project design, (2) outcomes measurement, (3) engagement of new audiences, and (4) new directions for research.) <|cite_end|> <|cite_start|> (Reference: Investing in citizen science can improve natural resource management and environmental protection: both to make informed decisions about investing in citizen) <|cite_end|> <|cite_start|> (Reference: Citizen science terminology matters: Exploring key terms: Much can be at stake depending on the choice of words used to describe citizen science, because terminology impacts how knowledge is developed. Citizen science is a quickly evolving field that is mobilizing people’s involvement in information development, social action and justice, and large-scale information gathering. Currently, a wide variety of terms and expressions are being used to refer to the concept of ‘citizen science’ and its practitioners. Here, we explore these terms to help provide guidance for the future growth of this field. 
We do this by reviewing the theoretical, historical, geopolitical, and disciplinary context of citizen science terminology; discussing what citizen science is and reviewing related terms; and providing a collection of potential terms and definitions for ‘citizen science’ and people participating in citizen science projects. This collection of terms was generated primarily from the broad knowledge base and on-the-ground experience of the authors, by recognizing the potential issues associated with various terms. While our examples may not be systematic or exhaustive, they are intended to be suggestive and invitational of future consideration. In our collective experience with citizen science projects, no single term is appropriate for all contexts. In a given citizen science project, we suggest that terms should be chosen carefully and their usage explained; direct communication with participants about how terminology affects them and what they would prefer to be called also should occur. We further recommend that a more systematic study of terminology trends in citizen science be conducted.) <|cite_end|>. Historically, there exist both research and community-oriented strategies. Research-oriented citizen science aims to address large-scale research questions which are infeasible for scientists to tackle alone <|cite_start|> (Reference: Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education. A CAISE Inquiry Group Report.: ) <|cite_end|> <|cite_start|> (Reference: A new dawn for citizen science.: ) <|cite_end|> <|cite_start|> (Reference: Citizen Science: Can Volunteers Do Real Research?: ABSTRACT Collaborations between scientists and volunteers have the potential to broaden the scope of research and enhance the ability to collect scientific data. Interested members of the public may contribute valuable information as they learn about wildlife in their local communities.) <|cite_end|> <|cite_start|> (Reference: The current state of citizen science as a tool for ecological research and public engagement: Approaches to citizen science – an indispensable means of combining ecological research with environmental education and natural history observation – range from community-based monitoring to the use of the internet to “crowd-source” various scientific tasks, from data collection to discovery. With new tools and mechanisms for engaging learners, citizen science pushes the envelope of what ecologists can achieve, both in expanding the potential for spatial ecology research and in supplementing existing, but localized, research programs. The primary impacts of citizen science are seen in biological studies of global climate change, including analyses of phenology, landscape ecology, and macro-ecology, as well as in sub-disciplines focused on species (rare and invasive), disease, populations, communities, and ecosystems. Citizen science and the resulting ecological data can be viewed as a public good that is generated through increasingly collaborative tools and resources, while supporting public participation in science and Earth stewardship.) <|cite_end|> <|cite_start|> (Reference: Citizen science: Public participation in environmental research: Foreword by Richard Louv Notes on Contributors AcknowledgmentsIntroduction: Why Citizen Science? by Janis L. Dickinson and Rick BonneyPart I. The Practice of Citizen Science1. Overview of Citizen Science by Rick Bonney and Janis L. Dickinson2. 
Projects and Possibilities: Lessons from Citizen Science ProjectsFrom Backyard Observations to Continent-Wide Trends: Lessons from the First Twenty-Two Years of Project FeederWatch by David N. BonterMonitoring Monarchs: Citizen Science and a Charismatic Insect by Karen S. OberhauserNeighborhood Nestwatch: Mentoring Citizens in the Urban Matrix by Peter P. Marra and Robert ReitsmaProject BudBurst: Citizen Science for All Seasons by Sandra Henderson, Dennis L. Ward, Kirsten K. Meymaris, Paul Alaback, and Kayri Havens3. Using Bioinformatics in Citizen Science by Steve Kelling4. Growing the Base for Citizen Science: Recruiting and Engaging Participants by Miyoko Chu, Patricia Leonard, and Flisa Stevenson5. What Is Our Impact? Toward a Unified Framework for Evaluating Outcomes of Citizen Science Participation by Tina Phillips, Rick Bonney, and Jennifer L. ShirkPart II. Impacts of Citizen Science on Conservation Research6. The Opportunities and Challenges of Citizen Science as a Tool for Ecological Research by Caren B. Cooper, Wesley M. Hochachka, and Andre A. Dhondt7. Widening the Circle of Investigation: The Interface between Citizen Science and Landscape Ecology by Benjamin Zuckerberg and Kevin McGarigal8. Using Data Mining to Discover Biological Patterns in Citizen Science Observations by Daniel Fink and Wesley M. Hochachka9. Developing a Conservation Research Program with Citizen Science by Ralph S. Hames, James D. Lowe, and Kenneth V. Rosenberg10. Citizens, Science, and Environmental Policy: A British Perspective by Jeremy J. D. GreenwoodPart III. Educational, Social, and Behavioral Aspects of Citizen Science11. Cognitive Considerations in the Development of Citizen Science Projects by Rebecca C. Jordan, Joan G. Ehrenfeld, Steven A. Gray, Wesley R. Brooks, David V. Howe, and Cindy E. Hmelo-Silver12. Who Poses the Question? Using Citizen Science to Help K-12 Teachers Meet the Mandate for Inquiry by Nancy M. Trautmann, Jennifer L. Shirk, Jennifer Fee, and Marianne E. Krasny13. A Gateway to Science for All: Celebrate Urban Birds by Karen Purcell, Cecilia Garibay, and Janis L. Dickinson14. Children and Nature: Following the Trail to Environmental Attitudes and Behavior by Nancy M. Wells and Kristi S. Lekies15. Internet-Based Social Networking and Collective Action Models of Citizen Science: Theory Meets Possibility by Heather A. Triezenberg, Barbara A. Knuth, Y. Connie Yuan, and Janis L. Dickinson16. A Role for Citizen Science in Disaster and Conflict Recovery and Resilience by Keith G. Tidball and Marianne E. KrasnyAfterword by John W. FitzpatrickLiterature Cited Index) <|cite_end|> <|cite_start|> (Reference: The history of public participation in ecological research: Members of the public have for centuries recorded their observations of the natural world, including plant and animal distribution and phenology, water quality, weather data, and astronomical phenomena. Given the relatively recent growth of ecological research as a professional field of study, the historical contributions of amateurs to ecology can be easily overlooked. To better understand long-term changes in ecosystems, researchers are now revisiting many of these historical datasets collected by non-professionals. Over the past 100 years, scientific organizations have increasingly included volunteers in large-scale monitoring projects to broaden the geographical extent and sample size of observations. 
We believe that a renewed interest in citizen science, enriched with the perspective and data provided by the long tradition of public participation in science, will broaden the engagement of the public in ecological research and lead to new scientific insights.) <|cite_end|> <|cite_start|> (Reference: Citizen science: a developing tool for expanding science knowledge and scientific literacy: Citizen science enlists the public in collecting large quantities of data across an array of habitats and locations over long spans of time. Citizen science projects have been remarkably successful in advancing scientific knowledge, and contributions from citizen scientists now provide a vast quantity of data about species occurrence and distribution around the world. Most citizen science projects also strive to help participants learn about the organisms they are observing and to experience the process by which scientific investigations are conducted. Developing and implementing public data-collection projects that yield both scientific and educational outcomes requires significant effort. This article describes the model for building and operating citizen science projects that has evolved at the Cornell Lab of Ornithology over the past two decades. We hope that our model will inform the fields of biodiversity monitoring, biological research, and science education while providing a window into the culture of citizen science.) <|cite_end|> <|cite_start|> (Reference: Citizen science as a tool for conservation in residential ecosystems: Human activities, such as mining, forestry, and agriculture, strongly influence processes in natural systems. Because conservation has focused on managing and protecting wildlands, research has focused on understanding the indirect influence of these human activities on wildlands. Although a conservation focus on wildlands is critically important, the concept of residential area as an ecosystem is relatively new, and little is known about the potential of such areas to contribute to the conservation of biodiversity. As urban sprawl increases, it becomes urgent to construct a method to research and improve the impacts of management strategies for residential landscapes. If the cumulative activities of individual property owners could help conserve biodiversity, then residential matrix management could become a critical piece of the conservation puzzle. "Citizen science" is a method of integrating public outreach and scientific data collection locally, regionally, and across large geographic scales. By involving citizen participants directly in monitoring and active management of residential lands, citizen science can generate powerful matrix management efforts, defying the "tyranny of small decisions" and leading to positive, cumulative, and measurable impacts on biodiversity.) <|cite_end|>. Research questions under this strategy are often driven by professional scientists. Researchers applying this strategy study how scientists can encourage the public to participate in scientific research. In contrast, \highlightB{community-oriented citizen science} aims to democratize science by equipping citizens with tools to directly target community concerns for advocacy <|cite_start|> (Reference: Citizen science: A study of people, expertise and sustainable development: Introduction 1. Science and Citizenship 3. Science, Citizenship and Environmental Threat 4. Witnesses, Participants and Major Accident Hazards 5. Freeing the Voices: A Science of the People 6. 
Building Sustainable Futures: Science Shops and Social Experiments 7. Science, Citizenship and Troubled Modernity) <|cite_end|> <|cite_start|> (Reference: The public value of science: or how to ensure that science really matters: We need to infuse the culture and practice of science with a new set of social possibilities About Demos Who we are Demos is the think tank for everyday democracy. We believe everyone should be able to make personal choices in their daily lives that contribute to the common good. Our aim is to put this democratic idea into practice by working with organisations in ways that make them more effective and legitimate. What we work on We focus on six areas: public services; science and technology; cities and public space; people and communities; arts and culture; and global security. Who we work with Our partners include policy-makers, companies, public service providers and social entrepreneurs. Demos is not linked to any party but we work with politicians across political divides. Our international network – which extends across Eastern Europe, Scandinavia, Australia, Brazil, India and China – provides a global perspective and enables us to work across borders. How we work Demos knows the importance of learning from experience. We test and improve our ideas in practice by working with people who can make change happen. Our collaborative approach means that our partners share in the creation and ownership of new ideas. What we offer We analyse social and political change, which we connect to innovation and learning in organisations.We help our partners show thought leadership and respond to emerging policy challenges. How we communicate As an independent voice, we can create debates that lead to real change. We use the media, public events, workshops and publications to communicate our ideas. All our books can be downloaded free from the Demos website. Open access. Some rights reserved. As the publisher of this work,Demos has an open access policy which enables anyone to access our content electronically without charge. We want to encourage the circulation of our work as widely as possible without affecting the ownership of the copyright,which remains with the copyright holder. Users are welcome to download,save,perform or distribute this work electronically or in any other format, including in foreign language translation without written permission subject to the conditions set out in the Demos open access licence which you can read at the back of this publication. Please read and consider the full licence.The following are some of the conditions imposed by the licence: ● Demos and the author(s) are credited; ● The Demos website address (www.demos.co.uk) is published together …) <|cite_end|> <|cite_start|> (Reference: Citizen Scientists: Reconnecting Science with Civil Society: ) <|cite_end|> <|cite_start|> (Reference: {Constructing the scientific citizen: science and democracy in the biosciences: The relationship between science policy and public opinion has become a lively topic in the UK - especially with regard to the BSE crisis and genetically modified foods. A number of governmental publications have recently advocated greater public dialogue and engagement. In this general context, the paper explores the configuration of scientific citizenship and of the scientific citizen within policy and consultation processes. 
Building upon a detailed examination of one important social experiment - the Public Consultation on Developments in the Biosciences - the social construction of both science and public consultation is considered. With particular attention to the framing of issues for public debate, the constitution of audience and the construction of citizenship, the paper argues the need to move beyond mere sloganizing over science and democracy. The discussion concludes with a presentation of competing technologies of community and an assessment of their significance for the future practice of scientific citizenship.) <|cite_end|> <|cite_start|> (Reference: Citizen Science: Enabling Participatory Urbanism Book Chapter for Urban Informatics: Community Integration and Implementation: In this chapter we present an important new shift in mobile phone usage – from communication tool to “networked mobile personal measurement instrument”. We explore how these new “personal instruments” enable an entirely novel and empowering genre of mobile computing usage called citizen science . We investigate how such citizen science can be used collectively across neighborhoods and communities to enable individuals to become active participants and stakeholders as they publicly collect, share, and remix measurements of their city that matter most to them. We further demonstrate the impact of this new participatory urbanism by detailing its usage within the scope of environmental awareness. Inspired by a series of field studies, user driven environmental measurements, and interviews, we present the design of a working hardware system that integrates air quality sensing into an existing mobile phone and exposes the citizen authored measurements to the community – empowering people to become true change agents.) <|cite_end|> <|cite_start|> (Reference: Governance The Politics of Talk: Coming to Terms with the 'New' Scientific: ) <|cite_end|> <|cite_start|> (Reference: Why should we promote public engagement with science?: This introductory essay looks back on the two decades since the journal Public Understanding of Science was launched. Drawing on the invited commentaries in this special issue, we can see narratives of continuity and change around the practice and politics of public engagement with science. Public engagement would seem to be a necessary but insufficient part of opening up science and its governance. Those of us who have been involved in advocating, conducting and evaluating public engagement practice could be accused of over-promising. If we, as social scientists, are going to continue a normative commitment to the idea of public engagement, we should therefore develop new lines of argument and analysis. Our support for the idea of public engagement needs qualifying, as part of a broader, more ambitious interest in the idea of publicly engaged science.) <|cite_end|> <|cite_start|> (Reference: The Promise of Community Citizen Science: ) <|cite_end|> <|cite_start|> (Reference: Designing Interactive Systems for Community Citizen Science: Citizen science forges partnerships between experts and citizens through collaboration and has become a trend in public participation in scientific research over the past decade. Besides this trend, public participation can also contribute to participatory democracy, which empowers citizens to advocate for their local problems. This strategy supports citizens to form a community, increase environmental monitoring, gather evidence, and tell convincing stories. 
Researchers believe that this “community citizen science” strategy can contribute to the well-being of communities by giving them the power to influence the general public and decision makers. Community citizen science requires collecting, curating, visualizing, analyzing, and interpreting multiple types of data over a large spacetime scale. This is highly dependent on community engagement (i.e., the involvement of citizens in local neighborhoods). Such large-scale tasks require the assistance of innovative computational tools to give technology affordance to communities. However, existing tools often focus on only one type of data, and thus researchers need to develop tools from scratch. Moreover, there is a lack of design patterns for researchers to reference when developing such tools. Furthermore, existing tools are typically treated as products rather than ongoing infrastructures that sustain community engagement. This research studies the methodology of developing computational tools by using visualization, crowdsourcing, and artificial intelligence techniques to support the entire community engagement lifecycle, from initiation, maintenance, to evaluation. This research will make methodological and empirical contributions to community citizen science and sustainable human-computer interaction. Methodological contributions include detailed case studies with applied techniques from information technology systems that are deployed in real-world contexts. Empirical contributions include generalizable empirical insights for developing interactive systems that integrate multiple types of scientific data. In this dissertation, I first define “community citizen science” and explain corresponding design challenges. Then, I review existing computational tools and techniques that are related to this research. Next, I present four interactive systems centered around the research scope: (1) a timelapse editor that supports building evidence-based narratives, (2) an air quality monitoring system that integrates heterogeneous data and computer vision to support the formation of scientific knowledge, (3) a visualization tool that reveals the impact of oil and gas development, and (4) a mobile crowdsourced application for reporting and visualizing pollution odors. Finally, I synthesize findings from all four works into generalizable design implications for future researchers and developers.) <|cite_end|>. Research questions under this strategy are often driven by community members, exploring how scientists can engage in social and ethical issues that are raised by citizens or communities. Our research focuses on the \highlightB{community-oriented} approach. This approach is highly related to sustainable Human-Computer Interaction <|cite_start|> (Reference: Nourishing the Ground for Sustainable HCI: Considerations from Ecologically Engaged Art: Sustainable HCI is now a recognized area of human-computer interaction drawing from a variety of disciplinary approaches, including the arts. How might HCI researchers working on sustainability productively understand the discourses and practices of ecologically engaged art as a means of enriching their own activities? We argue that an understanding of both the history of ecologically engaged art, and the art-historical and critical discourses surrounding it, provide a fruitful entry-point into a more critically aware sustainable HCI. 
We illustrate this through a consideration of frameworks from the arts, looking specifically at how these frameworks act more as generative devices than prescriptive recipes. Taking artistic influences seriously will require a concomitant rethinking of sustainable HCI standpoints - a potentially useful exercise for HCI research in general.) <|cite_end|> <|cite_start|> (Reference: Mapping the Landscape of Sustainable HCI: With the recent growth in sustainable HCI, now is a good time to map out the approaches being taken and the intellectual commitments that underlie the area, to allow for community discussion about where the field should go. Here, we provide an empirical analysis of how sustainable HCI is defining itself as a research field. Based on a corpus of published works, we identify (1) established genres in the area, (2) key unrecognized intellectual differences, and (3) emerging issues, including urgent avenues for further exploration, opportunities for interdisciplinary engagement, and key topics for debate.) <|cite_end|> <|cite_start|> (Reference: Sustainably Unpersuaded: How Persuasion Narrows Our Vision of Sustainability: In this paper we provide a critical analysis of persuasive sustainability research from 2009-2011. Drawing on critical sociological theory of modernism, we argue that persuasion is based on a limited framing of sustainability, human behavior, and their interrelationship. This makes supporting sustainability easier, but leads to characteristic patterns of breakdown. We then detail problems that emerge from this narrowing of vision, such as how the framing of sustainability as the optimization of a simple metrics places technologies incorrectly as objective arbiters over complex issues of sustainability. We conclude by suggesting alternative approaches to move beyond these problems.) <|cite_end|> <|cite_start|> (Reference: Sustainable Interaction Design: Invention \& Disposal, Renewal \& Reuse: This paper presents the perspective that sustainability can and should be a central focus of interaction design-a perspective that is termed Sustainable Interaction Design (SID). As a starting point for a perspective of sustainability, design is defined as an act of choosing among or informing choices of future ways of being. This perspective of sustainability is presented in terms of design values, methods, and reasoning. The paper proposes (i) a rubric for understanding the material effects of particular interaction design cases in terms of forms of use, reuse, and disposal, and (ii) several principles to guide SID. The paper illustrates--with particular examples of design critique for interactive products and appeals to secondary research--how two of these principles may be applied to move the effects of designs from less preferred forms of use to more preferred ones. Finally, a vision for incorporating sustainability into the research and practice of interaction design is described.) <|cite_end|> <|cite_start|> (Reference: Environmental Sustainability and Interaction: By its nature, the discipline of human computer interaction must take into consideration the issues that are most pertinent to humans. We believe that the CHI community faces an unanswered challenge in the creation of interactive systems: sustainability. For example, climate scientists argue that the most serious consequences of climate change can be averted, but only if fundamental changes are made. 
The goal of this SIG is to raise awareness of these issues in the CHI community and to start a conversation about the possibilities and responsibilities we have to address issues of sustainability.) <|cite_end|> <|cite_start|> (Reference: HCI and Environmental Sustainability: The Politics of Design and the Design of Politics: Many HCI researchers have recently begun to examine the opportunities to use ICTs to promote environmental sustainability and ecological consciousness on the part of technology users. This paper examines the way that traditional HCI discourse obscures political and cultural contexts of environmental practice that must be part of an effective solution. Research on ecological politics and the political economy of environmentalism highlight some missing elements in contemporary HCI analysis, and suggest some new directions for the relationship between sustainability and HCI. In particular, I propose that questions of scale -- the scales of action and the scales of effects -- might provide a useful new entry point for design practice.) <|cite_end|>, which studies how information technology interventions can increase awareness of sustainability, change user behaviors, and influence the attitudes of affected communities. We seek to generate scientific knowledge from community data to support citizen-driven exploration, understanding, and dissemination of local air quality concerns.
\subsection{Community Data in Citizen Science}
Modern technology allows communities to collect data that can contextualize and express their concerns. There are typically two types of community data, which are generated from either sensors or proactive human reports. Each type of data provides a small fragment of evidence. When it comes to resolving and revealing community concerns, human-reported data can show how experiences of residents are affected by local issues, but it is typically noisy, ambiguous, and hard to quantify at a consistent scale. Sensing data can complement human-reported data by providing temporally dense and reliable measurements of environmental phenomena but fails to explain how these phenomena affect communities. Without integrating both types of data, it is difficult to understand the context of local concerns and produce convincing evidence.
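As a purely illustrative example of such integration, the snippet below pairs each human report with the nearest-in-time sensor reading so that subjective observations can be inspected alongside quantitative measurements. The file names and columns (smell_rating, pm25) are hypothetical assumptions, not drawn from any system described here.
\begin{verbatim}
# Illustrative only: align human reports with the closest sensor reading
# in time so each subjective observation gets quantitative context.
import pandas as pd

reports = pd.read_csv("human_reports.csv", parse_dates=["timestamp"])
sensors = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted on the join key.
reports = reports.sort_values("timestamp")
sensors = sensors.sort_values("timestamp")

# Attach the nearest sensor reading within 30 minutes of each report.
combined = pd.merge_asof(
    reports, sensors, on="timestamp",
    direction="nearest", tolerance=pd.Timedelta("30min"))

# Example: average PM2.5 around reports that describe a strong odor.
strong = combined[combined["smell_rating"] >= 4]
print(strong["pm25"].mean())
\end{verbatim}
The tolerance window is a design choice: it prevents a report from being matched to a reading taken hours away, which would misrepresent conditions at the time of the complaint.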
\subsubsection{Human-Reported Data}
Human-reported data includes observations contributed by users. Modern computational tools can collect volunteered geographic information <|cite_start|> (Reference: Citizen Science and Volunteered Geographic Information: Overview and Typology of Participation: ) <|cite_end|> and aggregate it to produce scientific knowledge. However, most prior work focused on collecting general information of interest rather than data tied to a specific human sense, such as odor. \highlightI{Ushahidi} gathers crisis information via text messages or its website to provide timely, transparent information to a broader audience. \highlightI{Creek Watch} is a monitoring system that enables citizens to report water flow and trash data in creeks <|cite_start|> (Reference: Creek Watch: Pairing Usefulness and Usability for Successful Citizen Science: Citizen science projects can collect a wealth of scientific data, but that data is only helpful if it is actually used. While previous citizen science research has mostly focused on designing effective capture interfaces and incentive mechanisms, in this paper we explore the application of HCI methods to ensure that the data itself is useful. To provide a focus for this exploration we designed and implemented Creek Watch, an iPhone application and website that allow volunteers to report information about waterways in order to aid water management programs. Working with state and local officials and private groups involved in water monitoring, we conducted a series of contextual inquiries to uncover what data they wanted, what data they could immediately use, and how to most effectively deliver that data to them. We iteratively developed the Creek Watch application and website based on our findings and conducted evaluations of it with both contributors and consumers of water data, including scientists at the city water resources department. Our study reveals that the data collected is indeed useful for their existing practices and is already in use in water and trash management programs. Our results suggest the application of HCI methods to design the data for the end users is just as important as their use in designing the user interface.) <|cite_end|>. \highlightI{Sensr} is a tool for creating environmental data collection and management applications on mobile devices without programming skills <|cite_start|> (Reference: Sensr: evaluating a flexible framework for authoring mobile data-collection tools for citizen science: Across HCI and social computing platforms, mobile applications that support citizen science, empowering non-experts to explore, collect, and share data have emerged. While many of these efforts have been successful, it remains difficult to create citizen science applications without extensive programming expertise. To address this concern, we present Sensr, an authoring environment that enables people without programming skills to build mobile data collection and management tools for citizen science. We demonstrate how Sensr allows people without technical skills to create mobile applications. Findings from our case study demonstrate that our system successfully overcomes technical constraints and provides a simple way to create mobile data collection tools.)
<|cite_end|> <|cite_start|> (Reference: Exploring Barriers to the Adoption of Mobile Technologies for Volunteer Data Collection Campaigns: Volunteer campaigns for data collection make it possible for non-profit organizations to extend their ability to monitor and respond to critical environmental and societal issues. Yet mobile data collection technologies that have the potential to lower the costs and increase the accuracy of volunteer-collected data are not commonly used in these campaigns. In this paper we conduct a series of studies that reveal the complex issues affecting technology adoption in this domain. First, we surveyed and interviewed existing volunteering campaigns to map out current technology usage within volunteer campaigns. Next, we provided two organizations with a customizable tool for data collection (Sensr) and studied its use and non-use across six real volunteer-driven campaigns over six months. Our study explored success and failure across the first few phases of the campaign lifecycle (campaign creation, initial deployment, and adoption). Our results highlight the impact of resource constraints, cognitive factors, the depth of volunteer engagement, and stakeholders' perspective on technology as important factors contributing to the adoption and usage of mobile data collection technologies. We use these findings to argue for specific design features to accelerate the adoption and use of such tools in volunteer data collection campaigns.) <|cite_end|>. \highlightI{Encyclopedia of Life} is a platform for curating species information contributed by professionals and non-expert volunteers <|cite_start|> (Reference: Supporting content curation communities: The case of the Encyclopedia of Life: This article explores the opportunities and challenges of creating and sustaining large-scale “content curation communities” through an in-depth case study of the Encyclopedia of Life (EOL). Content curation communities are large-scale crowdsourcing endeavors that aim to curate existing content into a single repository, making these communities different from content creation communities such as Wikipedia. In this article, we define content curation communities and provide examples of this increasingly important genre. We then follow by presenting EOL, a compelling example of a content curation community, and describe a case study of EOL based on analysis of interviews, online discussions, and survey data. Our findings are characterized into two broad categories: information integration and social integration. Information integration challenges at EOL include the need to (a) accommodate and validate multiple sources and (b) integrate traditional peer reviewed sources with user-generated, nonpeer-reviewed content. Social integration challenges at EOL include the need to (a) establish the credibility of open-access resources within the scientific community and (b) facilitate collaboration between experts and novices. After identifying the challenges, we discuss the potential strategies EOL and other content curation communities can use to address them, and provide technical, content, and social design recommendations for overcoming them. © 2012 Wiley Periodicals, Inc.) <|cite_end|>. 
\highlightI{eBird} is a crowdsourcing platform that engages birdwatchers, scientists, and policy-makers in collecting and analyzing bird data collaboratively <|cite_start|> (Reference: The eBird enterprise: An integrated approach to development and application of citizen science: ) <|cite_end|> <|cite_start|> (Reference: eBird: A citizen-based bird observation network in the biological sciences: ) <|cite_end|>. \highlightI{Tiramisu} was a transit information system for collecting GPS location data and problem reports from bus commuters. One of the few examples focusing on information of a specific sensory modality is \highlightI{NoiseTube}, a mobile application that empowered citizens to report \highlightI{noise} via their mobile phones and mapped urban noise pollution on a geographical heatmap <|cite_start|> (Reference: NoiseTube: Measuring and mapping noise pollution with mobile phones: ) <|cite_end|> <|cite_start|> (Reference: Participatory noise mapping works! An evaluation of participatory sensing as an alternative to standard techniques for environmental monitoring: ) <|cite_end|>. The tool could be used not only to understand the context of urban noise pollution but also to measure short-term or long-term personal exposure.
\subsubsection{Sensing Data}
Sensing data involves environmental measurements quantified with sensing devices or systems, which enable citizens to monitor their surroundings with minimal to no assistance from experts. However, while many prior works used sensors to monitor air pollution, none of them complemented the sensing data with human-reported data. \highlightI{MyPart} is a low-cost and calibrated wearable sensor for measuring and visualizing airborne particles <|cite_start|> (Reference: MyPart: Personal, portable, accurate, airborne particle counting: In 2012, air pollution in both cities and rural areas was estimated to have caused 3.7 million premature deaths, 88% of those in at risk communities. The primary pollutant was small airborne particulate matter of 10 microns or less in diameter which led to the development of cardiovascular and respiratory diseases. In response, we developed MyPart, the first personal, portable, and accurate particle sensor under $50 capable of distinguishing and counting differently sized particles. We demonstrate how MyPart offers substantial enhancements over most existing air particle sensors by simultaneously improving accessibility, flexibility, portability, and accuracy. We describe the evolution and implementation of the sensor design, demonstrate its performance across twenty everyday urban environments versus a calibrated instrument, and conduct a preliminary user study to report on the overall user experience of MyPart. We also present a novel smart-phone visualization interface and a series of simple form factor adaptations of our design.) <|cite_end|>. \highlightI{Speck} is an indoor air quality sensor for measuring and visualizing fine particulate matter <|cite_start|> (Reference: A low-cost particle counter and signal processing method for indoor air pollution: Indoor air quality is closely linked with respiratory and cardiovascular health, prompting a need for affordable home air quality monitors. The newly-developed Speck is a very-low-cost indoor monitor for measuring fine particulate matter using optical sensors and a unique data processing algorithm. In this paper, we examine the performance of the Speck alongside two professional handheld particle counters (one HHPC-6 and one HHPC-6+) in household environments during cooking events and incense burning events. We demonstrater 2 correlation values during the cooking event of greater than 0.98 between each pair of Specks and greater than 0.92 between each Speckand 2 m particlecountsfr om the HHPC6/6+ monitors. The error between the Specks and the HHPC-6+ 2m channel is less than the error between the HHPC-6 and HHPC-6+ 2m channels. The incense test yielded weaker correlation values, possibly due to uneven distribution of the smoke across the test setup. The distribution of particle sizes appears to be approximately the same as that generated from cooking. We conclude from these experiments that the Speck exhibits a strong correlation with professional particle counters, and that the error between the Speck and one professional unit is comparable to or less than the error between two very similar professional units.) 
<|cite_end|> <|cite_start|> (Reference: Calibration and Characterization of Low-Cost Fine Particulate Monitors and their Effect on Individual Empowerment: Air quality has long been a major health concern for citizens around the world, and increased levels of exposure to fine particulate matter (PM2.5) has been definitively linked to serious health effects such as cardiovascular disease, respiratory illness, and increased mortality. PM2.5 is one of six attainment criteria pollutants used by the EPA, and is similarly regulated by many other governments worldwide. Unfortunately, the high cost and complexity of most current PM2.5 monitors results in a lack of detailed spatial and temporal resolution, which means that concerned individuals have little insight into their personal exposure levels. This is especially true regarding hyper-local variations and short-term pollution events associated with industrial activity, heavy fossil fuel use, or indoor activity such as cooking. Advances in sensor miniaturization, decreased fabrication costs, and rapidly expanding data connectivity have encouraged the development of small, inexpensive devices capable of estimating PM2.5 concentrations. This new class of sensors opens up new possibilities for personal exposure monitoring. It also creates new challenges related to calibrating and characterizing inexpensively manufactured sensors to provide the level of precision and accuracy needed to yield actionable information without significantly increasing device cost. This thesis addresses the following two primary questions: 1. Can an inexpensive air quality monitor based on mass-manufactured dust sensors be calibrated efficiently in order to achieve inter-device agreement in addition to agreement with professional and federally-endorsed particle monitors? 2. Can an inexpensive air quality monitor increase the confidence and capacity of individuals to understand and control their indoor air quality? In the following thesis, we describe the development of the Speck fine particulate monitor. The Speck processes data from a low-cost dust sensor using a Kalman filter with a piecewise sensing model. We have optimized the parameters for the algorithm through short-term co-location tests with professional HHPC-6 particle counters, and verified typical correlations between the Speck and HHPC-6 units of r > 0.90. To account for variations in sensitivity, we have developed a calibration procedure whereby fine particles are aerosolized within an open room or closed calibration chamber. This allows us to produce Specks for commercial distribution as well as the experiments presented herein. Drawing from previous pilot studies, we have distributed low-cost monitors through local library systems and community groups. Pre-deployment and post-deployment surveys characterize user perception of personal exposure and the effect of a low-cost fine particulate monitor on empowerment.) <|cite_end|>. Kim {\em et al.} implemented an indoor air quality monitoring system to gather air quality data from commercial sensors <|cite_start|> (Reference: inAir: a longitudinal study of indoor air quality measurements and visualizations: Indoor air quality (IAQ) is important for health as people spend the majority of time indoors, and it is particularly interesting over outdoor air because it strongly ties to indoor activities. Some activities easily exacerbate IAQ, resulting in serious pollution. 
However, people may not notice such changes because many pollutants are colorless and odorless, while many activities are inconspicuous and routine. We implemented inAir, a system that measures and visualizes IAQ that households appropriate and integrate into everyday life. The research goals of this work include understanding the IAQ dynamics with respect to habitual behaviors and analyzing behavioral and quantitative changes towards improving IAQ by the use of inAir. From our longitudinal study for four months, we found that inAir successfully elicited the reflection upon, and the modification of habitual behaviors for healthy domestic environments, which resulted in the significant improvement of IAQ.) <|cite_end|>. Kuznetsov {\em et al.} developed multiple air pollution monitoring systems that involved low-cost air quality sensors and a map-based visualization <|cite_start|> (Reference: Ceci n'est pas une pipe bombe: authoring urban landscapes with air quality sensors: Our work explores the convergence between participatory sensing, political activism and public expressions. Unlike prior research, which focuses on personal sensing, we present low-cost, networked air quality sensors, designed to be repositioned across public landscapes by communities of citizen stakeholders. Our GPS-enabled sensors report dust, exhaust, or VOC's (volatile organic compounds), along with temperature, humidity and light levels to a website that visualizes this data in real time. The sensors can be attached to a variety of surfaces serving as research probes to demarcate ('tag') public spaces with environmental concerns. We deploy our fully functional system with four urban communities - parents, bicyclists, homeless and activists, positioning our system as a tool for studying and supporting community togetherness and public activism. Our findings highlight community sharing of the physical sensors and dialogues surrounding the collected data.) <|cite_end|> <|cite_start|> (Reference: A Low-tech Sensing System for Particulate Pollution: We present an ultra low-cost sensing system, which enables participants to see and reflect on the particulates in their air. Drawing on prior work in paper computing, we introduce small sensors for particulate pollution that can be easily assembled from common paper materials for less than $1 USD, and mailed by regular postal service to residents of entire neighborhoods, cities, or geographic regions. Recipients collect particulate samples using these sensors and mail them back to a central location, where the particles are viewed and analyzed via a microscope. The data, which includes rich images of actual air pollution particles, can then be broadcast to larger audiences. This paper details the design of our system and its deployment with a local air quality activist community. We conclude by highlighting the tradeoffs between high-tech and low-tech sensing, and suggest opportunities for tangible interaction to support rich, new ways of seeing our environment.) <|cite_end|>. Insights from these works showed that sensing data, especially when accompanied by visualizations, could provide context and evidence that might raise awareness and engage local communities in political activism. However, none of these works asked users to report odors, and thus they cannot directly capture how air pollution affects community members' quality of life.
\subsection{Machine Learning for Citizen Science}
Citizen science data are typically high-dimensional, noisy, potentially correlated, and spatially or temporally sparse. The collected data may also suffer from many types of bias and error that sometimes can even be unavoidable <|cite_start|> (Reference: Statistical solutions for error and bias in global citizen science datasets: ) <|cite_end|>. Making sense of such noisy data has been a significant concern in citizen science <|cite_start|> (Reference: Crowdsourcing Undone Science: Could crowdsourcing be a way to get undone science done? Could grassroots groups enlist volunteers to help make sense of large amounts of otherwise unanalyzed data—an approach that has been gaining popularity among natural scientists? This paper assesses the viability of this technique for creating new knowledge about the local effects of petrochemicals, by examining three recent experiments in crowdsourcing led by non-profits and grassroots groups. These case studies suggest that undertaking a crowdsourcing project requires significant resources, including technological infrastructures that smaller or more informal groups may find it difficult to provide. They also indicate that crowdsourcing will be most successful when the questions of grassroots groups line up fairly well with existing scientific frameworks. The paper concludes that further experimentation in crowdsourcing is warranted, at least in cases where adequate resources and interpretive frameworks are available, and that further investment in technological infrastructures for data analysis is needed.) <|cite_end|> <|cite_start|> (Reference: The future of citizen science: emerging technologies and shifting paradigms: Citizen science creates a nexus between science and education that, when coupled with emerging technologies, expands the frontiers of ecological research and public engagement. Using representative technologies and other examples, we examine the future of citizen science in terms of its research processes, program and participant cultures, and scientific communities. Future citizen-science projects will likely be influenced by sociocultural issues related to new technologies and will continue to face practical programmatic challenges. We foresee networked, open science and the use of online computer/video gaming as important tools to engage non-traditional audiences, and offer recommendations to help prepare project managers for impending challenges. A more formalized citizen-science enterprise, complete with networked organizations, associations, journals, and cyberinfrastructure, will advance scientific research, including ecology, and further public education.) <|cite_end|>, especially for untrained contributors <|cite_start|> (Reference: Citizen Science: Can Volunteers Do Real Research?: ABSTRACT Collaborations between scientists and volunteers have the potential to broaden the scope of research and enhance the ability to collect scientific data. Interested members of the public may contribute valuable information as they learn about wildlife in their local communities.) <|cite_end|> <|cite_start|> (Reference: Buckets of resistance: Standards and the effectiveness of citizen science: In light of arguments that citizen science has the potential to make environmental knowledge and policy more robust and democratic, this article inquires into the factors that shape the ability of citizen science to actually influence scientists and decision makers. 
Using the case of community-based air toxics monitoring with ‘‘buckets,’’ it argues that citizen science’s effectiveness is significantly influenced by standards and standardized practices. It demonstrates that, on one hand, standards serve a boundary-bridging function that affords bucket monitoring data a crucial measure of legitimacy among experts. On the other hand, standards simultaneously serve a boundary-policing function, allowing experts to dismiss bucket data as irrelevant to the central project of air quality assessment. The article thus calls attention to standard setting as an important site of intervention for citizen science-based efforts to democratize science and policy.) <|cite_end|> <|cite_start|> (Reference: Next steps for citizen science: Strategic investments and coordination are needed for citizen science to reach its full potential. Around the globe, thousands of research projects are engaging millions of individuals—many of whom are not trained as scientists—in collecting, categorizing, transcribing, or analyzing scientific data. These projects, known as citizen science, cover a breadth of topics from microbiomes to native bees to water quality to galaxies. Most projects obtain or manage scientific information at scales or resolutions unattainable by individual researchers or research teams, whether enrolling thousands of individuals collecting data across several continents, enlisting small armies of participants in categorizing vast quantities of online data, or organizing small groups of volunteers to tackle local problems.) <|cite_end|> <|cite_start|> (Reference: Citizen Science for public health: Abstract Community engagement in public health policy is easier said than done. One reason is that public health policy is produced in a complex process resulting in policies that may appear not to link up to citizen perspectives. We therefore address the central question as to whether citizen engagement in knowledge production could enable inclusive health policy making. Building on non-health work fields, we describe different types of citizen engagement in scientific research, or ‘Citizen Science’. We describe the challenges that Citizen Science poses for public health, and how these could be addressed. Despite these challenges, we expect that Citizen Science or similar approaches such as participatory action research and ‘popular epidemiology’ may yield better knowledge, empowered communities, and improved community health. We provide a draft framework to enable evaluation of Citizen Science in practice, consisting of a descriptive typology of different kinds of Citizen Science and a causal framework that shows how Citizen Science in public health might benefit both the knowledge produced as well as the ‘Citizen Scientists’ as active participants.) <|cite_end|>. To assist community members in identifying evidence from large datasets efficiently, prior projects used machine learning algorithms to predict future events or interpret collected data <|cite_start|> (Reference: Pattern Recognition and Machine learning: Artificial intelligence, robotics, and machine learning are not futuristic dreams anymore. The early consequences of these technologies are upon us already. Industrial robots, self-driving cars, an...) <|cite_end|> <|cite_start|> (Reference: Machine learning: Introduction and overview of machine learning and its applications. Unsupervised and supervised learning. Discriminative and generative models. Prediction. Generalization. Classification. 
Nearest neighbors. Naïve Bayes. Discriminant analysis. Cross-validation. Model selection. Overfitting. Bootstrap. Regression. Regularization. Ridge regression. Lasso. Variable Selection. Binary and multi-class regression. Dimension reduction. PCA. ICA. Kernel smoothers. Support Vector Machines. Decision trees. Gaussian processes. Mixture models.) <|cite_end|> <|cite_start|> (Reference: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition by Trevor Hastie, Robert Tibshirani, Jerome Friedman: ) <|cite_end|> <|cite_start|> (Reference: An Introduction to Statistical Learning: ) <|cite_end|> <|cite_start|> (Reference: Machine learning: trends, perspectives, and prospects: Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.) <|cite_end|> <|cite_start|> (Reference: Statistical solutions for error and bias in global citizen science datasets: ) <|cite_end|> <|cite_start|> (Reference: A systematic review of data mining and machine learning for air pollution epidemiology: ) <|cite_end|>.
\subsubsection{Prediction}
Prediction techniques aim to forecast the future accurately based on previous observations. Zheng {\em et al.} developed a framework to predict air quality readings of a monitoring station over the next 48 hours based on meteorological data, weather forecasts, and sensor readings from other nearby monitoring stations <|cite_start|> (Reference: Forecasting fine-grained air quality based on big data: In this paper, we forecast the reading of an air quality monitoring station over the next 48 hours, using a data-driven method that considers current meteorological data, weather forecasts, and air quality data of the station and that of other stations within a few hundred kilometers. Our predictive model is comprised of four major components: 1) a linear regression-based temporal predictor to model the local factors of air quality, 2) a neural network-based spatial predictor to model global factors, 3) a dynamic aggregator combining the predictions of the spatial and temporal predictors according to meteorological data, and 4) an inflection predictor to capture sudden changes in air quality. We evaluate our model with data from 43 cities in China, surpassing the results of multiple baseline methods. We have deployed a system with the Chinese Ministry of Environmental Protection, providing 48-hour fine-grained air quality forecasts for four major Chinese cities every hour. The forecast function is also enabled on Microsoft Bing Map and MS cloud platform Azure. Our technology is general and can be applied globally for other cities.) <|cite_end|>. Azid {\em et al.} used principal component analysis and an artificial neural network to identify pollution sources and predict air pollution <|cite_start|> (Reference: Prediction of the Level of Air Pollution Using Principal Component Analysis and Artificial Neural Network Techniques: a Case Study in Malaysia: ) <|cite_end|>. Donnelly {\em et al.} combined kernel regression and multiple linear regression to forecast the concentrations of nitrogen dioxide over the next 24 and 48 hours <|cite_start|> (Reference: Real time air quality forecasting using integrated parametric and non-parametric regression techniques: ) <|cite_end|>. Hsieh {\em et al.} utilized a graphical model to predict the air quality of a given location grid based on data from sparse monitoring stations <|cite_start|> (Reference: Inferring Air Quality for Station Location Recommendation Based on Urban Big Data: This paper tries to answer two questions. First, how to infer real-time air quality of any arbitrary location given environmental data and historical air quality data from very sparse monitoring locations. Second, if one needs to establish few new monitoring stations to improve the inference quality, how to determine the best locations for such purpose? The problems are challenging since for most of the locations (>99%) in a city we do not have any air quality data to train a model from. We design a semi-supervised inference model utilizing existing monitoring data together with heterogeneous city dynamics, including meteorology, human mobility, structure of road networks, and point of interests (POIs). We also propose an entropy-minimization model to suggest the best locations to establish new monitoring stations. We evaluate the proposed approach using Beijing air quality data, resulting in clear advantages over a series of state-of-the-art and commonly used methods.) <|cite_end|>. 
These studies applied prediction techniques to help citizens plan daily activities and to inform regulators in controlling air pollution sources. Most of these studies focused on forecasting or interpolating sensing data. To the best of our knowledge, none of them considered human-reported data in their predictive models.
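For illustration only, a minimal forecasting baseline in the spirit of these systems (not a reproduction of any of them) could combine a station's own recent history with the current readings of neighboring stations; the array, file name, and one-hour horizon below are hypothetical:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: pm25[s, t] = hourly PM2.5 reading of station s at hour t.
pm25 = np.load("pm25_by_station_hour.npy")  # assumed shape (n_stations, n_hours)

def build_features(readings, target=0, lags=3):
    """Forecast the target station one hour ahead from its own recent history
    (temporal cue) and the other stations' current readings (spatial cue)."""
    X, y = [], []
    for t in range(lags, readings.shape[1] - 1):
        temporal = readings[target, t - lags:t + 1]
        spatial = np.delete(readings[:, t], target)
        X.append(np.concatenate([temporal, spatial]))
        y.append(readings[target, t + 1])
    return np.asarray(X), np.asarray(y)

X, y = build_features(pm25)
model = LinearRegression().fit(X[:-24], y[:-24])  # hold out the last day
print("held-out R^2:", model.score(X[-24:], y[-24:]))
\end{verbatim}
Replacing the linear model with a neural network or kernel regressor recovers the flavor of the more sophisticated predictors surveyed above.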
\subsubsection{Interpretation}
Interpretation techniques aim to extract knowledge from the collected data. This knowledge can help to discover potential interrelationships between predictors and responses, which is known to be essential in analyzing the impacts of environmental issues in the long-term <|cite_start|> (Reference: Popular epidemiology and toxic waste contamination: lay and professional ways of knowing: Building on a detailed study of the Woburn, Massachusetts, childhood leukemia cluster, this paper examines lay and professional ways of knowing about environmental health risks. Of particular interest are differences between lay and professional groups' definitions of data quality, methods of analysis, traditionally accepted levels of measurement and statistical significance, and relations between scientific method and public policy. This paper conceptualizes the hazard-detection and solution-seeking activities of Love Canal, Woburn, and other communities as popular epidemiology: the process by which lay persons gather data and direct and marshal the knowledge and resources of experts in order to understand the epidemiology of disease, treat existing and prevent future disease, and remove the responsible environmental contaminants. Based on different needs, goals, and methods, laypeople and professionals have conflicting perspectives on how to investigate and interpret environmental health data.) <|cite_end|> <|cite_start|> (Reference: Citizen Science for public health: Abstract Community engagement in public health policy is easier said than done. One reason is that public health policy is produced in a complex process resulting in policies that may appear not to link up to citizen perspectives. We therefore address the central question as to whether citizen engagement in knowledge production could enable inclusive health policy making. Building on non-health work fields, we describe different types of citizen engagement in scientific research, or ‘Citizen Science’. We describe the challenges that Citizen Science poses for public health, and how these could be addressed. Despite these challenges, we expect that Citizen Science or similar approaches such as participatory action research and ‘popular epidemiology’ may yield better knowledge, empowered communities, and improved community health. We provide a draft framework to enable evaluation of Citizen Science in practice, consisting of a descriptive typology of different kinds of Citizen Science and a causal framework that shows how Citizen Science in public health might benefit both the knowledge produced as well as the ‘Citizen Scientists’ as active participants.) <|cite_end|>. Gass {\em et al.} investigated the joint effects of outdoor air pollutants on emergency department visits for pediatric asthma by applying Decision Tree learning <|cite_start|> (Reference: Classification and regression trees for epidemiologic research: an air pollution example: ) <|cite_end|>. The authors suggested using Decision Tree learning to hypothesize about potential joint effects of predictors for further investigation. Stingone {\em et al.} trained decision trees to identify possible interaction patterns between air pollutants and math test scores of kindergarten children <|cite_start|> (Reference: Using machine learning to identify air pollution exposure profiles associated with early cognitive skills among U.S. children.: ) <|cite_end|>. 
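As a concrete illustration of this style of tree-based interpretation (a generic sketch rather than the pipeline of either study; the file and column names are hypothetical), one could fit a shallow tree and print its split rules:
\begin{verbatim}
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical table: one row per day with pollutant levels and an outcome flag.
df = pd.read_csv("daily_exposure.csv")  # assumed columns listed below
features = ["pm25", "no2", "ozone", "temperature"]
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30, random_state=0)
tree.fit(df[features], df["high_asthma_visits"])
print(export_text(tree, feature_names=features))  # human-readable split rules
\end{verbatim}
The printed decision paths can then be read as candidate joint effects to examine further with conventional statistics.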
Hochachka {\em et al.} fused traditional statistical techniques with boosted regression trees to extract species distribution patterns from the data collected via the eBird platform <|cite_start|> (Reference: Data-intensive science applied to broad-scale citizen science.: ) <|cite_end|>. These previous studies utilized domain knowledge to fit machine learning models with high explanatory power on filtered citizen science data. In this paper, we also used decision trees to explore hidden interrelationships in the data. This extracted knowledge can reveal local concerns and serve as convincing evidence for communities to take action. <|paper_end|>
"<|reference_start|> Buckets of resistance: Standards and the effectiveness of citizen science: In light of arguments that citizen science has the potential to make environmental knowledge and policy more robust and democratic, this article inquires into the factors that shape the ability of citizen science to actually influence scientists and decision makers. Using the case of community-based air toxics monitoring with ‘‘buckets,’’ it argues that citizen science’s effectiveness is significantly influenced by standards and standardized practices. It demonstrates that, on one hand, standards serve a boundary-bridging function that affords bucket monitoring data a crucial measure of legitimacy among experts. On the other hand, standards simultaneously serve a boundary-policing function, allowing experts to dismiss bucket data as irrelevant to the central project of air quality assessment. The article thus calls attention to standard setting as an important site of intervention for citizen science-based efforts to democratize science and policy. <|reference_end|>",
"<|reference_start|> Citizen Scientists: Reconnecting Science with Civil Society: <|reference_end|>",
"<|reference_start|> Designing Interactive Systems for Community Citizen Science: Citizen science forges partnerships between experts and citizens through collaboration and has become a trend in public participation in scientific research over the past decade. Besides this trend, public participation can also contribute to participatory democracy, which empowers citizens to advocate for their local problems. This strategy supports citizens to form a community, increase environmental monitoring, gather evidence, and tell convincing stories. Researchers believe that this “community citizen science” strategy can contribute to the well-being of communities by giving them the power to influence the general public and decision makers. Community citizen science requires collecting, curating, visualizing, analyzing, and interpreting multiple types of data over a large spacetime scale. This is highly dependent on community engagement (i.e., the involvement of citizens in local neighborhoods). Such large-scale tasks require the assistance of innovative computational tools to give technology affordance to communities. However, existing tools often focus on only one type of data, and thus researchers need to develop tools from scratch. Moreover, there is a lack of design patterns for researchers to reference when developing such tools. Furthermore, existing tools are typically treated as products rather than ongoing infrastructures that sustain community engagement. This research studies the methodology of developing computational tools by using visualization, crowdsourcing, and artificial intelligence techniques to support the entire community engagement lifecycle, from initiation, maintenance, to evaluation. This research will make methodological and empirical contributions to community citizen science and sustainable human-computer interaction. Methodological contributions include detailed case studies with applied techniques from information technology systems that are deployed in real-world contexts. Empirical contributions include generalizable empirical insights for developing interactive systems that integrate multiple types of scientific data. In this dissertation, I first define “community citizen science” and explain corresponding design challenges. Then, I review existing computational tools and techniques that are related to this research. Next, I present four interactive systems centered around the research scope: (1) a timelapse editor that supports building evidence-based narratives, (2) an air quality monitoring system that integrates heterogeneous data and computer vision to support the formation of scientific knowledge, (3) a visualization tool that reveals the impact of oil and gas development, and (4) a mobile crowdsourced application for reporting and visualizing pollution odors. Finally, I synthesize findings from all four works into generalizable design implications for future researchers and developers. <|reference_end|>",
"<|reference_start|> A Low-tech Sensing System for Particulate Pollution: We present an ultra low-cost sensing system, which enables participants to see and reflect on the particulates in their air. Drawing on prior work in paper computing, we introduce small sensors for particulate pollution that can be easily assembled from common paper materials for less than $1 USD, and mailed by regular postal service to residents of entire neighborhoods, cities, or geographic regions. Recipients collect particulate samples using these sensors and mail them back to a central location, where the particles are viewed and analyzed via a microscope. The data, which includes rich images of actual air pollution particles, can then be broadcast to larger audiences. This paper details the design of our system and its deployment with a local air quality activist community. We conclude by highlighting the tradeoffs between high-tech and low-tech sensing, and suggest opportunities for tangible interaction to support rich, new ways of seeing our environment. <|reference_end|>"
] | [
9,
26,
32,
53
] | {"<|multi_cite_1_1|>": "ss-1190761", "<|multi_cite_1_2|>": "ss-1447758", "<|multi_cite_1_3|>": "ss-1131638", "<|multi_cite_1_4|>": "ss-1131639", "<|cite_2|>": "ss-1131640", "<|cite_3|>": "ss-1131641", "<|cite_4|>": "ss-1131642", "<|cite_5|>": "ss-1131643", "<|cite_6|>": "ss-1131644", "<|multi_cite_7_1|>": "ss-1131645", "<|multi_cite_7_2|>": "ss-1131646", "<|cite_8|>": "ss-826293", "<|multi_cite_10_2|>": "ss-1261926", "<|multi_cite_10_3|>": "ss-1261930", "<|multi_cite_10_4|>": "ss-1445716", "<|multi_cite_10_5|>": "ss-1189865", "<|multi_cite_12_1|>": "ss-1447769", "<|multi_cite_12_2|>": "ss-1261928", "<|multi_cite_12_3|>": "ss-1131647", "<|multi_cite_12_4|>": "ss-1131648", "<|multi_cite_12_5|>": "ss-1445714", "<|multi_cite_12_6|>": "ss-1447766", "<|multi_cite_12_7|>": "ss-1036396", "<|multi_cite_12_8|>": "ss-1131649", "<|multi_cite_13_1|>": "ss-1445715", "<|multi_cite_13_3|>": "ss-1447767", "<|multi_cite_13_4|>": "ss-1447768", "<|multi_cite_13_5|>": "ss-989014", "<|multi_cite_13_6|>": "ss-993133", "<|multi_cite_13_7|>": "ss-1131650", "<|multi_cite_13_8|>": "ss-1131651", "<|multi_cite_13_10|>": "ss-989012", "<|multi_cite_13_11|>": "ss-1131652", "<|multi_cite_14_1|>": "ss-1447761", "<|multi_cite_14_2|>": "ss-1447762", "<|multi_cite_14_3|>": "ss-1447763", "<|multi_cite_14_4|>": "ss-1447764", "<|multi_cite_14_5|>": "ss-1447765", "<|multi_cite_14_6|>": "ss-1131653", "<|cite_15|>": "ss-1131654", "<|cite_17|>": "ss-1445717", "<|multi_cite_18_1|>": "ss-1445718", "<|multi_cite_18_2|>": "ss-1131655", "<|cite_19|>": "ss-1447776", "<|multi_cite_20_1|>": "ss-856987", "<|multi_cite_20_2|>": "ss-1263589", "<|multi_cite_22_1|>": "ss-1131656", "<|multi_cite_22_2|>": "ss-951165", "<|cite_23|>": "ss-780891", "<|multi_cite_24_1|>": "ss-1131657", "<|multi_cite_24_2|>": "ss-1131658", "<|cite_25|>": "ss-1131659", "<|multi_cite_26_1|>": "ss-1131660", "<|multi_cite_26_2|>": "ss-1447775", "<|multi_cite_27_2|>": "ss-1131661", "<|multi_cite_28_1|>": "ss-1131662", "<|multi_cite_28_2|>": "ss-1189866", "<|multi_cite_29_1|>": "ss-1131647", "<|multi_cite_29_2|>": "ss-1131645", "<|multi_cite_29_3|>": "ss-1261926", "<|multi_cite_29_4|>": "ss-1131663", "<|multi_cite_30_1|>": "ss-793106", "<|multi_cite_30_2|>": "ss-1008655", "<|multi_cite_30_3|>": "ss-1097483", "<|multi_cite_30_4|>": "ss-1017134", "<|multi_cite_30_5|>": "ss-743193", "<|multi_cite_30_6|>": "ss-1131661", "<|multi_cite_30_7|>": "ss-1131664", "<|cite_31|>": "ss-1268321", "<|cite_32|>": "ss-1131665", "<|cite_33|>": "ss-1131666", "<|cite_34|>": "ss-1131667", "<|multi_cite_35_1|>": "ss-1131668", "<|multi_cite_35_2|>": "ss-1131663", "<|cite_36|>": "ss-1131669", "<|cite_37|>": "ss-1131670", "<|cite_38|>": "ss-2357457"} |
2308.14133 | <|paper_start|> Title: Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars
Abstract: Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars: The Segment Anything Model (SAM) has demonstrated remarkable capabilities of scaled-up segmentation models, enabling zero-shot generalization across a variety of domains. By leveraging large-scale foundational models as pre-trained models, it is a natural progression to fine-tune SAM for specific domains to further enhance performances. However, the adoption of foundational models in the medical domain presents a challenge due to the difficulty and expense of labeling sufficient data for adaptation within hospital systems. In this paper, we introduce an efficient and practical approach for fine-tuning SAM using a limited number of exemplars, making it suitable for such scenarios. Our approach combines two established techniques from the literature: an exemplar-guided synthesis module and the widely recognized Low-Rank Adaptation (LoRA) fine-tuning strategy, serving as data-level and model-level attempts respectively. Interestingly, our empirical findings suggest that SAM can be effectively aligned within the medical domain even with few labeled data. We validate our approach through experiments on brain tumor segmentation (BraTS) and multi-organ CT segmentation (Synapse). The comprehensive results underscore the feasibility and effectiveness of such an approach, paving the way for the practical application of SAM in the medical domain.
Introduction
Nowadays, foundation models <|cite_start|> (Reference: On the Opportunities and Risks of Foundation Models: AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles(e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities,and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.) <|cite_end|> have revolutionized the AI community, demonstrating immense potential to solve tasks within an integrated framework and achieve remarkable zero-shot and few-shot performances <|cite_start|> (Reference: Language Models are Few-Shot Learners: Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. 
Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.) <|cite_end|> <|cite_start|> (Reference: PaLM: Scaling Language Modeling with Pathways: Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.) <|cite_end|>.
The Segment Anything Model (SAM) <|cite_start|> (Reference: Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.) <|cite_end|>, a promptable model trained on over 1 billion masks and 11 million images, is an attempt to build a foundation model for segmentation. SAM has shown impressive zero-shot segmentation ability on new data across different distributions and tasks.
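To make the promptable interface concrete, a minimal zero-shot usage sketch with the official segment_anything package could look as follows; the checkpoint path, image file, and click coordinates are placeholders:
\begin{verbatim}
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # embed the image once, then prompt repeatedly

masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # a single foreground click
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate
\end{verbatim}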
However, SAM's performance has been found to be limited in certain domains, such as medical image segmentation <|cite_start|> (Reference: Sam fails to segment anything?--sam-adapter: Adapting sam in underperformed scenes: Camouflage, shadow, and more: P.R) <|cite_end|> <|cite_start|> (Reference: Segment Anything Model for Medical Images?: The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-range object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our findings mainly include the following: 1) SAM showed remarkable performance in some specific objects but was unstable, imperfect, or even totally failed in other situations. 2) SAM with the large ViT-H showed better overall performance than that with the small ViT-B. 3) SAM performed better with manual hints, especially box, than the Everything mode. 4) SAM could help human annotation with high labeling quality and less time. 5) SAM was sensitive to the randomness in the center point and tight box prompts, and may suffer from a serious performance drop. 6) SAM performed better than interactive methods with one or a few points, but will be outpaced as the number of points increases. 7) SAM's performance correlated to different factors, including boundary complexity, intensity differences, etc. 8) Finetuning the SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS, and guide how to appropriately use and develop SAM.) <|cite_end|> <|cite_start|> (Reference: Customized Segment Anything Model for Medical Image Segmentation: We propose SAMed, a general solution for medical image segmentation. Different from the previous methods, SAMed is built upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation. SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets. We also observe the warmup finetuning strategy and the AdamW optimizer lead SAMed to successful convergence and lower loss. Different from SAM, SAMed could perform semantic segmentation on medical images. Our trained SAMed model achieves 81.88 DSC and 20.64 HD on the Synapse multi-organ segmentation dataset, which is on par with the state-of-the-art methods. We conduct extensive experiments to validate the effectiveness of our design. Since SAMed only updates a small fraction of the SAM parameters, its deployment cost and storage cost are quite marginal in practical usage. The code of SAMed is available at https://github.com/hitachinsk/SAMed.)
<|cite_end|>, low-level structural segmentation <|cite_start|> (Reference: Sam fails to segment anything?--sam-adapter: Adapting sam in underperformed scenes: Camouflage, shadow, and more: P.R) <|cite_end|>, and intricate object segmentation <|cite_start|> (Reference: Segment Anything in High Quality: The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced detaset of 44k masks, which takes only 4 hours on 8 GPUs. We show the efficacy of HQ-SAM in a suite of 10 diverse segmentation datasets across different downstream tasks, where 8 out of them are evaluated in a zero-shot transfer protocol. Our code and pretrained models are at https://github.com/SysCV/SAM-HQ.) <|cite_end|>. To address these limitations, researchers have sought to enhance the performance of pre-trained models across domains by fine-tuning SAM or externally designed components <|cite_start|> (Reference: Sam fails to segment anything?--sam-adapter: Adapting sam in underperformed scenes: Camouflage, shadow, and more: P.R) <|cite_end|> <|cite_start|> (Reference: Segment Anything in High Quality: The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced detaset of 44k masks, which takes only 4 hours on 8 GPUs. We show the efficacy of HQ-SAM in a suite of 10 diverse segmentation datasets across different downstream tasks, where 8 out of them are evaluated in a zero-shot transfer protocol. Our code and pretrained models are at https://github.com/SysCV/SAM-HQ.)
<|cite_end|> <|cite_start|> (Reference: Segment Anything in Medical Images: Medical image segmentation is a critical component in clinical practice, facilitating accurate diagnosis, treatment planning, and disease monitoring. However, existing methods, often tailored to specific modalities or disease types, lack generalizability across the diverse spectrum of medical image segmentation tasks. Here we present MedSAM, a foundation model designed for bridging this gap by enabling universal medical image segmentation. The model is developed on a large-scale medical image dataset with 1,570,263 image-mask pairs, covering 10 imaging modalities and over 30 cancer types. We conduct a comprehensive evaluation on 86 internal validation tasks and 60 external validation tasks, demonstrating better accuracy and robustness than modality-wise specialist models. By delivering accurate and efficient segmentation across a wide spectrum of tasks, MedSAM holds significant potential to expedite the evolution of diagnostic tools and the personalization of treatment plans.) <|cite_end|> <|cite_start|> (Reference: Customized Segment Anything Model for Medical Image Segmentation: We propose SAMed, a general solution for medical image segmentation. Different from the previous methods, SAMed is built upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation. SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets. We also observe the warmup finetuning strategy and the AdamW optimizer lead SAMed to successful convergence and lower loss. Different from SAM, SAMed could perform semantic segmentation on medical images. Our trained SAMed model achieves 81.88 DSC and 20.64 HD on the Synapse multi-organ segmentation dataset, which is on par with the state-of-the-art methods. We conduct extensive experiments to validate the effectiveness of our design. Since SAMed only updates a small fraction of the SAM parameters, its deployment cost and storage cost are quite marginal in practical usage. The code of SAMed is available at https://github.com/hitachinsk/SAMed.) <|cite_end|>. As a result, fine-tuning SAM with medical images could be more feasible and promising to facilitate segmentation tasks in real clinical applications <|cite_start|> (Reference: Customized Segment Anything Model for Medical Image Segmentation: We propose SAMed, a general solution for medical image segmentation. Different from the previous methods, SAMed is built upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation. SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets. We also observe the warmup finetuning strategy and the AdamW optimizer lead SAMed to successful convergence and lower loss. Different from SAM, SAMed could perform semantic segmentation on medical images. Our trained SAMed model achieves 81.88 DSC and 20.64 HD on the Synapse multi-organ segmentation dataset, which is on par with the state-of-the-art methods. We conduct extensive experiments to validate the effectiveness of our design. 
Since SAMed only updates a small fraction of the SAM parameters, its deployment cost and storage cost are quite marginal in practical usage. The code of SAMed is available at https://github.com/hitachinsk/SAMed.) <|cite_end|>.
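The low-rank adaptation strategy referenced above can be sketched in a few lines of PyTorch. This is a generic illustration of the LoRA idea rather than SAMed's exact implementation, and the assumption that every ViT block in SAM's image encoder exposes a fused qkv linear projection should be checked against the released code:
\begin{verbatim}
import torch.nn as nn
from segment_anything import sam_model_registry

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # the update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder path
for p in sam.image_encoder.parameters():
    p.requires_grad = False  # freeze the heavy encoder
for blk in sam.image_encoder.blocks:  # assumed attribute layout of the ViT blocks
    blk.attn.qkv = LoRALinear(blk.attn.qkv, rank=4)
\end{verbatim}
Only the low-rank matrices (and, typically, the lightweight mask decoder) are then updated during fine-tuning, which keeps the trainable parameter count small.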
Despite these advancements, the adoption of medical image segmentation in real hospitals remains challenging due to the need for large curated datasets. Fine-tuning SAM on labeled images of specific instruments is also required to align the model's understanding of the domain scope within the hospital. This introduces a time-consuming, labor-intensive, and expensive process of data labeling <|cite_start|> (Reference: Annotation-efficient deep learning for automatic medical image segmentation: Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.) <|cite_end|>. Consequently, there is growing interest in developing effective methods to leverage limited annotated data for training deep learning models <|cite_start|> (Reference: Contrastive learning of global and local features for medical image segmentation with limited annotations: A key requirement for the success of supervised deep learning is a large labeled dataset - a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues. Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8% of benchmark performance using only two labeled MRI volumes for training, corresponding to only 4% (for ACDC) of the training data used to train the benchmark. The code is made public at https://github.com/krishnabits001/domain_specific_cl.)
<|cite_end|> <|cite_start|> (Reference: Exemplar Learning for Medical Image Segmentation: Medical image annotation typically requires expert knowledge and hence incurs time-consuming and expensive data annotation costs. To alleviate this burden, we propose a novel learning scenario, Exemplar Learning (EL), to explore automated learning processes for medical image segmentation with a single annotated image example. This innovative learning task is particularly suitable for medical image segmentation, where all categories of organs can be presented in one single image and annotated all at once. To address this challenging EL task, we propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation that enables innovative exemplar-based data synthesis, pixel-prototype based contrastive embedding learning, and pseudo-label based exploitation of the unlabeled data. Specifically, ELSNet introduces two new modules for image segmentation: an exemplar-guided synthesis module, which enriches and diversifies the training set by synthesizing annotated samples from the given exemplar, and a pixel-prototype based contrastive embedding module, which enhances the discriminative capacity of the base segmentation model via contrastive representation learning. Moreover, we deploy a two-stage process for segmentation model training, which exploits the unlabeled data with predicted pseudo segmentation labels. To evaluate this new learning framework, we conduct extensive experiments on several organ segmentation datasets and present an in-depth analysis. The empirical results show that the proposed exemplar learning framework produces effective segmentation results.) <|cite_end|> <|cite_start|> (Reference: Annotation-efficient deep learning for automatic medical image segmentation: Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.) <|cite_end|>.
Among the various attempts to utilize small sets of labeled data, we consider exemplar-based learning an intriguing approach. This scenario, which involves using a single expert-annotated image that covers all parts of the whole organ category set <|cite_start|> (Reference: Exemplar Learning for Medical Image Segmentation: Medical image annotation typically requires expert knowledge and hence incurs time-consuming and expensive data annotation costs. To alleviate this burden, we propose a novel learning scenario, Exemplar Learning (EL), to explore automated learning processes for medical image segmentation with a single annotated image example. This innovative learning task is particularly suitable for medical image segmentation, where all categories of organs can be presented in one single image and annotated all at once. To address this challenging EL task, we propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation that enables innovative exemplar-based data synthesis, pixel-prototype based contrastive embedding learning, and pseudo-label based exploitation of the unlabeled data. Specifically, ELSNet introduces two new modules for image segmentation: an exemplar-guided synthesis module, which enriches and diversifies the training set by synthesizing annotated samples from the given exemplar, and a pixel-prototype based contrastive embedding module, which enhances the discriminative capacity of the base segmentation model via contrastive representation learning. Moreover, we deploy a two-stage process for segmentation model training, which exploits the unlabeled data with predicted pseudo segmentation labels. To evaluate this new learning framework, we conduct extensive experiments on several organ segmentation datasets and present an in-depth analysis. The empirical results show that the proposed exemplar learning framework produces effective segmentation results.) <|cite_end|>, can significantly reduce the labeling expenses in hospital systems. This raises the question: \textbf{Can we fine-tune foundation models (SAM) on few exemplars to achieve significant improvements in medical image segmentation?}
In this paper, we integrate two well-established techniques from the literature to serve as the data-level and model-level attempts. On the data-level, we employ the exemplar-guided synthesis module in <|cite_start|> (Reference: Exemplar Learning for Medical Image Segmentation: Medical image annotation typically requires expert knowledge and hence incurs time-consuming and expensive data annotation costs. To alleviate this burden, we propose a novel learning scenario, Exemplar Learning (EL), to explore automated learning processes for medical image segmentation with a single annotated image example. This innovative learning task is particularly suitable for medical image segmentation, where all categories of organs can be presented in one single image and annotated all at once. To address this challenging EL task, we propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation that enables innovative exemplar-based data synthesis, pixel-prototype based contrastive embedding learning, and pseudo-label based exploitation of the unlabeled data. Specifically, ELSNet introduces two new modules for image segmentation: an exemplar-guided synthesis module, which enriches and diversifies the training set by synthesizing annotated samples from the given exemplar, and a pixel-prototype based contrastive embedding module, which enhances the discriminative capacity of the base segmentation model via contrastive representation learning. Moreover, we deploy a two-stage process for segmentation model training, which exploits the unlabeled data with predicted pseudo segmentation labels. To evaluate this new learning framework, we conduct extensive experiments on several organ segmentation datasets and present an in-depth analysis. The empirical results show that the proposed exemplar learning framework produces effective segmentation results.) <|cite_end|> to generate a synthetic training dataset through geometric and intensity transformation. On the model-level, our fine-tuning strategy is based on the widely recognized Low-Rank Adaptation (LoRA) <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. 
We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|> and, specifically, we adhere to the basic architecture outlined in <|cite_start|> (Reference: Customized Segment Anything Model for Medical Image Segmentation: We propose SAMed, a general solution for medical image segmentation. Different from the previous methods, SAMed is built upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation. SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets. We also observe the warmup finetuning strategy and the AdamW optimizer lead SAMed to successful convergence and lower loss. Different from SAM, SAMed could perform semantic segmentation on medical images. Our trained SAMed model achieves 81.88 DSC and 20.64 HD on the Synapse multi-organ segmentation dataset, which is on par with the state-of-the-art methods. We conduct extensive experiments to validate the effectiveness of our design. Since SAMed only updates a small fraction of the SAM parameters, its deployment cost and storage cost are quite marginal in practical usage. The code of SAMed is available at https://github.com/hitachinsk/SAMed.) <|cite_end|>. Notably, we adopt the ViT-Base image encoder and update a total of only 6.32 million parameters. Unlike many works that rely on A100 40/80GB GPUs, all of our experiments can be run on more accessible GPUs such as the RTX 3090 with 24GB of memory.
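As a concrete illustration of the two components above, the sketches below show (i) how synthetic training pairs can be generated from a single annotated exemplar and (ii) how a frozen linear layer can be wrapped with a trainable low-rank update. Both are minimal illustrative sketches in our own notation: the function names, transformation ranges, rank $r$, and scaling $\alpha$ are placeholder choices and do not reproduce the released code of the cited works.
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def synthesize_pair(image, mask, rng):
    # Data-level sketch: one synthetic (image, mask) pair from a single
    # annotated exemplar via random geometric and intensity transforms.
    angle = rng.uniform(-15.0, 15.0)
    img = rotate(image, angle, reshape=False, order=1)
    msk = rotate(mask, angle, reshape=False, order=0)  # nearest for labels
    gain, bias = rng.uniform(0.9, 1.1), rng.uniform(-0.05, 0.05)
    img = np.clip(img * gain + bias, 0.0, 1.0)         # intensity jitter
    return img, msk
\end{verbatim}
\begin{verbatim}
import torch.nn as nn

class LoRALinear(nn.Module):
    # Model-level sketch: y = Wx + (alpha/r) * B(Ax), with the pre-trained
    # weight W frozen and only the low-rank factors A, B trained.
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pre-trained weights frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # the low-rank update starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
\end{verbatim}
In a SAM-style encoder, such wrappers would typically replace the query and value projections of each transformer block, so that only the low-rank factors (plus the lightweight decoder components) are updated during fine-tuning.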
We assess the effectiveness of our approach on two medical image segmentation tasks: brain tumor segmentation (BraTS 2018 <|cite_start|> (Reference: Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features: ) <|cite_end|> <|cite_start|> (Reference: Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.) <|cite_end|> <|cite_start|> (Reference: The multimodal brain tumor image segmentation benchmark (BraTS): In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. 
Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.) <|cite_end|>) and multi-organ CT segmentation (Synapse\footnote{\url{https://www.synapse.org/\#!Synapse:syn3193805/wiki/217789}}). Extensive results suggest that fine-tuning SAM on a few exemplars can strike a balance between accuracy and annotation labor, offering a cost-effective solution for medical image segmentation.
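Accuracy on both benchmarks is commonly summarized with the Dice similarity coefficient (DSC). The snippet below is a minimal per-class DSC computation included purely for illustration; it assumes integer label maps and does not reproduce the benchmark-specific evaluation rules (e.g., how tumor sub-regions are grouped).
\begin{verbatim}
import numpy as np

def dice_per_class(pred, target, num_classes, eps=1e-6):
    # Per-class Dice similarity coefficient for integer label maps;
    # class 0 is treated as background and skipped.
    scores = []
    for c in range(1, num_classes):
        p = (pred == c)
        t = (target == c)
        inter = np.logical_and(p, t).sum()
        scores.append((2.0 * inter + eps) / (p.sum() + t.sum() + eps))
    return scores
\end{verbatim}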
In summary, our contributions are twofold: (1) We introduce the attempt of fine-tuning the foundation segmentation model SAM with few exemplars for medical image segmentation. (2) We present comprehensive results on two datasets from different sub-domains, using only 1\% labeled data, demonstrating the feasibility of this cost-effective solution. <|paper_end|> | [
"<|reference_start|> Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision. <|reference_end|>",
"<|reference_start|> Segment Anything in High Quality: The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced detaset of 44k masks, which takes only 4 hours on 8 GPUs. We show the efficacy of HQ-SAM in a suite of 10 diverse segmentation datasets across different downstream tasks, where 8 out of them are evaluated in a zero-shot transfer protocol. Our code and pretrained models are at https://github.com/SysCV/SAM-HQ. <|reference_end|>",
"<|reference_start|> Customized Segment Anything Model for Medical Image Segmentation: We propose SAMed, a general solution for medical image segmentation. Different from the previous methods, SAMed is built upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation. SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets. We also observe the warmup finetuning strategy and the AdamW optimizer lead SAMed to successful convergence and lower loss. Different from SAM, SAMed could perform semantic segmentation on medical images. Our trained SAMed model achieves 81.88 DSC and 20.64 HD on the Synapse multi-organ segmentation dataset, which is on par with the state-of-the-art methods. We conduct extensive experiments to validate the effectiveness of our design. Since SAMed only updates a small fraction of the SAM parameters, its deployment cost and storage cost are quite marginal in practical usage. The code of SAMed is available at https://github.com/hitachinsk/SAMed. <|reference_end|>",
"<|reference_start|> Exemplar Learning for Medical Image Segmentation: Medical image annotation typically requires expert knowledge and hence incurs time-consuming and expensive data annotation costs. To alleviate this burden, we propose a novel learning scenario, Exemplar Learning (EL), to explore automated learning processes for medical image segmentation with a single annotated image example. This innovative learning task is particularly suitable for medical image segmentation, where all categories of organs can be presented in one single image and annotated all at once. To address this challenging EL task, we propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation that enables innovative exemplar-based data synthesis, pixel-prototype based contrastive embedding learning, and pseudo-label based exploitation of the unlabeled data. Specifically, ELSNet introduces two new modules for image segmentation: an exemplar-guided synthesis module, which enriches and diversifies the training set by synthesizing annotated samples from the given exemplar, and a pixel-prototype based contrastive embedding module, which enhances the discriminative capacity of the base segmentation model via contrastive representation learning. Moreover, we deploy a two-stage process for segmentation model training, which exploits the unlabeled data with predicted pseudo segmentation labels. To evaluate this new learning framework, we conduct extensive experiments on several organ segmentation datasets and present an in-depth analysis. The empirical results show that the proposed exemplar learning framework produces effective segmentation results. <|reference_end|>"
] | [
3,
8,
12,
19
] | {"<|cite_1|>": "arxiv-361235", "<|multi_cite_2_1|>": "arxiv-268228", "<|multi_cite_2_2|>": "arxiv-411079", "<|cite_3|>": "arxiv-494904", "<|multi_cite_4_1|>": "ss-1357421", "<|multi_cite_4_2|>": "arxiv-500665", "<|multi_cite_4_3|>": "arxiv-500224", "<|cite_5|>": "ss-1357421", "<|cite_6|>": "arxiv-512145", "<|multi_cite_7_1|>": "ss-1357421", "<|multi_cite_7_2|>": "arxiv-512145", "<|multi_cite_7_3|>": "arxiv-499533", "<|multi_cite_7_4|>": "arxiv-500224", "<|cite_8|>": "arxiv-500224", "<|cite_9|>": "arxiv-308815", "<|multi_cite_10_1|>": "arxiv-272856", "<|multi_cite_10_2|>": "arxiv-410821", "<|multi_cite_10_3|>": "arxiv-308815", "<|cite_11|>": "arxiv-410821", "<|cite_12|>": "arxiv-410821", "<|cite_13|>": "arxiv-349236", "<|cite_14|>": "arxiv-500224", "<|multi_cite_15_1|>": "ss-770741", "<|multi_cite_15_2|>": "arxiv-179331", "<|multi_cite_15_3|>": "ss-807620"} |
2108.08173 | <|paper_start|> Title: Wideband Channel Estimation for THz Massive MIMO
Abstract: Wideband Channel Estimation for THz Massive MIMO: Terahertz (THz) communication is considered to be a promising technology for future 6G network. To overcome the severe attenuation and relieve the high power consumption, massive MIMO with hybrid precoding has been widely considered for THz communication. However, accurate wideband channel estimation is challenging in THz massive MIMO systems. The existing wideband channel estimation schemes based on the ideal assumption of common sparse channel support will suffer from a severe performance loss due to the beam split effect. In this paper, we propose a beam split pattern detection based channel estimation scheme to realize reliable wideband channel estimation. Specifically, a comprehensive analysis on the angle-domain sparse structure of the wideband channel is provided by considering the beam split effect. Based on the analysis, we define a series of index sets called as beam split patterns, which are proved to have a one-to-one match to different physical channel directions. Inspired by this one-to-one match, we propose to estimate the physical channel direction by exploiting beam split patterns at first. Then, the sparse channel supports at different subcarriers can be obtained by utilizing a support detection window. This support detection window is generated by expanding the beam split pattern which is determined by the obtained physical channel direction. The above estimation procedure will be repeated path by path until all path components are estimated. The proposed scheme exploits the wideband channel property implied by the beam split effect, which can significantly improve the channel estimation accuracy. Simulation results show that the proposed scheme is able to achieve higher accuracy than existing schemes.
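For readers unfamiliar with the beam split effect invoked above, a minimal formalization (in our notation, assuming a uniform linear array with half-wavelength spacing at the central carrier frequency $f_c$) is
\begin{equation}
\mathbf{a}(\psi)=\frac{1}{\sqrt{N}}\left[1,\,e^{-j\pi\psi},\,\ldots,\,e^{-j\pi(N-1)\psi}\right]^{T},\qquad
\psi_m=\frac{f_m}{f_c}\sin\theta,
\end{equation}
i.e., a path with physical direction $\theta$ is observed at subcarrier frequency $f_m$ under the frequency-dependent spatial direction $\psi_m$ rather than the common direction assumed by narrowband models; this subcarrier-dependent shift of the dominant angle-domain entries is the beam split effect that the beam split patterns above are designed to capture.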
Introduction
Terahertz (THz) communication has been considered as one of the promising techniques for future 6G network, since it can provide tenfold bandwidth increase and thus support ultra-high transmission rate <|cite_start|> (Reference: {Wireless Communications and applications above 100 GHz: Opportunities and challenges for 6G and beyond: Frequencies from 100 GHz to 3 THz are promising bands for the next generation of wireless communication systems because of the wide swaths of unused and unexplored spectrum. These frequencies also offer the potential for revolutionary applications that will be made possible by new thinking, and advances in devices, circuits, software, signal processing, and systems. This paper describes many of the technical challenges and opportunities for wireless communication and sensing applications above 100 GHz, and presents a number of promising discoveries, novel approaches, and recent results that will aid in the development and implementation of the sixth generation (6G) of wireless networks, and beyond. This paper shows recent regulatory and standard body rulings that are anticipating wireless products and services above 100 GHz and illustrates the viability of wireless cognition, hyper-accurate position location, sensing, and imaging. This paper also presents approaches and results that show how long distance mobile communications will be supported to above 800 GHz since the antenna gains are able to overcome air-induced attenuation, and present methods that reduce the computational complexity and simplify the signal processing used in adaptive antenna arrays, by exploiting the Special Theory of Relativity to create a cone of silence in over-sampled antenna arrays that improve performance for digital phased array antennas. Also, new results that give insights into power efficient beam steering algorithms, and new propagation and partition loss models above 100 GHz are given, and promising imaging, array processing, and position location results are presented. The implementation of spatial consistency at THz frequencies, an important component of channel modeling that considers minute changes and correlations over space, is also discussed. This paper offers the first in-depth look at the vast applications of THz wireless products and applications and provides approaches for how to reduce power and increase performance across several problem domains, giving early evidence that THz techniques are compelling and available for future wireless communications.) <|cite_end|> <|cite_start|> (Reference: Toward 6G networks: Use cases and technologies: Reliable data connectivity is vital for the ever increasingly intelligent, automated, and ubiquitous digital world. Mobile networks are the data highways and, in a fully connected, intelligent digital world, will need to connect everything, including people to vehicles, sensors, data, cloud resources, and even robotic agents. Fifth generation (5G) wireless networks, which are currently being deployed, offer significant advances beyond LTE, but may be unable to meet the full connectivity demands of the future digital society. Therefore, this article discusses technologies that will evolve wireless networks toward a sixth generation (6G) and which we consider as enablers for several potential 6G use cases. We provide a fullstack, system-level perspective on 6G scenarios and requirements, and select 6G technologies that can satisfy them either by improving the 5G design or by introducing completely new communication paradigms.) 
<|cite_end|> <|cite_start|> (Reference: 6g wireless networks: Vision, requirements, architecture, and key technologies: A key enabler for the intelligent information society of 2030, 6G networks are expected to provide performance superior to 5G and satisfy emerging services and applications. In this article, we present our vision of what 6G will be and describe usage scenarios and requirements for multi-terabyte per second (Tb/s) and intelligent 6G networks. We present a large-dimensional and autonomous network architecture that integrates space, air, ground, and underwater networks to provide ubiquitous and unlimited wireless connectivity. We also discuss artificial intelligence (AI) and machine learning [1], [2] for autonomous networks and innovative air-interface design. Finally, we identify several promising technologies for the 6G ecosystem, including terahertz (THz) communications, very-large-scale antenna arrays [i.e., supermassive (SM) multiple-input, multiple-output (MIMO)], large intelligent surfaces (LISs) and holographic beamforming (HBF), orbital angular momentum (OAM) multiplexing, laser and visible-light communications (VLC), blockchain-based spectrum sharing, quantum communications and computing, molecular communications, and the Internet of Nano-Things.) <|cite_end|> <|cite_start|> (Reference: {A survey on terahertz communications: With the exponential growth of the data traffic in wireless communication systems, terahertz (THz) frequency band is envisioned as a promising candidate to support ultra-broadband for future beyond fifth generation (5G), bridging the gap between millimeter wave (mmWave) and optical frequency ranges. The purpose of this paper is to provide a comprehensive literature review on the development towards THz communications and presents some key technologies faced in THz wireless communication systems. Firstly, despite the substantial hardware problems that have to be developed in terms of the THz solid state superheterodyne receiver, high speed THz modulators and THz antennas, the practical THz channel model and the efficient THz beamforming are also described to compensate for the severe path attenuation. Moreover, two different kinds of lab-level THz communication systems are introduced minutely, named a solid state THz communication system and a spatial direct modulation THz communication system, respectively. The solid state THz system converts intermediate frequency (IF) modulated signal to THz frequency while the direct modulation THz system allows the high power THz sources to input for approving the relatively long distance communications. Finally, we discuss several potential application scenarios as well as some vital technical challenges that will be encountered in the future THz communications.) <|cite_end|> <|cite_start|> (Reference: Multi-wideband waveform design for distance-adaptive wireless communications in the Terahertz band: Terahertz band communication is envisioned as a key technology to satisfy the increasing demand for ultra-high-speed wireless links. In this paper, a multi-wideband waveform design for the THz band is proposed, by exploiting the channel peculiarities including the distance-varying spectral windows, the delay spread and the temporal broadening effects. This scheme allows the dynamical variation of the rate and the transmit power on each sub-window and improves the distance. 
Moreover, the closed-form expressions of the signal-to-interference-plus-the-noise and bit-error-rate for the multi-wideband waveform are derived, by considering the inter-symbol and inter-band interferences. Then, an optimization framework is formulated to solve for the multi-wideband waveform design parameters of the transmit power and the number of frames, with the aim to maximize the communication distance while satisfying the rate and the transmit power constraints. Four sub-optimal solutions are proposed and compared. The results show that the SINR increases with the transmit power and the number of frames, at the cost of the power consumption and the rate decrease. With the transmit power of 10 dBm, the largest distance to support 10 Gbps for the multi-path propagation is 4 m, which is realized via the power allocation scheme to minimize the power/bit on each sub-window and is 10% improvement over the fixed scheme. However, for the directional transmission, this scheme under-exploits the transmit power severely. Instead, the allocation scheme that minimizes the number of frames outperforms the other three schemes. In terms of the maximum distance that achieves 30 Gbps, this scheme reaches 22.5 m.) <|cite_end|>. To overcome the severe attenuation in the THz band (i.e., $0.1-10$ THz <|cite_start|> (Reference: {Wireless Communications and applications above 100 GHz: Opportunities and challenges for 6G and beyond: Frequencies from 100 GHz to 3 THz are promising bands for the next generation of wireless communication systems because of the wide swaths of unused and unexplored spectrum. These frequencies also offer the potential for revolutionary applications that will be made possible by new thinking, and advances in devices, circuits, software, signal processing, and systems. This paper describes many of the technical challenges and opportunities for wireless communication and sensing applications above 100 GHz, and presents a number of promising discoveries, novel approaches, and recent results that will aid in the development and implementation of the sixth generation (6G) of wireless networks, and beyond. This paper shows recent regulatory and standard body rulings that are anticipating wireless products and services above 100 GHz and illustrates the viability of wireless cognition, hyper-accurate position location, sensing, and imaging. This paper also presents approaches and results that show how long distance mobile communications will be supported to above 800 GHz since the antenna gains are able to overcome air-induced attenuation, and present methods that reduce the computational complexity and simplify the signal processing used in adaptive antenna arrays, by exploiting the Special Theory of Relativity to create a cone of silence in over-sampled antenna arrays that improve performance for digital phased array antennas. Also, new results that give insights into power efficient beam steering algorithms, and new propagation and partition loss models above 100 GHz are given, and promising imaging, array processing, and position location results are presented. The implementation of spatial consistency at THz frequencies, an important component of channel modeling that considers minute changes and correlations over space, is also discussed. 
This paper offers the first in-depth look at the vast applications of THz wireless products and applications and provides approaches for how to reduce power and increase performance across several problem domains, giving early evidence that THz techniques are compelling and available for future wireless communications.) <|cite_end|>), massive multiple-input multiple-output (MIMO), which can generate directional beams by a large-scale antenna array, is essential for THz communication <|cite_start|> (Reference: {A survey on terahertz communications: With the exponential growth of the data traffic in wireless communication systems, terahertz (THz) frequency band is envisioned as a promising candidate to support ultra-broadband for future beyond fifth generation (5G), bridging the gap between millimeter wave (mmWave) and optical frequency ranges. The purpose of this paper is to provide a comprehensive literature review on the development towards THz communications and presents some key technologies faced in THz wireless communication systems. Firstly, despite the substantial hardware problems that have to be developed in terms of the THz solid state superheterodyne receiver, high speed THz modulators and THz antennas, the practical THz channel model and the efficient THz beamforming are also described to compensate for the severe path attenuation. Moreover, two different kinds of lab-level THz communication systems are introduced minutely, named a solid state THz communication system and a spatial direct modulation THz communication system, respectively. The solid state THz system converts intermediate frequency (IF) modulated signal to THz frequency while the direct modulation THz system allows the high power THz sources to input for approving the relatively long distance communications. Finally, we discuss several potential application scenarios as well as some vital technical challenges that will be encountered in the future THz communications.) <|cite_end|>. However, the traditional fully-digital structure, where each antenna is connected to one radio-frequency (RF) chain, will introduce very high power consumption <|cite_start|> (Reference: An Overview of Signal Processing Techniques for Millimeter Wave MIMO Systems: Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.) <|cite_end|>. 
To solve this problem, hybrid precoding structure can be used for THz communication <|cite_start|> (Reference: Spatially Sparse Precoding in Millimeter Wave MIMO Systems: Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding/combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered.) <|cite_end|> <|cite_start|> (Reference: BDMA for Millimeter-Wave/Terahertz Massive MIMO Transmission with Per-Beam Synchronization: We propose beam division multiple access (BDMA) with per-beam synchronization (PBS) in time and frequency for wideband massive multiple-input multiple-output (MIMO) transmission over millimeter-wave (mmW)/Terahertz (THz) bands. We first introduce a physically motivated beam domain channel model for massive MIMO and demonstrate that the envelopes of the beam domain channel elements tend to be independent of time and frequency when both the numbers of antennas at base station and user terminals (UTs) tend to infinity. Motivated by the derived beam domain channel properties, we then propose PBS for mmW/THz massive MIMO. We show that both the effective delay and Doppler frequency spreads of wideband massive MIMO channels with PBS are reduced by a factor of the number of UT antennas compared with the conventional synchronization approaches. Subsequently, we apply PBS to BDMA, investigate beam scheduling to maximize the achievable ergodic rates for both uplink and downlink BDMA, and develop a greedy beam scheduling algorithm. Simulation results verify the effectiveness of BDMA with PBS for mmW/THz wideband massive MIMO systems in typical mobility scenarios.) <|cite_end|> <|cite_start|> (Reference: Generalized hybrid beamforming for vehicular connectivity using THz massive MIMO: Hybrid beamforming (HBF) array structure has been extensively demonstrated as the practically feasible architecture for massive multiple-input multiple-output (MIMO). From the perspectives of spectral efficiency (SE), energy efficiency (EE), cost, and hardware complexity, HBF strikes a balanced performance tradeoff when compared with the fully analog and the fully digital implementations. 
Using the HBF architecture, it is possible to realize three different subarray structures, specifically the fully connected, the subconnected and the overlapped subarray structures. This paper presents a novel generalized framework for the design and performance analysis of the HBF architecture. A parameter, known as the subarray spacing, is introduced such that varying its value leads to the different subarray configurations and the consequent changes in the system performance. Using a realistic power consumption model, we investigate the performance of the generalized HBF array structure in a cellular infrastructure-to-everything application scenario (involving pedestrian and vehicular users) using the single-path terahertz (THz) channel model. Simulation results are provided for the comparative performance analysis of the different subarray structures. The results show that the overlapped subarray implementation maintains a balanced tradeoff in terms of SE, EE, and hardware cost when compared with the popular fully connected and the subconnected structures. The overlapped subarray structure, therefore, offers promising potentials for the beyond-5G networks employing THz massive MIMO to deliver ultrahigh data rates whilst maintaining a balance in the EE of the network.) <|cite_end|>, where the high-dimensional precoder is decomposed into a high-dimensional analog beamformer (usually realized by analog components <|cite_start|> (Reference: Codebook Design for Millimeter-Wave Channel Estimation with Hybrid Precoding Structure: In this paper, we study hierarchical codebook design for channel estimation in millimeter-wave (mmWave) communications with a hybrid precoding structure. Due to the limited saturation power of mmWave power amplifier (PA), we take the per-antenna power constraint (PAPC) into consideration. We first propose a metric, i.e., generalized detection probability (GDP), to evaluate the quality of \emph{an arbitrary codeword}. This metric not only enables an optimization approach for mmWave codebook design, but also can be used to compare the performance of two different codewords/codebooks. To the best of our knowledge, GDP is the first metric particularly for mmWave codebook design for channel estimation. We then propose an approach to design a hierarchical codebook exploiting BeaM Widening with Multi-RF-chain Sub-array technique (BMW-MS). To obtain crucial parameters of BMW-MS, we provide two solutions, namely a low-complexity search (LCS) solution to optimize the GDP metric and a closed-form (CF) solution to pursue a flat beam pattern. Performance comparisons show that BMW-MS/LCS and BMW-MS/CF achieve very close performances, and they outperform the existing alternatives under the PAPC.) <|cite_end|>) and a low-dimensional digital precoder (usually realized by a reduced number of RF chains). Thanks to the sparsity of THz channels, it has been proved that hybrid precoding is able to achieve the near-optimal achievable rate performance <|cite_start|> (Reference: Spatially Sparse Precoding in Millimeter Wave MIMO Systems: Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. 
Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding/combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered.) <|cite_end|> <|cite_start|> (Reference: BDMA for Millimeter-Wave/Terahertz Massive MIMO Transmission with Per-Beam Synchronization: We propose beam division multiple access (BDMA) with per-beam synchronization (PBS) in time and frequency for wideband massive multiple-input multiple-output (MIMO) transmission over millimeter-wave (mmW)/Terahertz (THz) bands. We first introduce a physically motivated beam domain channel model for massive MIMO and demonstrate that the envelopes of the beam domain channel elements tend to be independent of time and frequency when both the numbers of antennas at base station and user terminals (UTs) tend to infinity. Motivated by the derived beam domain channel properties, we then propose PBS for mmW/THz massive MIMO. We show that both the effective delay and Doppler frequency spreads of wideband massive MIMO channels with PBS are reduced by a factor of the number of UT antennas compared with the conventional synchronization approaches. Subsequently, we apply PBS to BDMA, investigate beam scheduling to maximize the achievable ergodic rates for both uplink and downlink BDMA, and develop a greedy beam scheduling algorithm. Simulation results verify the effectiveness of BDMA with PBS for mmW/THz wideband massive MIMO systems in typical mobility scenarios.) <|cite_end|> <|cite_start|> (Reference: Generalized hybrid beamforming for vehicular connectivity using THz massive MIMO: Hybrid beamforming (HBF) array structure has been extensively demonstrated as the practically feasible architecture for massive multiple-input multiple-output (MIMO). From the perspectives of spectral efficiency (SE), energy efficiency (EE), cost, and hardware complexity, HBF strikes a balanced performance tradeoff when compared with the fully analog and the fully digital implementations. Using the HBF architecture, it is possible to realize three different subarray structures, specifically the fully connected, the subconnected and the overlapped subarray structures. This paper presents a novel generalized framework for the design and performance analysis of the HBF architecture. A parameter, known as the subarray spacing, is introduced such that varying its value leads to the different subarray configurations and the consequent changes in the system performance. 
Using a realistic power consumption model, we investigate the performance of the generalized HBF array structure in a cellular infrastructure-to-everything application scenario (involving pedestrian and vehicular users) using the single-path terahertz (THz) channel model. Simulation results are provided for the comparative performance analysis of the different subarray structures. The results show that the overlapped subarray implementation maintains a balanced tradeoff in terms of SE, EE, and hardware cost when compared with the popular fully connected and the subconnected structures. The overlapped subarray structure, therefore, offers promising potentials for the beyond-5G networks employing THz massive MIMO to deliver ultrahigh data rates whilst maintaining a balance in the EE of the network.) <|cite_end|>.
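To fix notation for this architecture, a schematic transmit model (written in the symbols commonly used in the hybrid precoding literature, not necessarily those adopted later in this paper) is
\begin{equation}
\mathbf{x}=\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}\mathbf{s},\qquad
\mathbf{F}_{\mathrm{RF}}\in\mathbb{C}^{N_t\times N_{\mathrm{RF}}},\quad
\mathbf{F}_{\mathrm{BB}}\in\mathbb{C}^{N_{\mathrm{RF}}\times N_s},\quad
N_s\le N_{\mathrm{RF}}\ll N_t,
\end{equation}
where the analog beamformer $\mathbf{F}_{\mathrm{RF}}$ is realized by phase shifters and is therefore restricted to constant-modulus entries $\left|[\mathbf{F}_{\mathrm{RF}}]_{i,j}\right|=1/\sqrt{N_t}$, while the low-dimensional digital precoder $\mathbf{F}_{\mathrm{BB}}$ is only subject to a total power normalization $\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}\|_F^2\le N_s$. The number of RF chains $N_{\mathrm{RF}}$, rather than the number of antennas $N_t$, then dominates the power consumption of the mixed-signal hardware.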
\subsection{Prior works}
To design an efficient hybrid precoder, the high-dimensional channel is essential at the base station (BS). However, channel estimation is challenging in massive MIMO systems with a hybrid precoding structure <|cite_start|> (Reference: Millimeter Wave Beamforming for Wireless Backhaul and Access in Small Cell Networks: Recently, there has been considerable interest in new tiered network cellular architectures, which would likely use many more cell sites than found today. Two major challenges will be i) providing backhaul to all of these cells and ii) finding efficient techniques to leverage higher frequency bands for mobile access and backhaul. This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell. To overcome the outdoor impairments found in millimeter wave propagation, this paper studies beamforming using large arrays. However, such systems will require narrow beams, increasing sensitivity to movement caused by pole sway and other environmental concerns. To overcome this, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a corresponding performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods. The results show significant performance improvement with reduced search time.) <|cite_end|>. Specifically, since the number of RF chains is much smaller than the number of antennas in the hybrid precoding structure, the BS cannot obtain signals at each antenna element simultaneously. As a result, to obtain sufficient observations to accurately estimate the high-dimensional channel, the channel estimation overhead of conventional channel estimation schemes, e.g., the least squares (LS) scheme, will be unacceptable when the number of antennas is very large <|cite_start|> (Reference: {Millimeter wave mobile communications for 5G cellular: it will work!: The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.) <|cite_end|>.
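A rough, illustrative calculation (the numbers here are ours and are not taken from the paper) makes the overhead concrete: with $N_{\mathrm{RF}}$ RF chains, each pilot slot provides at most $N_{\mathrm{RF}}$ independent observations of an $N$-dimensional per-subcarrier channel, so a non-parametric estimator such as LS requires at least
\begin{equation}
T_{\mathrm{pilot}}\;\ge\;\frac{N}{N_{\mathrm{RF}}}
\end{equation}
pilot slots; for instance, $N=256$ antennas served by $N_{\mathrm{RF}}=4$ RF chains already imply at least $64$ pilot slots per coherence block, which motivates the sparsity-exploiting estimators reviewed next.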
To deal with this problem, exploiting channel sparsity with the help of compressive sensing algorithms for channel estimation has been widely investigated to realize low-overhead channel estimation in massive MIMO systems <|cite_start|> (Reference: Distributed Compressive CSIT Estimation and Feedback for FDD Multi-user Massive MIMO Systems: To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. The multi-user massive MIMO systems exhibits a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.) <|cite_end|> <|cite_start|> (Reference: Channel Estimation via Orthogonal Matching Pursuit for Hybrid MIMO Systems in Millimeter Wave Communications: We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures/arrivals (AoDs/AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs/AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. 
It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.) <|cite_end|> <|cite_start|> (Reference: Compressive channel estimation and tracking for large arrays in mm-wave picocells: We propose and investigate a compressive architecture for estimation and tracking of sparse spatial channels in millimeter (mm) wave picocellular networks. The base stations are equipped with antenna arrays with a large number of elements (which can fit within compact form factors because of the small carrier wavelength) and employ radio frequency (RF) beamforming, so that standard least squares adaptation techniques (which require access to individual antenna elements) are not applicable. We focus on the downlink, and show that “compressive beacons,” transmitted using pseudorandom phase settings at the base station array, and compressively processed using pseudorandom phase settings at the mobile array, provide information sufficient for accurate estimation of the two-dimensional (2D) spatial frequencies associated with the directions of departure of the dominant rays from the base station, and the associated complex gains. This compressive approach is compatible with coarse phase-only control, and is based on a near-optimal sequential algorithm for frequency estimation which approaches the Cramér Rao Lower Bound. The algorithm exploits the geometric continuity of the channel across successive beaconing intervals to reduce the overhead to less than 1% even for very large (32 × 32) arrays. Compressive beaconing is essentially omnidirectional, and hence does not enjoy the SNR and spatial reuse benefits of beamforming obtained during data transmission. We therefore discuss system level design considerations for ensuring that the beacon SNR is sufficient for accurate channel estimation, and that inter-cell beacon interference is controlled by an appropriate reuse scheme.) <|cite_end|> <|cite_start|> (Reference: Compressive Sensing Based Channel Estimation for Millimeter-Wave Full-Dimensional MIMO with Lens-Array: Channel estimation (CE) for millimeter-wave (mmWave) lens-array suffers from prohibitive training overhead, whereas the state-of-the-art solutions require an extra complicated radio frequency phase shift network. By contrast, lens-array using antenna switching network (ASN) simplifies the hardware, but the associated CE is a challenging task due to the constraint imposed by ASN. This paper proposes a compressive sensing (CS)-based CE solution for full-dimensional (FD) lens-array, where the mmWave channel sparsity is exploited. Specifically, we first propose an approach of pilot training under the more severe haraware constraint imposed by ASN, and formulate the associated CE of lens-array as a CS problem. Then, a redundant dictionary is tailored for FD lens-array to combat the power leakage caused by the continuous angles of multipath components. Further, we design the baseband pilot signals to minimize the total mutual coherence of the measurement matrix based on CS theory for more reliable CE performance. Our solution provides a framework for applying CS techniques to lens-array using simple and practical ASN. Simulation results demonstrate the effectiveness of the proposed scheme.) 
<|cite_end|> <|cite_start|> (Reference: Super-resolution compressed sensing for line spectral estimation: An iterative reweighted approach: Conventional compressed sensing theory assumes signals have sparse representations in a known dictionary. Nevertheless, in many practical applications such as line spectral estimation, the sparsifying dictionary is usually characterized by a set of unknown parameters in a continuous domain. To apply the conventional compressed sensing technique to such applications, the continuous parameter space has to be discretized to a finite set of grid points, based on which a “nominal dictionary” is constructed for sparse signal recovery. Discretization, however, inevitably incurs errors since the true parameters do not necessarily lie on the discretized grid. This error, also referred to as grid mismatch, leads to deteriorated recovery performance. In this paper, we consider the line spectral estimation problem and propose an iterative reweighted method which jointly estimates the sparse signals and the unknown parameters associated with the true dictionary. The proposed algorithm is developed by iteratively decreasing a surrogate function majorizing a given log-sum objective function, leading to a gradual and interweaved iterative process to refine the unknown parameters and the sparse signal. A simple yet effective scheme is developed for adaptively updating the regularization parameter that controls the tradeoff between the sparsity of the solution and the data fitting error. Theoretical analysis is conducted to justify the proposed method. Simulation results show that the proposed algorithm achieves super resolution and outperforms other state-of-the-art methods in many cases of practical interest.) <|cite_end|> <|cite_start|> (Reference: Channel Estimation for Millimeter-Wave Massive MIMO with Hybrid Precoding over Frequency-Selective Fading Channels: Channel estimation for millimeter-wave (mmWave) massive MIMO with hybrid precoding is challenging, since the number of radio frequency (RF) chains is usually much smaller than that of antennas. To date, several channel estimation schemes have been proposed for mmWave massive MIMO over narrow-band channels, while practical mmWave channels exhibit the frequency-selective fading (FSF). To this end, this letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over FSF channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing (DCS)-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure (AoA/AoD). Simulation results verify that the good performance of the proposed solution.) <|cite_end|> <|cite_start|> (Reference: Channel estimation for hybrid architecture-based wideband millimeter wave systems: Hybrid analog and digital precoding allows millimeter wave (mmWave) systems to achieve both array and multiplexing gain. The design of the hybrid precoders and combiners, though, is usually based on the knowledge of the channel. Prior work on mmWave channel estimation with hybrid architectures focused on narrowband channels. Since mmWave systems will be wideband with frequency selectivity, it is vital to develop channel estimation solutions for hybrid architectures-based wideband mmWave systems. 
In this paper, we develop a sparse formulation and compressed sensing-based solutions for the wideband mmWave channel estimation problem for hybrid architectures. First, we leverage the sparse structure of the frequency-selective mmWave channels and formulate the channel estimation problem as a sparse recovery in both time and frequency domains. Then, we propose explicit channel estimation techniques for purely time or frequency domains and for combined time/frequency domains. Our solutions are suitable for both single carrier-frequency domain equalization and orthogonal frequency-division multiplexing systems. Simulation results show that the proposed solutions achieve good channel estimation quality, while requiring small training overhead. Leveraging the hybrid architecture at the transceivers gives further improvement in estimation error performance and achievable rates.) <|cite_end|> <|cite_start|> (Reference: Closed-Loop Sparse Channel Estimation for Wideband Millimeter-Wave Full-Dimensional MIMO Systems: This paper proposes a closed-loop sparse channel estimation (CE) scheme for wideband millimeter-wave hybrid full-dimensional multiple-input multiple-output and time division duplexing based systems, which exploits the channel sparsity in both angle and delay domains. At the downlink CE stage, random transmit precoder is designed at base station (BS) for channel sounding, and receive combiners at user devices (UDs) are designed to visualize hybrid array as a low-dimensional digital array for facilitating the multi-dimensional unitary ESPRIT (MDU-ESPRIT) algorithm to estimate respective angle-of-arrivals (AoAs). At the uplink CE stage, the estimated downlink AoAs, namely, uplink angle-of-departures (AoDs), are exploited to design multi-beam transmit precoder at UDs to enable BS to estimate the uplink AoAs, i.e., the downlink AoDs, and delays of different UDs using the MDU-ESPRIT algorithm based on the designed receive combiners at BS. Furthermore, a maximum likelihood approach is proposed to pair the channel parameters acquired at the two stages, and the path gains are then obtained using least squares estimator. According to spectrum estimation theory, our solution can acquire the super-resolution estimations of the AoAs/AoDs and delays of sparse multipath components with low training overhead. Simulation results verify the better CE performance and lower computational complexity of our solution over existing state-of-the-art approaches.) <|cite_end|>. For example, a distributed compressive sensing based multi-user channel estimation scheme was proposed in <|cite_start|> (Reference: Distributed Compressive CSIT Estimation and Feedback for FDD Multi-user Massive MIMO Systems: To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. The multi-user massive MIMO systems exhibits a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. 
As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.) <|cite_end|>, where the joint angle-domain channel sparsity among different users was utilized. <|cite_start|> (Reference: Channel Estimation via Orthogonal Matching Pursuit for Hybrid MIMO Systems in Millimeter Wave Communications: We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures/arrivals (AoDs/AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs/AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.) <|cite_end|> proposed an orthogonal matching pursuit (OMP) based channel estimation scheme for massive MIMO systems with hybrid precoding structure by using the angle-domain channel sparsity. Besides, a joint channel estimation and tracking scheme was also proposed based on the framework of compressive sensing in <|cite_start|> (Reference: Compressive channel estimation and tracking for large arrays in mm-wave picocells: We propose and investigate a compressive architecture for estimation and tracking of sparse spatial channels in millimeter (mm) wave picocellular networks. 
The base stations are equipped with antenna arrays with a large number of elements (which can fit within compact form factors because of the small carrier wavelength) and employ radio frequency (RF) beamforming, so that standard least squares adaptation techniques (which require access to individual antenna elements) are not applicable. We focus on the downlink, and show that “compressive beacons,” transmitted using pseudorandom phase settings at the base station array, and compressively processed using pseudorandom phase settings at the mobile array, provide information sufficient for accurate estimation of the two-dimensional (2D) spatial frequencies associated with the directions of departure of the dominant rays from the base station, and the associated complex gains. This compressive approach is compatible with coarse phase-only control, and is based on a near-optimal sequential algorithm for frequency estimation which approaches the Cramér Rao Lower Bound. The algorithm exploits the geometric continuity of the channel across successive beaconing intervals to reduce the overhead to less than 1% even for very large (32 × 32) arrays. Compressive beaconing is essentially omnidirectional, and hence does not enjoy the SNR and spatial reuse benefits of beamforming obtained during data transmission. We therefore discuss system level design considerations for ensuring that the beacon SNR is sufficient for accurate channel estimation, and that inter-cell beacon interference is controlled by an appropriate reuse scheme.) <|cite_end|>. In addition, the channel estimation problem in lens-array based massive MIMO with a simple antenna switching network is investigated in <|cite_start|> (Reference: Compressive Sensing Based Channel Estimation for Millimeter-Wave Full-Dimensional MIMO with Lens-Array: Channel estimation (CE) for millimeter-wave (mmWave) lens-array suffers from prohibitive training overhead, whereas the state-of-the-art solutions require an extra complicated radio frequency phase shift network. By contrast, lens-array using antenna switching network (ASN) simplifies the hardware, but the associated CE is a challenging task due to the constraint imposed by ASN. This paper proposes a compressive sensing (CS)-based CE solution for full-dimensional (FD) lens-array, where the mmWave channel sparsity is exploited. Specifically, we first propose an approach of pilot training under the more severe haraware constraint imposed by ASN, and formulate the associated CE of lens-array as a CS problem. Then, a redundant dictionary is tailored for FD lens-array to combat the power leakage caused by the continuous angles of multipath components. Further, we design the baseband pilot signals to minimize the total mutual coherence of the measurement matrix based on CS theory for more reliable CE performance. Our solution provides a framework for applying CS techniques to lens-array using simple and practical ASN. Simulation results demonstrate the effectiveness of the proposed scheme.) <|cite_end|>, where a redundant dictionary and the corresponding compressive sensing based scheme are proposed.
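To make the compressive sensing formulation that underlies these narrowband schemes concrete, the following toy sketch illustrates a generic OMP-based angle-domain channel estimator for a single subcarrier; it is only a minimal illustration under simplifying assumptions (a half-wavelength uniform linear array, an $N$-point DFT dictionary, random pilot combining, and exactly on-grid paths), not a re-implementation of any of the cited schemes.
\begin{verbatim}
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: recover a K-sparse x from y ~ Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(K):
        # Pick the dictionary column most correlated with the residual.
        k = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        support.append(k)
        # Least-squares re-fit on the current support, then update the residual.
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = x_s
    return x, support

# Toy narrowband setup (illustrative sizes only).
rng = np.random.default_rng(0)
N, Q, K = 64, 32, 3                       # antennas, pilot measurements, paths
F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # angle-domain (DFT) dictionary
x_true = np.zeros(N, dtype=complex)
idx = rng.choice(N, K, replace=False)
x_true[idx] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
h = F @ x_true                            # exactly sparse angle-domain channel
A = (rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))) / np.sqrt(2 * Q)
y = A @ h + 0.01 * (rng.standard_normal(Q) + 1j * rng.standard_normal(Q))
x_hat, _ = omp(A @ F, y, K)
print(np.linalg.norm(F @ x_hat - h) / np.linalg.norm(h))  # normalized error
\end{verbatim}
In practice, the cited schemes differ in how the measurement matrix is built (e.g., hybrid combiners or antenna switching networks), in the dictionary used (e.g., redundant or non-uniform grids), and in how sparsity is shared across users, but they follow the same sparse-recovery template.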
However, these schemes in <|cite_start|> (Reference: Distributed Compressive CSIT Estimation and Feedback for FDD Multi-user Massive MIMO Systems: To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. The multi-user massive MIMO systems exhibits a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.) <|cite_end|> <|cite_start|> (Reference: Channel Estimation via Orthogonal Matching Pursuit for Hybrid MIMO Systems in Millimeter Wave Communications: We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures/arrivals (AoDs/AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs/AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. 
The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.) <|cite_end|> <|cite_start|> (Reference: Compressive channel estimation and tracking for large arrays in mm-wave picocells: We propose and investigate a compressive architecture for estimation and tracking of sparse spatial channels in millimeter (mm) wave picocellular networks. The base stations are equipped with antenna arrays with a large number of elements (which can fit within compact form factors because of the small carrier wavelength) and employ radio frequency (RF) beamforming, so that standard least squares adaptation techniques (which require access to individual antenna elements) are not applicable. We focus on the downlink, and show that “compressive beacons,” transmitted using pseudorandom phase settings at the base station array, and compressively processed using pseudorandom phase settings at the mobile array, provide information sufficient for accurate estimation of the two-dimensional (2D) spatial frequencies associated with the directions of departure of the dominant rays from the base station, and the associated complex gains. This compressive approach is compatible with coarse phase-only control, and is based on a near-optimal sequential algorithm for frequency estimation which approaches the Cramér Rao Lower Bound. The algorithm exploits the geometric continuity of the channel across successive beaconing intervals to reduce the overhead to less than 1% even for very large (32 × 32) arrays. Compressive beaconing is essentially omnidirectional, and hence does not enjoy the SNR and spatial reuse benefits of beamforming obtained during data transmission. We therefore discuss system level design considerations for ensuring that the beacon SNR is sufficient for accurate channel estimation, and that inter-cell beacon interference is controlled by an appropriate reuse scheme.) <|cite_end|> <|cite_start|> (Reference: Compressive Sensing Based Channel Estimation for Millimeter-Wave Full-Dimensional MIMO with Lens-Array: Channel estimation (CE) for millimeter-wave (mmWave) lens-array suffers from prohibitive training overhead, whereas the state-of-the-art solutions require an extra complicated radio frequency phase shift network. By contrast, lens-array using antenna switching network (ASN) simplifies the hardware, but the associated CE is a challenging task due to the constraint imposed by ASN. This paper proposes a compressive sensing (CS)-based CE solution for full-dimensional (FD) lens-array, where the mmWave channel sparsity is exploited. Specifically, we first propose an approach of pilot training under the more severe haraware constraint imposed by ASN, and formulate the associated CE of lens-array as a CS problem. Then, a redundant dictionary is tailored for FD lens-array to combat the power leakage caused by the continuous angles of multipath components. Further, we design the baseband pilot signals to minimize the total mutual coherence of the measurement matrix based on CS theory for more reliable CE performance. Our solution provides a framework for applying CS techniques to lens-array using simple and practical ASN. Simulation results demonstrate the effectiveness of the proposed scheme.) <|cite_end|> were designed for narrowband systems. 
Although these narrowband schemes can be extended in wideband systems, carrying out narrowband schemes subcarrier by subcarrier will result in high complexity due to a large number of subcarriers, especially in wideband THz massive MIMO systems. To realize efficient wideband channel estimation, wideband channel estimation schemes have been proposed for millimeter-wave massive MIMO systems <|cite_start|> (Reference: Channel Estimation for Millimeter-Wave Massive MIMO with Hybrid Precoding over Frequency-Selective Fading Channels: Channel estimation for millimeter-wave (mmWave) massive MIMO with hybrid precoding is challenging, since the number of radio frequency (RF) chains is usually much smaller than that of antennas. To date, several channel estimation schemes have been proposed for mmWave massive MIMO over narrow-band channels, while practical mmWave channels exhibit the frequency-selective fading (FSF). To this end, this letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over FSF channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing (DCS)-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure (AoA/AoD). Simulation results verify that the good performance of the proposed solution.) <|cite_end|> <|cite_start|> (Reference: Channel estimation for hybrid architecture-based wideband millimeter wave systems: Hybrid analog and digital precoding allows millimeter wave (mmWave) systems to achieve both array and multiplexing gain. The design of the hybrid precoders and combiners, though, is usually based on the knowledge of the channel. Prior work on mmWave channel estimation with hybrid architectures focused on narrowband channels. Since mmWave systems will be wideband with frequency selectivity, it is vital to develop channel estimation solutions for hybrid architectures-based wideband mmWave systems. In this paper, we develop a sparse formulation and compressed sensing-based solutions for the wideband mmWave channel estimation problem for hybrid architectures. First, we leverage the sparse structure of the frequency-selective mmWave channels and formulate the channel estimation problem as a sparse recovery in both time and frequency domains. Then, we propose explicit channel estimation techniques for purely time or frequency domains and for combined time/frequency domains. Our solutions are suitable for both single carrier-frequency domain equalization and orthogonal frequency-division multiplexing systems. Simulation results show that the proposed solutions achieve good channel estimation quality, while requiring small training overhead. Leveraging the hybrid architecture at the transceivers gives further improvement in estimation error performance and achievable rates.) <|cite_end|>. In particular, <|cite_start|> (Reference: Channel Estimation for Millimeter-Wave Massive MIMO with Hybrid Precoding over Frequency-Selective Fading Channels: Channel estimation for millimeter-wave (mmWave) massive MIMO with hybrid precoding is challenging, since the number of radio frequency (RF) chains is usually much smaller than that of antennas. 
To date, several channel estimation schemes have been proposed for mmWave massive MIMO over narrow-band channels, while practical mmWave channels exhibit the frequency-selective fading (FSF). To this end, this letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over FSF channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing (DCS)-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure (AoA/AoD). Simulation results verify that the good performance of the proposed solution.) <|cite_end|> proposed a simultaneous orthogonal matching pursuit (SOMP) based scheme, where channels at different subcarriers were jointly estimated based on the assumption of common sparse channel support (i.e., the sparse channel supports at different subcarriers are the same). Besides, an OMP based wideband channel estimation scheme was proposed in <|cite_start|> (Reference: Channel estimation for hybrid architecture-based wideband millimeter wave systems: Hybrid analog and digital precoding allows millimeter wave (mmWave) systems to achieve both array and multiplexing gain. The design of the hybrid precoders and combiners, though, is usually based on the knowledge of the channel. Prior work on mmWave channel estimation with hybrid architectures focused on narrowband channels. Since mmWave systems will be wideband with frequency selectivity, it is vital to develop channel estimation solutions for hybrid architectures-based wideband mmWave systems. In this paper, we develop a sparse formulation and compressed sensing-based solutions for the wideband mmWave channel estimation problem for hybrid architectures. First, we leverage the sparse structure of the frequency-selective mmWave channels and formulate the channel estimation problem as a sparse recovery in both time and frequency domains. Then, we propose explicit channel estimation techniques for purely time or frequency domains and for combined time/frequency domains. Our solutions are suitable for both single carrier-frequency domain equalization and orthogonal frequency-division multiplexing systems. Simulation results show that the proposed solutions achieve good channel estimation quality, while requiring small training overhead. Leveraging the hybrid architecture at the transceivers gives further improvement in estimation error performance and achievable rates.) <|cite_end|>, where the sparse channel supports at some subcarriers were independently estimated using the classical OMP algorithm, and then the wideband channel was recovered based on the common sparse channel support created by the already obtained sparse channel supports. Furthermore, <|cite_start|> (Reference: Closed-Loop Sparse Channel Estimation for Wideband Millimeter-Wave Full-Dimensional MIMO Systems: This paper proposes a closed-loop sparse channel estimation (CE) scheme for wideband millimeter-wave hybrid full-dimensional multiple-input multiple-output and time division duplexing based systems, which exploits the channel sparsity in both angle and delay domains. 
At the downlink CE stage, random transmit precoder is designed at base station (BS) for channel sounding, and receive combiners at user devices (UDs) are designed to visualize hybrid array as a low-dimensional digital array for facilitating the multi-dimensional unitary ESPRIT (MDU-ESPRIT) algorithm to estimate respective angle-of-arrivals (AoAs). At the uplink CE stage, the estimated downlink AoAs, namely, uplink angle-of-departures (AoDs), are exploited to design multi-beam transmit precoder at UDs to enable BS to estimate the uplink AoAs, i.e., the downlink AoDs, and delays of different UDs using the MDU-ESPRIT algorithm based on the designed receive combiners at BS. Furthermore, a maximum likelihood approach is proposed to pair the channel parameters acquired at the two stages, and the path gains are then obtained using least squares estimator. According to spectrum estimation theory, our solution can acquire the super-resolution estimations of the AoAs/AoDs and delays of sparse multipath components with low training overhead. Simulation results verify the better CE performance and lower computational complexity of our solution over existing state-of-the-art approaches.) <|cite_end|> proposed a close-loop sparse channel estimation solution for multi-user massive MIMO systems. Unfortunately, the ideal assumption of common sparse channel support in the above two schemes is not practical for THz systems due to the beam split effect <|cite_start|> (Reference: {Delay-Phase Precoding for THz Massive MIMO with Beam Split: Benefiting from tens of GHz bandwidth, Terahertz (THz) communications has been considered as one of the promising technologies for the future 6G wireless communications. To compensate the serious attenuation in THz band and avoid huge power consumption, massive multiple input multiple output (MIMO) with hybrid precoding is widely considered. However, the traditional phase-shifter (PS) based hybrid precoding architecture cannot cope with the effect of beam split in THz communications, which means that the path components of THz channel split into different spatial directions at different subcarrier frequencies, leading serious array gain loss. In this paper, we first point out the seriousness of beam split effect in THz massive MIMO by analyzing the array gain loss caused by the beam split effect. To compensate this array gain loss, we propose a new hybrid precoding architecture called delay-phase precoding (DPP). In the proposed DPP, a time delay (TD) network is introduced between radio- frequency chains and the traditional PS network, which converts phase-controlled analog precoding into delay-phase controlled analog precoding. When carrying out precoding, the time delays in the TD network are dedicatedly designed to generate frequency-dependent beams which are aligned with the spatial directions over the whole bandwidth. Thanks to the joint control of delay and phase, the proposed DPP can significantly alleviate the beam split effect. Simulation results reveal that the proposed DPP can generate beams with the near- optimal array gain over the whole bandwidth, and achieve the near-optimal achievable rate performance.) <|cite_end|>. 
Specifically, the beam split effect can be seen as a severe case of the widely known beam squint <|cite_start|> (Reference: Space-Time Block Coding-Based Beamforming for Beam Squint Compensation: In this letter, the beam squint problem, which causes significant variations in radiated beam gain over frequencies in a millimeter wave communication system, is investigated. A constant modulus beamformer design, which is formulated to maximize the expected average beam gain within the bandwidth with limited variation over frequencies within the bandwidth, is proposed. A semidefinite relaxation method is developed to solve the optimization problem under the constant modulus constraints. Depending on the eigenvalues of the optimal solution, either direct beamforming or transmit diversity-based beamforming is employed for data transmissions. Through numerical results, the proposed transmission scheme can compensate for beam squint effectively and improve system throughput. Overall, a transmission scheme for beam squint compensation in wideband wireless communication systems is provided.) <|cite_end|>. It means that, because of the wide bandwidth and the large number of antennas in THz massive MIMO systems, the spatial channel directions at different subcarriers become separated from each other in the angle-domain, i.e., they are located at different angle-domain samples. The beam split effect will induce frequency-dependent sparse channel supports at different subcarriers. Consequently, the assumption of common sparse channel support does not hold, which means the existing schemes for millimeter-wave massive MIMO <|cite_start|> (Reference: Channel Estimation for Millimeter-Wave Massive MIMO with Hybrid Precoding over Frequency-Selective Fading Channels: Channel estimation for millimeter-wave (mmWave) massive MIMO with hybrid precoding is challenging, since the number of radio frequency (RF) chains is usually much smaller than that of antennas. To date, several channel estimation schemes have been proposed for mmWave massive MIMO over narrow-band channels, while practical mmWave channels exhibit the frequency-selective fading (FSF). To this end, this letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over FSF channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing (DCS)-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure (AoA/AoD). Simulation results verify that the good performance of the proposed solution.) <|cite_end|> <|cite_start|> (Reference: Channel estimation for hybrid architecture-based wideband millimeter wave systems: Hybrid analog and digital precoding allows millimeter wave (mmWave) systems to achieve both array and multiplexing gain. The design of the hybrid precoders and combiners, though, is usually based on the knowledge of the channel. Prior work on mmWave channel estimation with hybrid architectures focused on narrowband channels. Since mmWave systems will be wideband with frequency selectivity, it is vital to develop channel estimation solutions for hybrid architectures-based wideband mmWave systems. In this paper, we develop a sparse formulation and compressed sensing-based solutions for the wideband mmWave channel estimation problem for hybrid architectures.
First, we leverage the sparse structure of the frequency-selective mmWave channels and formulate the channel estimation problem as a sparse recovery in both time and frequency domains. Then, we propose explicit channel estimation techniques for purely time or frequency domains and for combined time/frequency domains. Our solutions are suitable for both single carrier-frequency domain equalization and orthogonal frequency-division multiplexing systems. Simulation results show that the proposed solutions achieve good channel estimation quality, while requiring small training overhead. Leveraging the hybrid architecture at the transceivers gives further improvement in estimation error performance and achievable rates.) <|cite_end|> will suffer from severe performance degradation in wideband THz massive MIMO systems. Although several channel estimation schemes for THz massive MIMO have recently been proposed, such as the low-rank matrix reconstruction based scheme <|cite_start|> (Reference: Estimation of wideband dynamic mmWave and THz channels for 5G systems and beyond: Millimeter wave (mmWave) wideband channels in a multiple-input multiple-output (MIMO) transmission are described by a sparse set of impulse responses in the angle-delay, or space-time (ST), domain. These characteristics will be even more prominent in the THz band used in future systems. We consider two approaches for channel estimation: compressed-sensing (CS), exploiting the sparsity in the angular/delay domain, and low-rank (LR), exploiting the algebraic structure of channel matrix. Both approaches share several commonalities, and this paper provides for the first time i) a comparison of the two approaches, and ii) new versions of CS and LR methods that significantly improve performance in terms of mean squared error (MSE), computational complexity, and latency. We derive the asymptotic MSE bound for any estimator of the ST-MIMO multipath channels with invariant angles/delays and time-varying fading, with unknown angle/delay diversity order: the bound also accounts for the degradation introduced by sub-optimal separable channel models. We will show that in the considered scenarios both CS and LR approaches attain the bound. Our performance assessment over ideal and $3^{rd}$ generation partnership project (3GPP) channel models, suitable for the fifth-generation (5G) and beyond of cellular networks, shows the trade-off obtained by the methods over various metrics: i) CS methods are converging faster than the LR methods, both attaining the asymptotic MSE bound; ii) the CS methods depend on the array manifold, while LR methods are independent of the array calibration; iii) CS solutions are more complex than LR solutions.) <|cite_end|> and the joint activity detection and channel estimation scheme <|cite_start|> (Reference: Joint Activity Detection and Channel Estimation for mmW/THz Wideband Massive Access: Millimeter-wave/Terahertz (mmW/THz) communications have shown great potential for wideband massive access in next-generation cellular internet of things (IoT) networks. To decrease the length of pilot sequences and the computational complexity in wideband massive access, this paper proposes a novel joint activity detection and channel estimation (JADCE) algorithm.
Specifically, after formulating JADCE as a problem of recovering a simultaneously sparse-group and low rank matrix according to the characteristics of mmW/THz channel, we prove that jointly imposing $l_1$ norm and low rank on such a matrix can achieve a robust recovery under sufficient conditions, and verify that the number of measurements derived for the mmW/THz wideband massive access system is significantly smaller than currently known measurements bound derived for the conventional simultaneously sparse and low-rank recovery. Furthermore, we propose a multi-rank aware method by exploiting the quotient geometry of product of complex rank-$L$ matrices with the number of scattering clusters $L$. Theoretical analysis and simulation results confirm the superiority of the proposed algorithm in terms of computational complexity, detection error rate, and channel estimation accuracy.) <|cite_end|>, they have not considered the frequency-dependent sparse channel support either. Hence, to the best of our knowledge, the wideband channel estimation in THz massive MIMO systems has not been well addressed in the literature.
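To make the beam split effect described above concrete, consider as an illustration (with notation that may differ slightly from the formal definitions given later in Section \ref{Sys}) an $N$-antenna uniform linear array with half-wavelength spacing $d=\lambda_c/2$ at the central frequency $f_c$. A path with physical direction $\theta$ is then seen at subcarrier frequency $f_m$ under the spatial direction
\begin{equation}
\psi_m=\frac{2 d f_m}{c}\sin\theta=\frac{f_m}{f_c}\,\psi_c,\qquad \psi_c\triangleq\sin\theta,
\end{equation}
so the index of the dominant angle-domain (DFT) sample is approximately $n_m\approx\mathrm{round}\!\left(N\psi_m/2\right)\ (\mathrm{mod}\ N)$ and shifts linearly with $f_m$. As a rough numerical example, for $N=256$, $f_c=300$ GHz, a $30$ GHz bandwidth, and $\sin\theta$ close to $1$, the dominant index at the band edges moves by about $N\cdot 0.05\cdot\sin\theta/2\approx 6$ samples away from its position at $f_c$, so the sparse supports at different subcarriers can no longer be assumed to coincide.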
\subsection{Our contributions}
In this paper, we propose an accurate beam split pattern detection based wideband channel estimation scheme for THz massive MIMO systems. The specific contributions of this paper can be summarized as follows.
\begin{itemize}
\item We first analyze the angle-domain sparse structure of the wideband THz channel by taking the beam split effect into account. We prove that a series of index sets are in one-to-one correspondence with different physical channel directions. These index sets are defined as beam split patterns, each of which corresponds to a specific physical channel direction. By exploiting this one-to-one correspondence between the physical channel direction and the beam split pattern, the physical channel direction can be accurately estimated.
\item Based on the proof above, we propose a beam split pattern detection based wideband channel estimation scheme. For each channel path component, the physical channel direction is first estimated by exploiting the beam split pattern. Then, the sparse channel supports at different subcarriers are determined by using a support detection window, which is generated by expanding the beam split pattern associated with the estimated physical channel direction (a rough sketch of this support-window construction is given after this list). This procedure is repeated path by path until all path components have been considered. Finally, the wideband channel is recovered by estimating only the elements on the total sparse channel support, i.e., the union of the sparse channel supports of all path components. Thanks to the one-to-one correspondence between the physical channel direction and the beam split pattern, the proposed scheme can precisely estimate the physical channel directions and the corresponding sparse channel supports.
\item We analyze the physical channel direction estimation accuracy of the proposed scheme and show that the physical channel direction can be estimated precisely with a probability approaching $1$. Extensive simulation results verify this analysis and illustrate that the proposed beam split pattern detection based wideband channel estimation scheme achieves more accurate channel estimation than existing schemes.
\end{itemize}
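As a rough sketch of the support prediction step in the second item above (under the same illustrative half-wavelength ULA and DFT-grid assumptions as before, and with an arbitrarily chosen window radius; the actual window construction is defined formally in Section \ref{BSPD}), the per-subcarrier supports for one path could be generated as follows once its spatial direction has been estimated.
\begin{verbatim}
import numpy as np

def support_window(psi_hat, freqs, f_c, N, radius=1):
    """Predict per-subcarrier angle-domain supports for one path.

    psi_hat : estimated spatial direction sin(theta) at the carrier f_c
    freqs   : subcarrier frequencies
    radius  : half-width of the support detection window (illustrative choice)
    """
    supports = []
    for f_m in freqs:
        # Beam split: the dominant grid index scales with f_m / f_c.
        center = int(np.round(N * (f_m / f_c) * psi_hat / 2.0)) % N
        supports.append([(center + off) % N for off in range(-radius, radius + 1)])
    return supports

# Toy usage (illustrative numbers only).
N, f_c = 256, 300e9
freqs = f_c + np.linspace(-15e9, 15e9, 8)   # 8 subcarriers across a 30 GHz band
print(support_window(psi_hat=0.6, freqs=freqs, f_c=f_c, N=N, radius=1))
\end{verbatim}
Repeating this for every detected path and taking the union of the windows yields the total sparse channel support mentioned above, on which the nonzero channel entries can then be obtained by a least-squares fit.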
\subsection{Organization and notation}
The remainder of this paper is organized as follows. In Section \ref{Sys}, the system model of a multi-user wideband THz massive MIMO system is introduced, and the corresponding channel estimation problem is formulated. In Section \ref{BSPD}, we first define the beam split pattern and prove the one-to-one correspondence between the physical channel direction and the beam split pattern. Then, a beam split pattern detection based wideband channel estimation scheme is proposed, together with the corresponding performance and complexity analysis. Section \ref{Sim} illustrates the simulation results (simulation code to reproduce the results presented in this paper is provided at http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html). Finally, conclusions are drawn in Section \ref{Con}.
\emph{Notation:} $(\cdot)^{T}$, $(\cdot)^{H}$, $(\cdot)^{\dagger}$, $\|\cdot\|_\mathrm{F}$, and $\|\cdot\|_{k}$ denote the transpose, conjugate transpose, pseudo-inverse, Frobenius norm, and $k$-norm of a matrix, respectively; $|\cdot|$ denotes
the absolute operator; $\mathbf{H}(i,j)$ denotes the element of the matrix $\mathbf{H}$ at the $i$-th row and the $j$-th column; If set $\Xi=\cup_{i}\{(a_{i},b_{i})\}$, $\mathbf{H}(\Xi)$ denotes the vector composed of elements $\mathbf{H}(a_{i},b_{i})$; $\mathbf{I}_{N}$ represents the identity matrix of size $N\times N$. <|paper_end|> | [
"<|reference_start|> Codebook Design for Millimeter-Wave Channel Estimation with Hybrid Precoding Structure: In this paper, we study hierarchical codebook design for channel estimation in millimeter-wave (mmWave) communications with a hybrid precoding structure. Due to the limited saturation power of mmWave power amplifier (PA), we take the per-antenna power constraint (PAPC) into consideration. We first propose a metric, i.e., generalized detection probability (GDP), to evaluate the quality of \\emph{an arbitrary codeword}. This metric not only enables an optimization approach for mmWave codebook design, but also can be used to compare the performance of two different codewords/codebooks. To the best of our knowledge, GDP is the first metric particularly for mmWave codebook design for channel estimation. We then propose an approach to design a hierarchical codebook exploiting BeaM Widening with Multi-RF-chain Sub-array technique (BMW-MS). To obtain crucial parameters of BMW-MS, we provide two solutions, namely a low-complexity search (LCS) solution to optimize the GDP metric and a closed-form (CF) solution to pursue a flat beam pattern. Performance comparisons show that BMW-MS/LCS and BMW-MS/CF achieve very close performances, and they outperform the existing alternatives under the PAPC. <|reference_end|>",
"<|reference_start|> Compressive channel estimation and tracking for large arrays in mm-wave picocells: We propose and investigate a compressive architecture for estimation and tracking of sparse spatial channels in millimeter (mm) wave picocellular networks. The base stations are equipped with antenna arrays with a large number of elements (which can fit within compact form factors because of the small carrier wavelength) and employ radio frequency (RF) beamforming, so that standard least squares adaptation techniques (which require access to individual antenna elements) are not applicable. We focus on the downlink, and show that “compressive beacons,” transmitted using pseudorandom phase settings at the base station array, and compressively processed using pseudorandom phase settings at the mobile array, provide information sufficient for accurate estimation of the two-dimensional (2D) spatial frequencies associated with the directions of departure of the dominant rays from the base station, and the associated complex gains. This compressive approach is compatible with coarse phase-only control, and is based on a near-optimal sequential algorithm for frequency estimation which approaches the Cramér Rao Lower Bound. The algorithm exploits the geometric continuity of the channel across successive beaconing intervals to reduce the overhead to less than 1% even for very large (32 × 32) arrays. Compressive beaconing is essentially omnidirectional, and hence does not enjoy the SNR and spatial reuse benefits of beamforming obtained during data transmission. We therefore discuss system level design considerations for ensuring that the beacon SNR is sufficient for accurate channel estimation, and that inter-cell beacon interference is controlled by an appropriate reuse scheme. <|reference_end|>",
"<|reference_start|> {Delay-Phase Precoding for THz Massive MIMO with Beam Split: Benefiting from tens of GHz bandwidth, Terahertz (THz) communications has been considered as one of the promising technologies for the future 6G wireless communications. To compensate the serious attenuation in THz band and avoid huge power consumption, massive multiple input multiple output (MIMO) with hybrid precoding is widely considered. However, the traditional phase-shifter (PS) based hybrid precoding architecture cannot cope with the effect of beam split in THz communications, which means that the path components of THz channel split into different spatial directions at different subcarrier frequencies, leading serious array gain loss. In this paper, we first point out the seriousness of beam split effect in THz massive MIMO by analyzing the array gain loss caused by the beam split effect. To compensate this array gain loss, we propose a new hybrid precoding architecture called delay-phase precoding (DPP). In the proposed DPP, a time delay (TD) network is introduced between radio- frequency chains and the traditional PS network, which converts phase-controlled analog precoding into delay-phase controlled analog precoding. When carrying out precoding, the time delays in the TD network are dedicatedly designed to generate frequency-dependent beams which are aligned with the spatial directions over the whole bandwidth. Thanks to the joint control of delay and phase, the proposed DPP can significantly alleviate the beam split effect. Simulation results reveal that the proposed DPP can generate beams with the near- optimal array gain over the whole bandwidth, and achieve the near-optimal achievable rate performance. <|reference_end|>",
"<|reference_start|> Estimation of wideband dynamic mmWave and THz channels for 5G systems and beyond: Millimeter wave (mmWave) wideband channels in a multiple-input multiple-output (MIMO) transmission are described by a sparse set of impulse responses in the angle-delay, or space-time (ST), domain. These characteristics will be even more prominent in the THz band used in future systems. We consider two approaches for channel estimation: compressed-sensing (CS), exploiting the sparsity in the angular/delay domain, and low-rank (LR), exploiting the algebraic structure of channel matrix. Both approaches share several commonalities, and this paper provides for the first time i) a comparison of the two approaches, and ii) new versions of CS and LR methods that significantly improve performance in terms of mean squared error (MSE), computational complexity, and latency. We derive the asymptotic MSE bound for any estimator of the ST-MIMO multipath channels with invariant angles/delays and time-varying fading, with unknown angle/delay diversity order: the bound also accounts for the degradation introduced by sub-optimal separable channel models. We will show that in the considered scenarios both CS and LR approaches attain the bound. Our performance assessment over ideal and $3^{rd}$ generation partnership project (3GPP) channel models, suitable for the fifth-generation (5G) and beyond of cellular networks, shows the trade-off obtained by the methods over various metrics: i) CS methods are converging faster than the LR methods, both attaining the asymptotic MSE bound; ii) the CS methods depend on the array manifold, while LR methods are independent of the array calibration; iii) CS solutions are more complex than LR solutions. <|reference_end|>"
] | [
11,
31,
38,
42
] | {"<|multi_cite_1_1|>": "ss-721304", "<|multi_cite_1_2|>": "ss-776926", "<|multi_cite_1_3|>": "ss-1270288", "<|multi_cite_1_4|>": "ss-1325950", "<|multi_cite_1_5|>": "ss-1085806", "<|cite_2|>": "ss-721304", "<|cite_3|>": "ss-1325950", "<|cite_4|>": "arxiv-88803", "<|multi_cite_5_1|>": "arxiv-45656", "<|multi_cite_5_2|>": "arxiv-110021", "<|multi_cite_5_3|>": "ss-709181", "<|cite_6|>": "arxiv-93645", "<|multi_cite_7_1|>": "arxiv-45656", "<|multi_cite_7_2|>": "arxiv-110021", "<|multi_cite_7_3|>": "ss-709181", "<|cite_8|>": "arxiv-47438", "<|cite_9|>": "ss-1035917", "<|multi_cite_10_1|>": "arxiv-60731", "<|multi_cite_10_2|>": "ss-1412898", "<|multi_cite_10_3|>": "ss-1290673", "<|multi_cite_10_4|>": "arxiv-240618", "<|multi_cite_10_5|>": "ss-709182", "<|multi_cite_10_6|>": "arxiv-95707", "<|multi_cite_10_7|>": "ss-1225949", "<|multi_cite_10_8|>": "arxiv-194094", "<|cite_11|>": "arxiv-60731", "<|cite_12|>": "ss-1412898", "<|cite_13|>": "ss-1290673", "<|cite_14|>": "arxiv-240618", "<|multi_cite_15_1|>": "arxiv-60731", "<|multi_cite_15_2|>": "ss-1412898", "<|multi_cite_15_3|>": "ss-1290673", "<|multi_cite_15_4|>": "arxiv-240618", "<|multi_cite_16_1|>": "arxiv-95707", "<|multi_cite_16_2|>": "ss-1225949", "<|cite_17|>": "arxiv-95707", "<|cite_18|>": "ss-1225949", "<|cite_19|>": "arxiv-194094", "<|cite_20|>": "ss-1280107", "<|cite_21|>": "ss-2346768", "<|multi_cite_22_1|>": "arxiv-95707", "<|multi_cite_22_2|>": "ss-1225949", "<|cite_23|>": "ss-709183", "<|cite_24|>": "arxiv-245193"} |
2208.03270 | <|paper_start|> Title: Learning New Skills after Deployment: Improving open-domain internet-driven dialogue with human feedback
Abstract: Learning New Skills after Deployment: Improving open-domain internet-driven dialogue with human feedback: Frozen models trained to mimic static datasets can never improve their performance. Models that can employ internet-retrieval for up-to-date information and obtain feedback from humans during deployment provide the promise of both adapting to new information, and improving their performance. In this work we study how to improve internet-driven conversational skills in such a learning framework. We collect deployment data, which we make publicly available, of human interactions, and collect various types of human feedback -- including binary quality measurements, free-form text feedback, and fine-grained reasons for failure. We then study various algorithms for improving from such feedback, including standard supervised learning, rejection sampling, model-guiding and reward-based learning, in order to make recommendations on which type of feedback and algorithms work best. We find the recently introduced Director model (Arora et al., '22) shows significant improvements over other existing approaches.
Introduction
Large language models employed as dialogue agents are primarily trained on human-written documents and human-human conversations collected from the web for pre-training <|cite_start|> (Reference: Unsupervised Cross-lingual Representation Learning at Scale: This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.) <|cite_end|> <|cite_start|> (Reference: The Pushshift Reddit Dataset: Social media data has become crucial to the advancement of scientific understanding. However, even though it has become ubiquitous, just collecting large-scale social media data involves a high degree of engineering skill set and computational resources. In fact, research is often times gated by data engineering problems that must be overcome before analysis can proceed. This has resulted recognition of datasets as meaningful research contributions in and of themselves. Reddit, the so called "front page of the Internet," in particular has been the subject of numerous scientific studies. Although Reddit is relatively open to data acquisition compared to social media platforms like Facebook and Twitter, the technical barriers to acquisition still remain. Thus, Reddit's millions of subreddits, hundreds of millions of users, and hundreds of billions of comments are at the same time relatively accessible, but time consuming to collect and analyze systematically. In this paper, we present the Pushshift Reddit dataset. Pushshift is a social media data collection, analysis, and archiving platform that since 2015 has collected Reddit data and made it available to researchers. Pushshift's Reddit dataset is updated in real-time, and includes historical data back to Reddit's inception. In addition to monthly dumps, Pushshift provides computational tools to aid in searching, aggregating, and performing exploratory analysis on the entirety of the dataset. The Pushshift Reddit dataset makes it possible for social media researchers to reduce time spent in the data collection, cleaning, and storage phases of their projects.) <|cite_end|>,
and human-human crowdsourced conversations <|cite_start|> (Reference: Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills: Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent. Previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them. But rather than being specialized in one single quality, a good open-domain conversational agent should be able to seamlessly blend them all into one cohesive conversational flow. In this work, we investigate several ways to combine models trained towards isolated capabilities, ranging from simple model aggregation schemes that require minimal additional training, to various forms of multi-task training that encompass several skills at all training stages. We further propose a new dataset, BlendedSkillTalk, to analyze how these capabilities would mesh together in a natural conversation, and compare the performance of different architectures and training schemes. Our experiments show that multi-tasking over several tasks that focus on particular capabilities results in better blended conversation performance compared to models trained on a single skill, and that both unified or two-stage approaches perform well if they are constructed to avoid unwanted bias in skill selection or are fine-tuned on our new task.) <|cite_end|> for fine-tuning. The models are then used at inference time to conduct conversations with humans, with no further learning taking place <|cite_start|> (Reference: Towards a Human-like Open-Domain Chatbot: We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.) <|cite_end|> <|cite_start|> (Reference: Recipes for building an open-domain chatbot: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. 
Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.) <|cite_end|>.
Human-model conversations -- which are never seen at training time --
can have a quite different distribution
from the original human-human training data used, and our current techniques can lose
performance due to lack of robustness to such deviations <|cite_start|> (Reference: On the Measure of Intelligence: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.) <|cite_end|>.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{images/diagram.pdf}
\caption{{\bf Using human feedback to improve open-domain internet-driven dialogue agents.} We compare various types of feedback (and corresponding learning algorithms) in this work, such as binary feedback (good/bad), free-form text or supervised responses (better suggestions) for different modules of the system.
\label{fig:diagram}
}
\end{figure}
In this work, we study learning from the feedback collected during deployment of models in human-model conversations.
Such a setting offers the opportunity to learn from within-distribution data, both in terms of the input contexts and the responses required (targets). Not only can this mean improvement in skills that are similar to those in the pre-training and fine-tuning data, but potentially also the learning of completely new skills that are desired by users of the system.
We thus take existing state of the art internet-augmented models such as BlenderBot 2 <|cite_start|> (Reference: Internet-Augmented Dialogue Generation: The largest store of continually updating knowledge on our planet can be accessed via internet search. In this work we study giving access to this information to conversational agents. Large language models, even though they store an impressive amount of knowledge within their weights, are known to hallucinate facts when generating dialogue (Shuster et al., 2021); moreover, those facts are frozen in time at the point of model training. In contrast, we propose an approach that learns to generate an internet search query based on the context, and then conditions on the search results to finally generate a response, a method that can employ up-to-the-minute relevant information. We train and evaluate such models on a newly collected dataset of human-human conversations whereby one of the speakers is given access to internet search during knowledgedriven discussions in order to ground their responses. We find that search-query based access of the internet in conversation provides superior performance compared to existing approaches that either use no augmentation or FAISS-based retrieval (Lewis et al., 2020).) <|cite_end|> <|cite_start|> (Reference: Beyond Goldfish Memory: Long-Term Open-Domain Conversation: Despite recent improvements in open-domain dialogue models, state of the art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions whereby the speaking partners learn about each other's interests and discuss the things they have learnt from past sessions. We show how existing models trained on existing datasets perform poorly in this long-term conversation setting in both automatic and human evaluations, and we study long-context models that can perform much better. In particular, we find retrieval-augmented methods and methods with an ability to summarize and recall previous conversations outperform the standard encoder-decoder architectures currently considered state of the art.) <|cite_end|> and SeeKeR <|cite_start|> (Reference: Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion: Language models (LMs) have recently been shown to generate more factual responses by employing modularity (Zhou et al., 2021) in combination with retrieval (Adolphs et al., 2021). We extend the recent approach of Adolphs et al. (2021) to include internet search as a module. Our SeeKeR (Search engine->Knowledge->Response) method thus applies a single LM to three modular tasks in succession: search, generating knowledge, and generating a final response. We show that, when using SeeKeR as a dialogue model, it outperforms the state-of-the-art model BlenderBot 2 (Chen et al., 2021) on open-domain knowledge-grounded conversations for the same number of parameters, in terms of consistency, knowledge and per-turn engagingness. SeeKeR applied to topical prompt completions as a standard language model outperforms GPT2 (Radford et al., 2019) and GPT3 (Brown et al., 2020) in terms of factuality and topicality, despite GPT3 being a vastly larger model. Our code and models are made publicly available.) <|cite_end|>, deploy them to human crowdworkers, and experiment with various methods to learn from such interactions.
We first ask crowdworkers what topic and task they would like to talk about, in order to collect in-domain data, and then collect conversations involving these
skills. During the conversations we collect various kinds of human feedback, including binary feedback (good/bad), free-form conversational feedback, and the type of failure (search query-based, results-based, or final response-based), as well as suggestions for improvements (see \autoref{fig:diagram}).
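To make the kinds of annotations concrete, the sketch below (in Python) shows one possible way a single collected feedback record could be represented. The field names are purely illustrative assumptions for exposition and are not the schema of the released dataset.
\begin{verbatim}
# Illustrative sketch only: a possible layout for one turn of collected
# feedback. Field names are hypothetical and not the released schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FeedbackTurn:
    context: List[str]                     # dialogue history so far
    bot_response: str                      # the response being rated
    binary_label: bool                     # good (True) / bad (False)
    free_form_feedback: str = ""           # free-form textual comment
    failure_module: Optional[str] = None   # "search_query",
                                           # "search_results", or
                                           # "final_response"
    improved_response: Optional[str] = None  # suggested better response
\end{verbatim}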
We then explore a variety of methods for learning from feedback, and compare them in detailed experiments. In particular, we compare supervised learning methods, rejection sampling, model guiding and reward-based learning. Our findings are:
\begin{itemize}
\item Taking advantage of modular feedback (feedback about particular errors from modules of the model, such as the search engine component) outperforms feedback about just the final response.
\item Textual and binary feedback are also very useful signals, but not as much as modular feedback.
\item The recently introduced {\sc Director} method <|cite_start|> (Reference: DIRECTOR: Generator-Classifiers For Supervised Language Modeling: Current language models achieve low perplexity but their resulting generations still suffer from toxic responses, repetitiveness and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, {\sc Director}, that consists of a unified generator-classifier with both a language modeling and a classification head for each output token. Training is conducted jointly using both standard language modeling data, and data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, alleviating known issues while maintaining generation quality. It also outperforms existing model guiding approaches in terms of both accuracy and efficiency.) <|cite_end|>, when learning from binary feedback, works better than reranking or reward-based learning.
\item Combining multiple types of feedback, such as modular and binary feedback with {\sc Director} provides the best results we obtained.
\item Continual learning, whereby we retrain models on the feedback from previous rounds of deployment, improves results even further.
\item Even though the feedback is collected from smaller (3B parameter) models, it is useful for improving much larger (175B parameter) models.
\end{itemize}
We publicly release the collected data and feedback, the models, and the code for this work\footnote{\url{https://parl.ai/projects/fits}}.
Related Work
There are a number of existing methods for collecting human feedback from human-model conversations. Deployed models can be improved in symmetric conversations conducted between models and humans during deployment by learning to mimic human conversationalists, as shown in the LIGHT dialogue game <|cite_start|> (Reference: Deploying Lifelong Open-Domain Dialogue Learning: Much of NLP research has focused on crowdsourced static datasets and the supervised learning paradigm of training once and then evaluating test performance. As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013). In contrast, one might hope for machine learning systems that become more useful as they interact with people. In this work, we build and deploy a role-playing game, whereby human players converse with learning agents situated in an open-domain fantasy world. We show that by training models on the conversations they have with humans in the game the models progressively improve, as measured by automatic metrics and online engagement scores. This learning is shown to be more efficient than crowdsourced data when applied to conversations with real users, as well as being far cheaper to collect.) <|cite_end|>. This is not directly applicable if the conversations are asymmetric, for example in the case of one speaker (human) who asks the questions, and the other (bot) who always answers, as there would be no human supervision of the answers. In the non-symmetric case, one can however try to make use of the textual response from humans when conversing with the bot, but alternative learning methods must then be used. <|cite_start|> (Reference: Learning through Dialogue Interactions by Asking Questions: A good dialogue agent should have the ability to interact with users by both responding to questions and by asking questions, and importantly to learn from both types of interaction. In this work, we explore this direction by designing a simulator and a set of synthetic tasks in the movie domain that allow such interactions between a learner and a teacher. We investigate how a learner can benefit from asking questions in both offline and online reinforcement learning settings, and demonstrate that the learner improves when asking questions. Finally, real experiments with Mechanical Turk validate the approach. Our work represents a first step in developing such end-to-end learned interactive dialogue agents.) <|cite_end|> studies models that learn how to ask questions in order to learn from the answers, while <|cite_start|> (Reference: Dialogue Learning With Human-In-The-Loop: An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.) 
<|cite_end|> learns from general textual feedback/comments, particularly in the case where the bot has produced a low quality response. Another approach is to learn a reward signal (positive or negative reaction) based on user textual responses, as shown in the ``self-feeding chatbot'' <|cite_start|> (Reference: Learning from Dialogue after Deployment: Feed Yourself, Chatbot!: The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user's responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot's dialogue abilities further. On the PersonaChat chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.) <|cite_end|>. Finally, rather than using conversational feedback, one can use sophisticated web-based UIs to collect data, for example stack ranking potential responses <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|> <|cite_start|> (Reference: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. 
We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.) <|cite_end|>.
Outside of the dialogue domain, there are numerous studies attempting to improve language skills from deployment, including never-ending-learning from language data <|cite_start|> (Reference: {Toward an Architecture for Never-Ending Language Learning: We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.) <|cite_end|>, learning for the web search task directly <|cite_start|> (Reference: Improving Web Search Ranking by Incorporating User
Behavior Information: We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31% relative to the original performance.) <|cite_end|> or the Dynabench system which covers a number of NLP tasks <|cite_start|> (Reference: Dynabench: Rethinking Benchmarking in NLP: We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking. Dynabench runs in a web browser and supports human-and-model-in-the-loop dataset creation: annotators seek to create examples that a target model will misclassify, but that another person will not. In this paper, we argue that Dynabench addresses a critical need in our community: contemporary models quickly achieve outstanding performance on benchmark tasks but nonetheless fail on simple challenge examples and falter in real-world scenarios. With Dynabench, dataset creation, model development, and model assessment can directly inform each other, leading to more robust and informative benchmarks. We report on four initial NLP tasks, illustrating these concepts and highlighting the promise of the platform, and address potential objections to dynamic benchmarking as a new standard for the field.) <|cite_end|>. <|cite_start|> (Reference: WebGPT: Browser-assisted question-answering with human feedback: We fine-tune GPT-3 to answer long-form questions using a text-based web-browsing environment, which allows the model to search and navigate the web. By setting up the task so that it can be performed by humans, we are able to train models on the task using imitation learning, and then optimize answer quality with human feedback. To make human evaluation of factual accuracy easier, models must collect references while browsing in support of their answers. We train and evaluate our models on ELI5, a dataset of questions asked by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior cloning, and then performing rejection sampling against a reward model trained to predict human preferences. This model's answers are preferred by humans 56% of the time to those of our human demonstrators, and 69% of the time to the highest-voted answer from Reddit.) <|cite_end|> also learns to use internet-augmentation for generation, like this work, but for question answering, not multi-turn dialogue.
\if 0
* continual learning
Human-model interaction data for machine learning, and for dialogue research in particular, is commonly collected via expert annotators or crowdworkers. While careful instructions can result in good quality feedback or labels to learn from, collection both involves significant monetary costs -- where annotators should be paid well above minimum wage -- and the pool of workers may be limited. An alternative approach is to deploy a system publicly, and collect feedback from organic users. The promise of this approach is that the distribution of data will more closely match those organic users' desires, rather than decided by the researchers themselves when creating datasets <|cite_start|> (Reference: Deploying Lifelong Open-Domain Dialogue Learning: Much of NLP research has focused on crowdsourced static datasets and the supervised learning paradigm of training once and then evaluating test performance. As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013). In contrast, one might hope for machine learning systems that become more useful as they interact with people. In this work, we build and deploy a role-playing game, whereby human players converse with learning agents situated in an open-domain fantasy world. We show that by training models on the conversations they have with humans in the game the models progressively improve, as measured by automatic metrics and online engagement scores. This learning is shown to be more efficient than crowdsourced data when applied to conversations with real users, as well as being far cheaper to collect.) <|cite_end|>. Further, a continual deployment of such a system can then potentially keep improving over time <|cite_start|> (Reference: {Toward an Architecture for Never-Ending Language Learning: We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.) <|cite_end|> <|cite_start|> (Reference: Dynabench: Rethinking Benchmarking in NLP: We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking. Dynabench runs in a web browser and supports human-and-model-in-the-loop dataset creation: annotators seek to create examples that a target model will misclassify, but that another person will not. In this paper, we argue that Dynabench addresses a critical need in our community: contemporary models quickly achieve outstanding performance on benchmark tasks but nonetheless fail on simple challenge examples and falter in real-world scenarios. With Dynabench, dataset creation, model development, and model assessment can directly inform each other, leading to more robust and informative benchmarks. 
We report on four initial NLP tasks, illustrating these concepts and highlighting the promise of the platform, and address potential objections to dynamic benchmarking as a new standard for the field.) <|cite_end|> <|cite_start|> (Reference: Improving Web Search Ranking by Incorporating User
Behavior Information: We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31% relative to the original performance.) <|cite_end|> <|cite_start|> (Reference: Deploying Lifelong Open-Domain Dialogue Learning: Much of NLP research has focused on crowdsourced static datasets and the supervised learning paradigm of training once and then evaluating test performance. As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013). In contrast, one might hope for machine learning systems that become more useful as they interact with people. In this work, we build and deploy a role-playing game, whereby human players converse with learning agents situated in an open-domain fantasy world. We show that by training models on the conversations they have with humans in the game the models progressively improve, as measured by automatic metrics and online engagement scores. This learning is shown to be more efficient than crowdsourced data when applied to conversations with real users, as well as being far cheaper to collect.) <|cite_end|>, where <|cite_start|> (Reference: Learning from Dialogue after Deployment: Feed Yourself, Chatbot!: The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user's responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot's dialogue abilities further. On the PersonaChat chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.) <|cite_end|> call this a ``self-feeding chatbot''.
\fi
\begin{table*}
\small
\center
\begin{tabular}{p{0.2\linewidth}|p{0.35\linewidth}|p{0.35\linewidth}}
Topic & Specific Task & Task Completion Description \\
\hline
Making healthy food & Find recipes on healthy foods & If the chatbot provided specific recipes on making healthy foods \\
\hline
I would like to learn about a type of pet & I would like to learn about some hypoallergenic breeds of dogs, specifically, small dogs. & If the chatbot could tell me some small dog breeds that are hypoallergenic, along with details about the breed's temperament, personality and any special requirements.\\
\hline
gravel driveway & choosing the correct gravel for your driveway & It would ask a variety of questions. It would ask the length of your driveway, the area you live in, and your price range. Then it would show you pictures of different types of gravel that fit the criteria. Lastly, it would provide broad steps on gravelling the driveway and estimated price ranges for project completion.\\
\hline
getting started with cycling & what do I need to do to get started with road cycling & The chatbot would tell me what kind of bicycle would be best for road cycling and the necessary accessories that a beginner needs.\\
\hline
Find child friendly places in a city & Find child friendly resorts in Nassau Bahamas & Pull up resorts in Nassau Bahamas, only show the resorts that are child friendly, give the star rating for each resort, show the child programs in the resort.\\
\end{tabular}
\caption{
A sample of the collected topics and task definitions. See
\autoref{tab:dataset} for statistics on the overall dataset.
\label{tab:task_examples}
}
\end{table*} <|paper_end|> | [
"<|reference_start|> Recipes for building an open-domain chatbot: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models. <|reference_end|>",
"<|reference_start|> DIRECTOR: Generator-Classifiers For Supervised Language Modeling: Current language models achieve low perplexity but their resulting generations still suffer from toxic responses, repetitiveness and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, {\\sc Director}, that consists of a unified generator-classifier with both a language modeling and a classification head for each output token. Training is conducted jointly using both standard language modeling data, and data labeled with desirable and undesirable sequences. Experiments in several settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, alleviating known issues while maintaining generation quality. It also outperforms existing model guiding approaches in terms of both accuracy and efficiency. <|reference_end|>",
"<|reference_start|> Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent. <|reference_end|>",
"<|reference_start|> {Toward an Architecture for Never-Ending Language Learning: We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent. <|reference_end|>"
] | [
4,
9,
14,
21
] | {"<|multi_cite_1_1|>": "arxiv-232680", "<|multi_cite_1_2|>": "arxiv-244568", "<|cite_2|>": "arxiv-260014", "<|multi_cite_3_1|>": "arxiv-245143", "<|multi_cite_3_2|>": "arxiv-262101", "<|multi_cite_4_1|>": "arxiv-232482", "<|multi_cite_5_1|>": "arxiv-355388", "<|multi_cite_5_2|>": "arxiv-355389", "<|cite_6|>": "arxiv-408157", "<|cite_7|>": "arxiv-427379", "<|cite_8|>": "arxiv-285263", "<|cite_19|>": "arxiv-112624", "<|cite_20|>": "arxiv-111353", "<|cite_9|>": "arxiv-187763", "<|multi_cite_10_1|>": "arxiv-403294", "<|multi_cite_10_2|>": "arxiv-412682", "<|cite_11|>": "ss-1110643", "<|cite_12|>": "ss-944482", "<|cite_13|>": "arxiv-337706", "<|cite_21|>": "arxiv-388217", "<|multi_cite_16_3|>": "arxiv-285263", "<|multi_cite_17_1|>": "ss-1110643", "<|multi_cite_17_2|>": "arxiv-337706", "<|multi_cite_17_3|>": "ss-944482", "<|multi_cite_17_6|>": "arxiv-285263", "<|cite_18|>": "arxiv-187763"} |
1607.00133 | <|paper_start|> Title: Deep Learning with Differential Privacy
Abstract: Deep Learning with Differential Privacy: Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
Introduction
Recent progress in neural networks has led to impressive successes in
a wide range of applications, including image classification, language
representation, move selection for Go, and many more
(e.g., <|cite_start|> (Reference: Going Deeper with Convolutions: We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.) <|cite_end|> <|cite_start|> (Reference: Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification.: Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.) <|cite_end|> <|cite_start|> (Reference: Grammar as a Foreign Language: Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation.) <|cite_end|> <|cite_start|> (Reference: Move Evaluation in Go Using Deep Convolutional Neural Networks: The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. 
The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.) <|cite_end|> <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|>). These advances are enabled, in part, by the
availability of large and representative datasets for training
neural networks. These datasets are often crowdsourced, and may
contain sensitive information. Their use requires techniques that
meet the demands of the applications while offering principled and rigorous privacy guarantees.
In this paper, we combine
state-of-the-art machine learning methods with advanced
privacy-preserving mechanisms, training neural networks within a
modest (``single-digit'') privacy budget.
We treat models with non-convex objectives, several layers, and tens of thousands to millions of parameters. (In contrast,
previous work obtains strong results on convex models with smaller numbers
of parameters, or treats complex neural networks but with a large
privacy loss.)
For this purpose, we
develop new algorithmic techniques, a refined analysis of privacy
costs within the framework of differential privacy, and careful
implementation strategies:
\begin{enumerate}
\item We demonstrate that, by tracking detailed information (higher
moments) of the privacy loss, we can obtain much tighter estimates on
the overall privacy loss, both asymptotically and empirically.
\item We improve the computational efficiency of differentially
private training by introducing new techniques. These techniques
include efficient algorithms for computing gradients for individual
training examples, subdividing tasks into smaller batches to reduce memory footprint, and
applying differentially private principal projection at the input
layer.
\item We build on the machine learning framework
TensorFlow for training models with differential privacy.
We evaluate our approach on two standard image classification
tasks, MNIST and CIFAR-10. We chose these two tasks because they are based on public data\-sets and have a long record of serving as benchmarks in machine learning.
Our experience indicates that privacy protection for deep neural networks can be
achieved at a modest cost in software complexity, training
efficiency, and model quality.
\end{enumerate}
Machine learning systems often comprise elements that contribute to
protecting their training data. In particular, regularization
techniques, which aim to avoid overfitting to the examples used for
training, may hide details of those examples. On the other hand,
explaining the internal representations in deep neural networks is
notoriously difficult, and their large capacity entails that these
representations may potentially encode fine details of at least some
of the training data. In some cases, a determined adversary may be
able to extract parts of the training data. For example, Fredrikson et
al.~demonstrated a model-inversion attack that recovers images
from a facial recognition system <|cite_start|> (Reference: {Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures: Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by Fredrikson et al., adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown. We develop a new class of model inversion attack that exploits confidence values revealed along with predictions. Our new attacks are applicable in a variety of settings, and we explore two in depth: decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition. In both cases confidence values are revealed to those with the ability to make prediction queries to models. We experimentally show attacks that are able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and, in the other context, show how to recover recognizable images of people's faces given only their name and access to the ML model. We also initiate experimental exploration of natural countermeasures, investigating a privacy-aware decision tree training algorithm that is a simple variant of CART learning, as well as revealing only rounded confidence values. The lesson that emerges is that one can avoid these kinds of MI attacks with negligible degradation to utility.) <|cite_end|>.
While the model-inversion attack requires only ``black-box'' access to
a trained model (that is, interaction with the model via inputs and
outputs), we consider adversaries with additional capabilities, much
like Shokri and Shmatikov <|cite_start|> (Reference: Privacy-Preserving deep learning: Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extrajudicial surveillance. Many data owners-for example, medical institutions that may want to apply deep learning methods to clinical records-are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we present a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.) <|cite_end|>.
Our approach offers protection against a strong adversary with full knowledge of the
training mechanism and access to the model's parameters.
This protection is attractive, in particular, for applications of
machine learning on mobile phones, tablets, and other devices. Storing
models on-device enables power-efficient, low-latency inference, and
may contribute to privacy since inference does not require
communicating user data to a central server; on the other hand, we
must assume that the model parameters themselves may be exposed to
hostile inspection.
Furthermore, when we are concerned with preserving the privacy
of one record in the training data, we allow for the possibility
that the adversary controls some or even all of the rest of the training data.
In practice, this possibility cannot always be excluded, for example
when the data is crowdsourced.
The next section reviews background on deep learning and on differential privacy.
Sections~\ref{sec:approach} and \ref{sec:impl} explain our approach and implementation. Section~\ref{sec:results} describes our experimental results.
Section~\ref{sec:related} discusses related work, and Section~\ref{sec:conclusions} concludes.
Deferred proofs appear in the \full{Appendix}{full version of the paper}.
Background
In this section we briefly recall the definition of differential privacy, introduce the Gaussian mechanism and composition theorems, and overview basic principles of deep learning.
\subsection{Differential Privacy}
Differential privacy <|cite_start|> (Reference: Calibrating noise to sensitivity in private data
analysis: We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.) <|cite_end|> <|cite_start|> (Reference: A firm foundation for private data analysis: What does it mean to preserve privacy?) <|cite_end|> <|cite_start|> (Reference: The algorithmic foundations of Differential Privacy: The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. 
Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.) <|cite_end|> constitutes a
strong standard for privacy guarantees for algorithms on aggregate
data\-bases. It is defined in terms of the
application-specific concept of adjacent databases. In our
experiments, for instance, each training dataset is a set of
image-label pairs; we say that two of these sets are adjacent if they
differ in a single entry, that is, if one image-label pair is
present in one set and absent in the other.
\begin{definition}A randomized mechanism $\M\colon \Domain\rightarrow\Range$ with domain $\Domain$ and range $\Range$
satisfies $(\eps,\delta)$-differential privacy if for any two adjacent inputs $\D,\D'\in \Domain$ and for any subset of outputs $S\subseteq\Range$ it holds that
\[
\Pr[\M(\D)\in S]\leq e^{\eps}\Pr[\M(\D')\in S]+\delta.
\]
\end{definition}
The original definition of $\eps$-differential privacy does not
include the additive term $\delta$. We use the variant
introduced by Dwork et al. <|cite_start|> (Reference: Our Data, Ourselves: Privacy Via Distributed Noise Generation: ) <|cite_end|>, which allows for the
possibility that plain $\eps$-differential privacy is broken with
probability~$\delta$ (which is preferably smaller than $1/|d|$).
Differential privacy has several properties that
make it particularly useful in applications such as ours:
composability, group privacy, and robustness to auxiliary
information. Composability enables modular design of mechanisms: if
all the components of a mechanism are differentially private, then so
is their composition. Group privacy implies graceful degradation of
privacy guarantees if datasets contain correlated inputs, such as the
ones contributed by the same individual. Robustness to auxiliary
information means that privacy guarantees are not affected by any side
information available to the adversary.
A common paradigm for approximating a deterministic real-valued
function $f\colon \Domain\rightarrow\mathbb{R}$ with a
differentially private mechanism is via additive noise calibrated to
$f$'s \emph{sensitivity} $S_f$, which is defined as the maximum of the
absolute distance
$|f(\D)-f(\D')|$ where $\D$ and $\D'$ are adjacent inputs.
(The restriction to a real-valued function is intended to simplify this
review, but is not essential.)
For instance, the Gaussian noise mechanism is defined by
\[
\M(\D)\eqdef f(\D)+\calN(0, S_f^2\cdot \sigma^2),
\]
where $\calN(0, S_f^2\cdot \sigma^2)$ is the normal (Gaussian) distribution with mean 0 and standard deviation $S_f \sigma$.
A single application of the Gaussian mechanism to function $f$ of
sensitivity $S_f$ satisfies $(\eps, \delta)$-differential privacy if
$\delta\geq \frac45 \exp(-(\sigma\eps)^2/2)$ and $\eps<1$~\cite[Theorem
3.22]{DworkRoth14}. Note that this analysis of the mechanism can be
applied \emph{post hoc}, and, in particular, that there are infinitely
many $(\eps,\delta)$ pairs that satisfy this condition.
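As a concrete illustration of the mechanism just described (and not of the training algorithm developed later in this paper), the following Python sketch adds Gaussian noise calibrated to a function's sensitivity and restates the sufficient $(\eps,\delta)$ condition quoted above; the function names are ours.
\begin{verbatim}
# Minimal sketch of the Gaussian mechanism: release f(D) + N(0, (S_f*sigma)^2).
import numpy as np

def gaussian_mechanism(true_value, sensitivity, sigma, rng=None):
    rng = rng or np.random.default_rng()
    # Noise standard deviation is calibrated to the sensitivity S_f.
    return true_value + rng.normal(loc=0.0, scale=sensitivity * sigma)

def satisfies_eps_delta(sigma, eps, delta):
    # Sufficient condition quoted above: eps < 1 and
    # delta >= (4/5) * exp(-(sigma * eps)^2 / 2).
    return eps < 1 and delta >= 0.8 * np.exp(-(sigma * eps) ** 2 / 2)

# Example: a counting query (sensitivity 1) released with sigma = 10.
print(gaussian_mechanism(true_value=120, sensitivity=1.0, sigma=10.0))
print(satisfies_eps_delta(sigma=10.0, eps=0.5, delta=1e-5))  # True
\end{verbatim}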
Differential privacy for repeated applications of additive-noise
mechanisms follows from the basic composition
theorem <|cite_start|> (Reference: Our Data, Ourselves: Privacy Via Distributed Noise Generation: ) <|cite_end|> <|cite_start|> (Reference: {Differential privacy and robust statistics: We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems.) <|cite_end|>, or from advanced composition theorems
and their
refinements <|cite_start|> (Reference: Boosting and differential privacy: Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved {\em privacy-preserving synopses} of an input database. These are data structures that yield, for a given set $\Q$ of queries over an input database, reasonably accurate estimates of the responses to every query in~$\Q$, even when the number of queries is much larger than the number of rows in the database. Given a {\em base synopsis generator} that takes a distribution on $\Q$ and produces a ``weak'' synopsis that yields ``good'' answers for a majority of the weight in $\Q$, our {\em Boosting for Queries} algorithm obtains a synopsis that is good for all of~$\Q$. We ensure privacy for the rows of the database, but the boosting is performed on the {\em queries}. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries, {\it i.e.}, queries whose answers do not vary much under the addition or deletion of a single row. In the execution of our algorithm certain tasks, each incurring some privacy loss, are performed many times. To analyze the cumulative privacy loss, we obtain an $O(\eps^2)$ bound on the {\em expected} privacy loss from a single $\eps$-\dfp{} mechanism. Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides $\eps$-differential privacy or one of its relaxations, and each of which operates on (potentially) different, adaptively chosen, databases.) <|cite_end|> <|cite_start|> (Reference: The Composition Theorem for Differential Privacy: Sequential querying of differentially private mechanisms degrades the overall privacy level. In this paper, we answer the fundamental question of characterizing the level of overall privacy degradation as a function of the number of queries and the privacy levels maintained by each privatization mechanism. Our solution is complete: we prove an upper bound on the overall privacy level and construct a sequence of privatization mechanisms that achieves this bound. The key innovation is the introduction of an operational interpretation of differential privacy (involving hypothesis testing) and the use of new data processing inequalities. Our result improves over the state-of-the-art, and has immediate applications in several problems studied in the literature including differentially private multi-party computation.) <|cite_end|> <|cite_start|> (Reference: Concentrated Differential Privacy: We introduce Concentrated Differential Privacy, a relaxation of Differential Privacy enjoying better accuracy than both pure differential privacy and its popular "(epsilon,delta)" relaxation without compromising on cumulative privacy loss over multiple computations.) <|cite_end|> <|cite_start|> (Reference: Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds: "Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. 
With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy.") <|cite_end|>. The task
of keeping track of the accumulated privacy loss in the course of
execution of a composite mechanism, and enforcing the applicable
privacy policy, can be performed by the \emph{privacy accountant},
introduced by McSherry <|cite_start|> (Reference: Privacy integrated queries: an extensible platform for privacy-preserving
data analysis: We report on the design and implementation of the Privacy Integrated Queries (PINQ) platform for privacy-preserving data analysis. PINQ provides analysts with a programming interface to unscrubbed data through a SQL-like language. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's unconditional structural guarantees require no trust placed in the expertise or diligence of the analysts, substantially broadening the scope for design and deployment of privacy-preserving data analysis, especially by non-experts.) <|cite_end|>.
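For intuition, a toy accountant that enforces a privacy budget under basic composition (where the $\epsilon$ and $\delta$ parameters of sequentially composed mechanisms simply add up) might be sketched as follows; the class and method names are illustrative only, and accountants based on the advanced composition theorems above track the accumulated loss much more tightly.
\begin{verbatim}
# Illustrative sketch only: a privacy accountant under basic composition,
# where the (eps, delta) parameters of sequentially composed mechanisms add.
class BasicCompositionAccountant:
    def __init__(self, eps_budget, delta_budget):
        self.eps_budget, self.delta_budget = eps_budget, delta_budget
        self.eps_spent, self.delta_spent = 0.0, 0.0

    def spend(self, eps, delta=0.0):
        """Record one (eps, delta)-DP mechanism; refuse to exceed the budget."""
        if (self.eps_spent + eps > self.eps_budget
                or self.delta_spent + delta > self.delta_budget):
            raise RuntimeError("privacy budget exceeded")
        self.eps_spent += eps
        self.delta_spent += delta
\end{verbatim}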
The basic blueprint for designing a differentially private
additive-noise mechanism that implements a given functionality
consists of the following steps: approximating the functionality by a
sequential composition of bounded-sensitivity functions; choosing
parameters of additive noise; and performing privacy analysis of the
resulting mechanism. We follow this approach in
Section~\ref{sec:approach}.
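As a concrete, if simplistic, illustration of this blueprint for a single sum query, one can bound the sensitivity by clipping each record's contribution and then add Laplace noise calibrated to that sensitivity. The clipping threshold and privacy parameter below are placeholders, not values used in this work.
\begin{verbatim}
import numpy as np

def dp_clipped_sum(values, clip=1.0, eps=0.5, seed=0):
    """Sketch of the additive-noise blueprint for one sum query:
    (1) clip each record's contribution to [0, clip], so the query has
        L1 sensitivity `clip`;
    (2) add Laplace noise with scale clip / eps, giving eps-DP for this
        single release (repeated releases consume budget via composition)."""
    rng = np.random.default_rng(seed)
    clipped = np.clip(values, 0.0, clip)
    return clipped.sum() + rng.laplace(scale=clip / eps)
\end{verbatim}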
\subsection{Deep Learning}
Deep neural networks, which are remarkably effective for many machine
learning tasks, define parameterized functions from inputs
to outputs as compositions of many layers of basic building blocks, such as affine transformations and simple nonlinear functions. Commonly used examples of the latter are sigmoids and rectified linear units (ReLUs). By varying parameters of these blocks, we can ``train'' such a parameterized function with the goal of fitting any given finite set of input/output examples.
More precisely, we define a loss
function $\calL$ that represents the penalty for mismatching the
training data. The loss $\calL(\btheta)$ on parameters $\btheta$ is
the average of the loss over the training examples $\{x_1, \ldots,
x_N\}$, so $\calL(\btheta) = \frac{1}{N}\sum_i \calL(\btheta, x_i)$.
Training consists in finding $\btheta$ that yields an acceptably small
loss, hopefully the smallest loss (though in practice we seldom expect to reach
an exact global minimum).
For complex networks, the loss function $\calL$ is usually non-convex
and difficult to minimize. In practice, the minimization is often done
by the mini-batch stochastic gradient descent (SGD) algorithm. In this
algorithm, at each step, one forms a batch $B$ of random examples and
computes $\g_B = 1/|B| \sum_{x\in B} \nabla_\btheta\calL(\btheta, x)$
as an estimate of the gradient $\nabla_\btheta\calL(\btheta)$. Then
$\btheta$ is updated following the gradient direction $-\g_B$ towards
a local minimum.
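A minimal sketch of this procedure on a toy least-squares loss (standing in for the generally non-convex deep-learning loss, with illustrative names and hyperparameters) is:
\begin{verbatim}
import numpy as np

def minibatch_sgd(X, y, batch_size=32, lr=0.1, steps=1000, seed=0):
    """Toy mini-batch SGD for the per-example loss
    L(theta, (x, y)) = 0.5 * (x @ theta - y)**2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ theta - yb) / batch_size  # g_B: averaged per-example gradients
        theta -= lr * grad                            # step along -g_B
    return theta
\end{verbatim}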
Several systems have been built to support the definition of neural
networks, to enable efficient training, and then to perform efficient
inference (execution for fixed parameters) <|cite_start|> (Reference: Lua---an extensible extension language: This paper describes Lua, a language for extending applications. Lua combines procedural features with powerful data description facilities, by using a simple, yet powerful, mechanism of tables . This mechanism implements the concepts of records, arrays and recursive data types (pointers), and adds some object‐oriented facilities, such as methods with dynamic dispatching.) <|cite_end|> <|cite_start|> (Reference: Torch7: A Matlab-like Environment for Machine Learning: Torch7 is a versatile numeric computing framework and machine learning library that extends Lua. Its goal is to provide a flexible environment to design and train learning machines. Flexibility is obtained via Lua, an extremely lightweight scripting language. High performance is obtained via efficient OpenMP/SSE and CUDA implementations of low-level numeric routines. Torch7 can easily be interfaced to third-party software thanks to Lua’s light interface.) <|cite_end|>. We base our work on
TensorFlow, an open-source dataflow engine released by
Google. TensorFlow allows the programmer to define
large computation graphs from basic operators, and to distribute their
execution across a heterogeneous distributed system. TensorFlow
automates the creation of the computation graphs for gradients; it
also makes it easy to batch computation. <|paper_end|> | [
"<|reference_start|> {Differential privacy and robust statistics: We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems. <|reference_end|>",
"<|reference_start|> Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds: \"Concentrated differential privacy\" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of \"approximate concentrated differential privacy.\" <|reference_end|>",
"<|reference_start|> Privacy integrated queries: an extensible platform for privacy-preserving\ndata analysis: We report on the design and implementation of the Privacy Integrated Queries (PINQ) platform for privacy-preserving data analysis. PINQ provides analysts with a programming interface to unscrubbed data through a SQL-like language. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's unconditional structural guarantees require no trust placed in the expertise or diligence of the analysts, substantially broadening the scope for design and deployment of privacy-preserving data analysis, especially by non-experts. <|reference_end|>",
"<|reference_start|> Lua---an extensible extension language: This paper describes Lua, a language for extending applications. Lua combines procedural features with powerful data description facilities, by using a simple, yet powerful, mechanism of tables . This mechanism implements the concepts of records, arrays and recursive data types (pointers), and adds some object‐oriented facilities, such as methods with dynamic dispatching. <|reference_end|>"
] | [
12,
16,
17,
18
] | {"<|multi_cite_1_1|>": "arxiv-66180", "<|multi_cite_1_2|>": "ss-1889156", "<|multi_cite_1_3|>": "arxiv-70752", "<|multi_cite_1_4|>": "arxiv-70550", "<|multi_cite_1_5|>": "ss-805362", "<|cite_3|>": "ss-1254557", "<|cite_4|>": "ss-898974", "<|multi_cite_6_1|>": "ss-1076001", "<|multi_cite_6_2|>": "ss-1372933", "<|multi_cite_6_3|>": "ss-767290", "<|cite_7|>": "ss-1328165", "<|multi_cite_8_1|>": "ss-1328165", "<|multi_cite_8_2|>": "ss-1282478", "<|multi_cite_9_1|>": "ss-772294", "<|multi_cite_9_2|>": "arxiv-52299", "<|multi_cite_9_3|>": "arxiv-93493", "<|multi_cite_9_4|>": "arxiv-97459", "<|cite_10|>": "ss-733573", "<|multi_cite_11_1|>": "ss-801829", "<|multi_cite_11_2|>": "ss-963079"} |
1504.05229 | <|paper_start|> Title: Poisson Matrix Recovery and Completion
Abstract: Poisson Matrix Recovery and Completion: We extend the theory of low-rank matrix recovery and completion to the case when Poisson observations for a linear combination or a subset of the entries of a matrix are available, which arises in various applications with count data. We consider the usual matrix recovery formulation through maximum likelihood with proper constraints on the matrix $M$ of size $d_1$-by-$d_2$, and establish theoretical upper and lower bounds on the recovery error. Our bounds for matrix completion are nearly optimal up to a factor on the order of $\mathcal{O}(\log(d_1 d_2))$. These bounds are obtained by combining techniques for compressed sensing for sparse vectors with Poisson noise and for analyzing low-rank matrices, as well as adapting the arguments used for one-bit matrix completion \cite{davenport20121} (although these two problems are different in nature); the adaptation requires new techniques exploiting properties of the Poisson likelihood function and tackling the difficulties posed by the locally sub-Gaussian characteristic of the Poisson distribution. Our results highlight a few important distinctions of the Poisson case compared to prior work, including having to impose a minimum signal-to-noise requirement on each observed entry and a gap in the upper and lower bounds. We also develop a set of efficient iterative algorithms and demonstrate their good performance on synthetic examples and real data.
Introduction
Recovering a low-rank matrix $M$ with Poisson observations is a key problem that arises from various real-world applications with count data, such as nuclear medicine, low-dose x-ray imaging <|cite_start|> (Reference: Optical imaging and spectroscopy: Preface. Acknowledgments. 1. Past, present and future. 1.1 Three revolutions. 1.2 Computational imaging. 1.3 Overview. 1.4 The fourth revolution. Problems. 2. Geometric imaging. 2.1 Visibility. 2.2 Optical elements. 2.3 Focal imaging. 2.4 Imaging systems. 2.5 Pinhole and coded aperture imaging. 2.6 Projection tomography. 2.7 Reference structure tomography. Problems. 3. Analysis. 3.1 Analytical tools. 3.2 Fields and transformations. 3.3 Fourier analysis. 3.4 Transfer functions and filters. 3.5 The Fresnel transformation. 3.6 The Whittaker-Shannon sampling theorem. 3.7 Discrete analysis of linear transformations. 3.8 Multiscale sampling. 3.9 B-splines. 3.10 Wavelets. Problems. 4. Wave imaging. 4.1 Waves and fields. 4.2 Wave model for optical fields. 4.3 Wave propagation. 4.4 Diffraction. 4.5 Wave analysis of optical elements. 4.6 Wave propagation through thin lenses. 4.7 Fourier analysis of wave imaging. 4.8 Holography. Problems. 5. Detection. 5.1 The Optoelectronic interface. 5.2 Quantum mechanics of optical detection. 5.3 Optoelectronic detectors. 5.3.1 Photoconductive detectors. 5.3.2 Photodiodes. 5.4 Physical characteristics of optical detectors. 5.5 Noise. 5.6 Charge coupled devices. 5.7 Active pixel sensors. 5.8 Infrared focal plane arrays. Problems. 6. Coherence imaging. 6.1 Coherence and spectral fields. 6.2 Coherence propagation. 6.3 Measuring coherence. 6.4 Fourier analysis of coherence imaging. 6.5 Optical coherence tomography. 6.6 Modal analysis. 6.7 Radiometry. Problems. 7. Sampling. 7.1 Samples and pixels. 7.2 Image plane sampling on electronic detector arrays. 7.3 Color imaging. 7.4 Practical sampling models. 7.5 Generalized sampling. Problems. 8. Coding and inverse problems. 8.1 Coding taxonomy. 8.2 Pixel coding. 8.3 Convolutional coding. 8.4 Implicit coding. 8.5 Inverse problems. Problems. 9. Spectroscopy. 9.1 Spectral measurements. 9.2 Spatially dispersive spectroscopy. 9.3 Coded aperture spectroscopy. 9.4 Interferometric Spectroscopy. 9.5 Resonant spectroscopy. 9.6 Spectroscopic filters. 9.7 Tunable filters. 9.8 2D spectroscopy. Problems. 10. Computational imaging. 10.1 Imaging systems. 10.2 Depth of field. 10.3 Resolution. 10.4 Multiple aperture imaging. 10.5 Generalized sampling revisited. 10.6 Spectral imaging. Problems. References.) <|cite_end|>, network traffic analysis <|cite_start|> (Reference: Inference of poisson count processes using low-rank tensor data: A novel regularizer capturing the tensor rank is introduced in this paper as the key enabler for completion of three-way data arrays with missing entries. The novel regularized imputation approach induces sparsity in the factors of the tensor's PARAFAC decomposition, thus reducing its rank. The focus is on count processes which emerge in diverse applications ranging from genomics to computer and social networking. Based on Poisson count data, a maximum aposteriori (MAP) estimator is developed using the Kullback-Leibler divergence criterion. This probabilistic approach also facilitates incorporation of correlated priors regularizing the rank, while endowing the tensor imputation method with extra smoothing and prediction capabilities. 
Tests on simulated and real datasets corroborate the sparsifying regularization effect, and demonstrate recovery of 15% missing RNA-sequencing data with an inference error of -12dB.) <|cite_end|>, and call center data <|cite_start|> (Reference: Analysis of call centre arrival data using singular value decomposition: SUMMARY We consider the general problem of analysing and modelling call centre arrival data. A method is described for analysing such data using singular value decomposition (SVD). We illustrate that the outcome from the SVD can be used for data visualization, detection of anomalies (outliers), and extraction of significant features from noisy data. The SVD can also be employed as a data reduction tool. Its application usually results in a parsimonious representation of the original data without losing much information. We describe how one can use the reduced data for some further, more formal statistical analysis. For example, a shortterm forecasting model for call volumes is developed, which is multiplicative with a time series component that depends on day of the week. We report empirical results from applying the proposed method to some real data collected at a call centre of a large-scale U.S. financial organization. Some issues about forecasting call volumes are also discussed. Copyright # 2005 John Wiley & Sons, Ltd.) <|cite_end|>. There the observations are Poisson counts whose intensities are determined by the matrix, either through a subset of its entries or linear combinations of its entries.
Thus far much success has been achieved in solving the matrix completion and recovery problems using nuclear norm minimization, partly inspired by the theory of compressed sensing <|cite_start|> (Reference: {Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?: Suppose we are given a vector f in a class FsubeRopf<sup>N </sup>, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision epsi in the Euclidean (lscr<sub>2</sub>) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|<sub>(n)</sub>lesRmiddotn<sup>-1</sup>p/, where R>0 and p>0. Suppose that we take measurements y<sub>k</sub>=langf<sup># </sup>,X<sub>k</sub>rang,k=1,...,K, where the X<sub>k</sub> are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0<p<1 and with overwhelming probability, our reconstruction f<sup>t</sup>, defined as the solution to the constraints y<sub>k</sub>=langf<sup># </sup>,X<sub>k</sub>rang with minimal lscr<sub>1</sub> norm, obeys parf-f<sup>#</sup>par<sub>lscr2</sub>lesC<sub>p </sub>middotRmiddot(K/logN)<sup>-r</sup>, r=1/p-1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed) <|cite_end|> <|cite_start|> (Reference: {Compressed sensing: Signal recovery is a very practical and useful concept in both signal processing and communication area. Basically in compressed sensing, we are interested in compressing a signal, which is sparse in some domain and then, construct the original signal from the compressed one by convex optimization. This is very important to collect as less as measurements from the original signal while having the minimum error in the constructed signal.) <|cite_end|>. It has been shown that when $M$ is low rank, it can be recovered from observations of a subset or a linear combination of its entries (see, e.g. <|cite_start|> (Reference: Exact Matrix Completion via Convex Optimization: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. 
However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.) <|cite_end|> <|cite_start|> (Reference: Matrix Completion from a Few Entries: Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.) <|cite_end|> <|cite_start|> (Reference: The Power of Convex Relaxation: Near-Optimal Matrix Completion: This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).) <|cite_end|> <|cite_start|> (Reference: Guaranteed Minimum-Rank Solutions of Linear Matrix equations via Nuclear Norm Minimization: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case.
In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large.
The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.) <|cite_end|> <|cite_start|> (Reference: A Simpler Approach to Matrix Completion: This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.) <|cite_end|> <|cite_start|> (Reference: A Singular Value Thresholding Algorithm for Matrix Completion: This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.) 
<|cite_end|> <|cite_start|> (Reference: Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix: This paper studies algorithms for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of classical PCA, and arises in a number of application domains, including image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, it can be exactly solved via a convex programming surrogate that combines nuclear norm minimization and `1-norm minimization. This paper develops and compares two complementary approaches for solving this convex program. The first is an accelerated proximal gradient algorithm directly applied to the primal; while the second is a gradient algorithm applied to the dual problem. Both are several orders of magnitude faster than the previous state-of-the-art algorithm for this problem, which was based on iterative thresholding. Simulations demonstrate the performance improvement that can be obtained via these two algorithms, and clarify their relative merits.) <|cite_end|> <|cite_start|> (Reference: Spectral regularization algorithms for learning large incomplete Matrices: We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple ...) <|cite_end|>).
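For reference, the noiseless nuclear-norm completion program from this line of work (not the Poisson formulation studied here) can be written in a few lines with a generic convex solver; the cvxpy modeling below is purely illustrative. Its cost grows quickly with the matrix size, which is one motivation for the first-order methods discussed later.
\begin{verbatim}
import cvxpy as cp

def complete_by_nuclear_norm(M_obs, mask):
    """Among all matrices agreeing with the observed entries (mask == 1),
    return one of minimum nuclear norm."""
    X = cp.Variable(M_obs.shape)
    constraints = [cp.multiply(mask, X) == cp.multiply(mask, M_obs)]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    return X.value
\end{verbatim}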
Earlier work on matrix completion typically assume that the observations are noiseless, i.e., we may directly observe a subset of entries of $M$. In the real world, however, the observations are noisy, which is the focus of the subsequent work <|cite_start|> (Reference: Matrix Completion from Noisy Entries: Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the `Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by Keshavan et al.(2009), based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.) <|cite_end|> <|cite_start|> (Reference: Matrix Completion With Noise: On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.) <|cite_end|> <|cite_start|> (Reference: Estimation of (near) low-rank matrices with noise and high-dimensional scaling: We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ+ ∈ ℝk×p that is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.) 
<|cite_end|> <|cite_start|> (Reference: Restricted strong convexity and weighted matrix completion: Optimal bounds with noise: We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the "spikiness" and "low-rankness" of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an $M$-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in weighted Frobenius norm for recovering matrices lying with $\ell_q$-"balls" of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.) <|cite_end|> <|cite_start|> (Reference: Estimation of High-Dimensional low-rank Matrices: Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m x T-matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several applications, we consider estimation of matrix A under the assumption that it has small rank. This can be viewed as dimension reduction or sparsity assumption. In order to shrink toward a low-rank representation, we investigate penalized least squares estimators with a Schatten-p quasi-norm penalty term, p ≤ 1. We study these estimators under two possible assumptions—a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator/Frobenius norm." The main results are stated as nonasymptotic upper bounds on the prediction risk and on the Schatten-q risk of the estimators, where q ∈ [p, 2]. The rates that we obtain for the prediction risk are of the form rm/N (for m = T), up to logarithmic factors, where r is the rank of A. The particular examples of multi-task learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product, we derive bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings S M p → S M 2 , p < 1, which are of independent interest.) <|cite_end|> <|cite_start|> (Reference: Error bounds for maximum likelihood matrix completion under sparse factor models: This paper examines a general class of matrix completion tasks where entry wise observations of the matrix are subject to random noise or corruption. Our particular focus here is on settings where the matrix to be estimated follows a sparse factor model, in the sense that it may be expressed as the product of two matrices, one of which is sparse. 
We analyze the performance of a sparsity-penalized maximum likelihood approach to such problems to provide a general-purpose estimation result applicable to any of a number of noise/corruption models, and describe its implications in two stylized scenarios - one characterized by additive Gaussian noise, and the other by highly-quantized one-bit observations. We also provide some supporting empirical evidence to validate our theoretical claims in the Gaussian setting.) <|cite_end|>, most of which consider a scenario in which the observations are contaminated by Gaussian noise. The theory of low-rank matrix recovery under Poisson noise has been less developed. Moreover, Poisson problems are quite different from their Gaussian counterparts, since under Poisson noise the variance of the observations is proportional to the signal intensity. In addition, instead of using the $\ell_2$ error as the data-fit term, we need to work with a highly non-linear likelihood function.
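The signal-dependence of Poisson noise is easy to see numerically: the sample variance of Poisson draws tracks the intensity itself, rather than staying fixed as for additive Gaussian noise. A quick illustrative check:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for lam in [1.0, 10.0, 100.0]:
    samples = rng.poisson(lam, size=100000)
    # sample mean and sample variance are both approximately lam
    print(lam, samples.mean(), samples.var())
\end{verbatim}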
Recently there has also been work that consider the more general noise models, including noisy 1-bit observations <|cite_start|> (Reference: 1-Bit Matrix Completion: In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha, and rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we also show that if instead of recovering M we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems as well as illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5). Surprisingly, the approach based on binary data performs significantly better.) <|cite_end|>, which may be viewed as a case where the observations are Bernoulli random variables whose parameters depend on a underlying low-rank matrix; <|cite_start|> (Reference: Noisy Matrix Completion under Sparse Factor Models: This paper examines a general class of noisy matrix completion tasks where the goal is to estimate a matrix from observations obtained at a subset of its entries, each of which is subject to random noise or corruption. Our specific focus is on settings where the matrix to be estimated is well-approximated by a product of two (a priori unknown) matrices, one of which is sparse. Such structural models - referred to here as "sparse factor models" - have been widely used, for example, in subspace clustering applications, as well as in contemporary sparse modeling and dictionary learning tasks. Our main theoretical contributions are estimation error bounds for sparsity-regularized maximum likelihood estimators for problems of this form, which are applicable to a number of different observation noise or corruption models. Several specific implications are examined, including scenarios where observations are corrupted by additive Gaussian noise or additive heavier-tailed (Laplace) noise, Poisson-distributed observations, and highly-quantized (e.g., one-bit) observations. We also propose a simple algorithmic approach based on the alternating direction method of multipliers for these tasks, and provide experimental evidence to support our error analyses.) 
<|cite_end|> <|cite_start|> (Reference: Estimation error guarantees for Poisson denoising with sparse and structured dictionary models: Poisson processes are commonly used models for describing discrete arrival phenomena arising, for example, in photon-limited scenarios in low-light and infrared imaging, astronomy, and nuclear medicine applications. In this context, several recent efforts have evaluated Poisson denoising methods that utilize contemporary sparse modeling and dictionary learning techniques designed to exploit and leverage (local) shared structure in the images being estimated. This paper establishes a theoretical foundation for such procedures. Specifically, we formulate sparse and structured dictionary-based Poisson denoising methods as constrained maximum likelihood estimation strategies, and establish performance bounds for their mean-square estimation error using the framework of complexity penalized maximum likelihood analyses.) <|cite_end|> consider the case where {\it all} entries of the low-rank matrix are observed and the observations are Poisson counts of the entries of the underlying matrix, and an upper bound is established (without a lower bound). In the compressed sensing literature, there is a line of research for sparse signal recovery in the presence of Poisson noise <|cite_start|> (Reference: Compressed sensing performance bounds under Poisson noise: This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical $\ell_2$--$\ell_1$ minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.) <|cite_end|> <|cite_start|> (Reference: Performance bounds for expander-based compressed sensing in Poisson noise: This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. 
In this paper, we develop a novel sensing paradigm based on expander graphs and propose a MAP algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process.) <|cite_end|> <|cite_start|> (Reference: Minimax Optimal Rates for Poisson Inverse Problems with Physical Constraints: This paper considers fundamental limits for solving sparse inverse problems in the presence of Poisson noise with physical constraints. Such problems arise in a variety of applications, including photon-limited imaging systems based on compressed sensing (CS). Most prior theoretical results in CS and related inverse problems apply to idealized settings where the noise is independent identically distributed and do not account for signal-dependent noise and physical sensing constraints. Prior results on Poisson CS with signal-dependent noise and physical constraints provided upper bounds on mean-squared error (MSE) performance for a specific class of estimators. However, it was unknown whether those bounds were tight or if other estimators could achieve significantly better performance. This paper provides minimax lower bounds on MSE for sparse Poisson inverse problems under physical constraints. The lower bounds are complemented by minimax upper bounds which match the lower bounds for certain problem sizes and noise levels. The source of the mismatch between upper and lower bounds for other problem sizes and noise levels is discussed. The upper and lower bounds reveal that due to the interplay between the Poisson noise model, the sparsity constraint and the physical constraints: 1) the MSE upper bound does not depend on the sample size n other than to ensure the sensing matrix satisfies Restricted Isometry Property-like conditions and the intensity T of the input signal plays a critical role and 2) the MSE upper bound has two distinct regimes, corresponding to low and high intensities, and the transition point from the low-intensity to high-intensity regime depends on the sparsifying basis D. In the low-intensity regime, the MSE upper bound is independent of T while in the high-intensity regime, the MSE upper bound scales as (slog p/T), where s is the sparsity level, p is the number of pixels or parameters, and T is the signal intensity.) <|cite_end|> and the corresponding performance bounds. The recently developed SCOPT <|cite_start|> (Reference: A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions: We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. 
Via a careful dual formulation and a novel analytic stepsize selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.) <|cite_end|> <|cite_start|> (Reference: Composite Self-Concordant Minimization: We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly non-smooth convex function, endowed with an easily computable proximal operator. We theoretically establish the convergence of our framework without relying on the usual Lipschitz gradient assumption on the smooth part. An important highlight of our work is a new set of analytic step-size selection and correction procedures based on the structure of the problem. We describe concrete algorithmic instances of our framework for several interesting applications and demonstrate them numerically on both synthetic and real data.) <|cite_end|> algorithm can also be used to solve the Poisson compressed sensing of sparse signals but may not be directly applied for Poisson matrix recovery.
In this paper, we extend the theory of low-rank matrix recovery to two related problems with Poisson observations: matrix recovery from compressive measurements, and matrix completion from observations of a subset of its entries. The matrix recovery problem from compressive measurements is formulated as a regularized maximum likelihood estimator with Poisson likelihood. We establish performance bounds by combining techniques for recovering sparse signals under Poisson noise <|cite_start|> (Reference: Compressed sensing performance bounds under Poisson noise: This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical $\ell_2$--$\ell_1$ minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.) <|cite_end|> and for establishing bounds in the case of low-rank matrices <|cite_start|> (Reference: Compressed sensing, sparse approximation, and low-rank matrix estimation: The importance of sparse signal structures has been recognized in a plethora of applications ranging from medical imaging to group disease testing to radar technology. It has been shown in practice that various signals of interest may be (approximately) sparsely modeled, and that sparse modeling is often beneficial, or even indispensable to signal recovery. Alongside an increase in applications, a rich theory of sparse and compressible signal recovery has recently been developed under the names compressed sensing (CS) and sparse approximation (SA). This revolutionary research has demonstrated that many signals can be recovered from severely undersampled measurements by taking advantage of their inherent low-dimensional structure. More recently, an offshoot of CS and SA has been a focus of research on other low-dimensional signal structures such as matrices of low rank. Low-rank matrix recovery (LRMR) is demonstrating a rapidly growing array of important applications such as quantum state tomography, triangulation from incomplete distance measurements, recommender systems (e.g., the Netflix problem), and system identification and control. In this dissertation, we examine CS, SA, and LRMR from a theoretical perspective. 
We consider a variety of different measurement and signal models, both random and deterministic, and mainly ask two questions. How many measurements are necessary? How large is the recovery error? We give theoretical lower bounds for both of these questions, including oracle and minimax lower bounds for the error. However, the main emphasis of the thesis is to demonstrate the efficacy of convex optimization---in particular l1 and nuclear-norm minimization based programs---in CS, SA, and LRMR. We derive upper bounds for the number of measurements required and the error derived by convex optimization, which in many cases match the lower bounds up to constant or logarithmic factors. The majority of these results do not require the restricted isometry property (RIP), a ubiquitous condition in the literature.) <|cite_end|> <|cite_start|> (Reference: Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements: This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, with high probability, the recovery error from noisy data is within a constant of three targets: (1) the minimax risk, (2) an “oracle” error that would be available if the column space of the matrix were known, and (3) a more adaptive “oracle” error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP).) <|cite_end|>. Our results demonstrate that as the intensity of the signal increases, the upper bound on the normalized error decays at certain rate depending how well the matrix can be approximated by a low-rank matrix.
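Schematically, the regularized objective behind this formulation pairs the Poisson negative log-likelihood of the measurements with a low-rank-promoting penalty. The sketch below, with an illustrative sensing matrix A and penalty weight tau and dropping terms independent of M, is only meant to make the ingredients concrete; it does not reproduce the exact estimator or constants analyzed later in the paper.
\begin{verbatim}
import numpy as np

def poisson_nll_plus_nuclear(M, A, y, tau):
    """Sketch: y_i ~ Poisson((A vec(M))_i).  Up to terms not depending on M,
    the negative log-likelihood is sum_i [lambda_i - y_i * log lambda_i],
    with lambda = A vec(M) (assumed strictly positive); a nuclear-norm term
    penalizes rank."""
    lam = A @ M.ravel()
    nll = np.sum(lam - y * np.log(lam))
    nuclear = np.linalg.svd(M, compute_uv=False).sum()
    return nll + tau * nuclear
\end{verbatim}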
The matrix completion problem from partial observations is formulated as a maximum likelihood problem with proper constraints on the matrix $M$ (nuclear norm bound $\|M\|_* \leq \alpha\sqrt{r d_1 d_2}$ for some constant $\alpha$ and bounded entries $\beta \leq M_{ij} \leq\alpha$)\footnote{Note that the formulation differs from the one-bit matrix completion case in that we also require a lower bound on each entry of the matrix. This is consistent with an intuition that the value of each entry can be viewed as the signal-to-noise ratio (SNR) for a Poisson observation, and hence this essentially poses a requirement for the minimum SNR.}.
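For readability, this constrained maximum-likelihood formulation can be displayed schematically as follows, where $\Omega$ denotes the index set of observed entries and $Y_{ij}$ the corresponding Poisson counts (the precise constants and constraint set are specified later; the display merely restates the ingredients above):
\begin{align*}
\widehat{M} \in \arg\max_{M}\;& \sum_{(i,j)\in\Omega} \bigl( Y_{ij}\log M_{ij} - M_{ij} \bigr)\\
\text{subject to}\;& \|M\|_* \le \alpha\sqrt{r d_1 d_2}, \qquad \beta \le M_{ij} \le \alpha \ \text{for all } (i,j).
\end{align*}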
We also establish upper and lower bounds on the recovery error by adapting the arguments used for one-bit matrix completion <|cite_start|> (Reference: 1-Bit Matrix Completion: In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha, and rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we also show that if instead of recovering M we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems as well as illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5). Surprisingly, the approach based on binary data performs significantly better.) <|cite_end|>. The upper and lower bounds nearly match up to a factor on the order of $\mathcal{O}(\log(d_1 d_2))$, which shows that the convex relaxation formulation for Poisson matrix completion is nearly optimal. We conjecture that such a gap is inherent to the Poisson problem, in the sense that it may not be an artifact of our proof techniques.
Moreover, we highlight a few important distinctions of Poisson matrix completion compared with prior work on matrix completion in the absence of noise or with Gaussian noise: (1) although our arguments are adapted from one-bit matrix completion (where the upper and lower bounds nearly match), in the Poisson case there is a gap between the upper and lower bounds, possibly because the Poisson distribution is only locally sub-Gaussian; in our proof, we notice that arguments based on bounding all moments of the observations, which usually yield tight bounds for prior results with sub-Gaussian observations, do not yield tight bounds here; (2) we need a lower bound on each matrix entry in the maximum likelihood formulation, which can be viewed as a requirement on the minimum signal-to-noise ratio (since the signal-to-noise ratio (SNR) of a Poisson observation with intensity $I$ is $\sqrt{I}$).
We also present a set of efficient algorithms, which can be used for matrix recovery based on either compressive measurements or partial observations. These algorithms include two generic (gradient-descent-based) methods, namely proximal and accelerated proximal gradient descent, as well as an algorithm tailored to Poisson problems, the Penalized Maximum Likelihood Singular Value Threshold (PMLSVT) method. PMLSVT is derived by expanding the likelihood function locally in each iteration and finding an exact solution to the local approximation problem, which results in a simple singular value thresholding procedure <|cite_start|> (Reference: A Singular Value Thresholding Algorithm for Matrix Completion: This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.) <|cite_end|>. The performance of the two generic algorithms is analyzed theoretically. PMLSVT is related to <|cite_start|> (Reference: An accelerated gradient method for trace norm minimization: We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such formulation finds applications in many machine learning tasks including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for solving such class of problems converges as O(1/√k), where k is the iteration counter.
In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k2) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms.) <|cite_end|> <|cite_start|> (Reference: Structured regularizers for high-dimensional problems: Statistical and computational issues: Regularization is a widely used technique throughout statistics, machine learning, and applied mathematics. Modern applications in science and engineering lead to massive and complex data sets, which motivate the use of more structured types of regularizers. This survey provides an overview of the use of structured regularization in high-dimensional statistics, including regularizers for group-structured and hierarchical sparsity, low-rank matrices, additive and multiplicative matrix decomposition, and high-dimensional nonparametric models. It includes various examples with motivating applications; it also covers key aspects of statistical theory and provides some discussion of efficient algorithms. 233 A nn ua l R ev ie w o f St at is tic s an d It s A pp lic at io n 20 14 .1 :2 33 -2 53 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by $ {i nd iv id ua lU se r. di sp la yN am e} o n 01 /0 9/ 14 . F or p er so na l u se o nl y. ST01CH11-Wainwright ARI 29 November 2013 14:44) <|cite_end|> <|cite_start|> (Reference: Fast global convergence rates of gradient methods for high-dimensional
statistical recovery: Many statistical M-estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizes We analyze the convergence rates of first-order gradient methods for solving such problems within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions— namely, strong convexity and smoothness conditions—that underlie classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that Nesterov's first-order method [12] has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical Euclidean distance between the true unknown parameter θ* and the optimal solution ^θ. This globally linear rate is substantially faster than previous analyses of global convergence for specific methods that yielded only sublinear rates. Our analysis applies to a wide range of M-estimators and statistical models, including sparse linear regression using Lasso (l1-regularized regression), group Lasso, block sparsity, and low-rank matrix recovery using nuclear norm regularization. Overall, this result reveals an interesting connection between statistical precision and computational efficiency in high-dimensional estimation.) <|cite_end|> and can be viewed as a special case where a simple closed form solution for the algorithm exists. Good performance of the PMLSVT is demonstrated with synthetic and real data including solar flare images and bike sharing count data. We show that the PMLSVT method has much lower complexity than solving the problem directly via semidefinite program and it has fairly good accuracy.
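To make the singular value thresholding step concrete, the following is a minimal NumPy sketch of the soft-thresholding operator on singular values that SVT-style iterations apply; the function name, threshold value, and toy data are ours for illustration and are not taken from the paper.
\begin{verbatim}
import numpy as np

def singular_value_threshold(Y, tau):
    """Soft-threshold the singular values of Y at level tau.

    This is the proximal operator of tau * ||X||_* (nuclear norm),
    i.e., the building block of SVT-style iterations.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-thresholding of singular values
    return (U * s_shrunk) @ Vt            # equals U @ diag(s_shrunk) @ Vt

# Illustrative usage: shrink a noisy low-rank matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 40))       # rank-5 signal
X_hat = singular_value_threshold(M + 0.1 * rng.normal(size=M.shape), tau=1.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))
\end{verbatim}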
While working on this paper we became aware of a parallel work <|cite_start|> (Reference: Low Rank Matrix Completion with Exponential Family Noise: The matrix completion problem consists in reconstructing a matrix from a sample of entries, possibly observed with noise. A popular class of estimator, known as nuclear norm penalized estimators, are based on minimizing the sum of a data fitting term and a nuclear norm penalization. Here, we investigate the case where the noise distribution belongs to the exponential family and is sub-exponential. Our framework alllows for a general sampling scheme. We first consider an estimator defined as the minimizer of the sum of a log-likelihood term and a nuclear norm penalization and prove an upper bound on the Frobenius prediction risk. The rate obtained improves on previous works on matrix completion for exponential family. When the sampling distribution is known, we propose another estimator and prove an oracle inequality w.r.t. the Kullback-Leibler prediction risk, which translates immediatly into an upper bound on the Frobenius prediction risk. Finally, we show that all the rates obtained are minimax optimal up to a logarithmic factor.) <|cite_end|> which also studies performance bounds for low rank matrix completion with exponential family noise, using a different approach for the proof (Poisson noise is a special case of theirs). Their upper bound for the mean square error (MSE) is on the order of $\mathcal{O}\left(\log(d_1 + d_2) r\max\{d_1, d_2\}/m\right)$ (our upper bound is $\mathcal{O}\left(\log(d_1 d_2)[r(d_1+d_2)/m]^{1/2}\right)$), and their lower bound is on the order of $\mathcal{O}\left(r\max\{d_1, d_2\}/m\right)$ (versus our lower bound of $\mathcal{O}\left([r(d_1+d_2)/m]^{1/2}\right)$). There might be two reasons for the difference. First, our sampling model (consistent with one-bit matrix completion in <|cite_start|> (Reference: 1-Bit Matrix Completion: In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha, and rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we also show that if instead of recovering M we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems as well as illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5).
Surprisingly, the approach based on binary data performs significantly better.) <|cite_end|>) assumes {\it sampling without replacement};
therefore there are at most $d_1 d_2$ observations, and each entry may be observed at most once. In contrast, <|cite_start|> (Reference: Low Rank Matrix Completion with Exponential Family Noise: The matrix completion problem consists in reconstructing a matrix from a sample of entries, possibly observed with noise. A popular class of estimator, known as nuclear norm penalized estimators, are based on minimizing the sum of a data fitting term and a nuclear norm penalization. Here, we investigate the case where the noise distribution belongs to the exponential family and is sub-exponential. Our framework alllows for a general sampling scheme. We first consider an estimator defined as the minimizer of the sum of a log-likelihood term and a nuclear norm penalization and prove an upper bound on the Frobenius prediction risk. The rate obtained improves on previous works on matrix completion for exponential family. When the sampling distribution is known, we propose another estimator and prove an oracle inequality w.r.t. the Kullback-Leibler prediction risk, which translates immediatly into an upper bound on the Frobenius prediction risk. Finally, we show that all the rates obtained are minimax optimal up to a logarithmic factor.) <|cite_end|> assumes {\it sampling with replacement}; therefore there can be multiple observations for the same entry.
Since our result heavily depends on the sampling model, we suspect this may be a main reason for the difference. Another possible reason could be due to different formulations. The formulation for matrix completion in our paper is a constrained optimization with an exact {\it upper bound on the matrix nuclear norm}, whereas <|cite_start|> (Reference: Low Rank Matrix Completion with Exponential Family Noise: The matrix completion problem consists in reconstructing a matrix from a sample of entries, possibly observed with noise. A popular class of estimator, known as nuclear norm penalized estimators, are based on minimizing the sum of a data fitting term and a nuclear norm penalization. Here, we investigate the case where the noise distribution belongs to the exponential family and is sub-exponential. Our framework alllows for a general sampling scheme. We first consider an estimator defined as the minimizer of the sum of a log-likelihood term and a nuclear norm penalization and prove an upper bound on the Frobenius prediction risk. The rate obtained improves on previous works on matrix completion for exponential family. When the sampling distribution is known, we propose another estimator and prove an oracle inequality w.r.t. the Kullback-Leibler prediction risk, which translates immediatly into an upper bound on the Frobenius prediction risk. Finally, we show that all the rates obtained are minimax optimal up to a logarithmic factor.) <|cite_end|> uses a regularized optimization with a regularization parameter $\lambda$ (which is indirectly related to the nuclear norm of the solution), but there is no direct control of the matrix nuclear norm. Note that their upper and lower bounds also have a gap on the order of $\log (d_1 + d_2)$, which is consistent with our result.
On the other hand, compared with the more general framework for $M$-estimator <|cite_start|> (Reference: Structured regularizers for high-dimensional problems: Statistical and computational issues: Regularization is a widely used technique throughout statistics, machine learning, and applied mathematics. Modern applications in science and engineering lead to massive and complex data sets, which motivate the use of more structured types of regularizers. This survey provides an overview of the use of structured regularization in high-dimensional statistics, including regularizers for group-structured and hierarchical sparsity, low-rank matrices, additive and multiplicative matrix decomposition, and high-dimensional nonparametric models. It includes various examples with motivating applications; it also covers key aspects of statistical theory and provides some discussion of efficient algorithms. 233 A nn ua l R ev ie w o f St at is tic s an d It s A pp lic at io n 20 14 .1 :2 33 -2 53 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by $ {i nd iv id ua lU se r. di sp la yN am e} o n 01 /0 9/ 14 . F or p er so na l u se o nl y. ST01CH11-Wainwright ARI 29 November 2013 14:44) <|cite_end|>, our results are specific to the Poisson case, which may possibly be stronger but do not apply generally.
The rest of the paper is organized as follows. Section \ref{sec:model} sets up the formalism for Poisson matrix completion. Section \ref{sec:method_bound} presents matrix recovery based on constrained maximum likelihood and establishes the upper and lower bounds for the recovery accuracy. Section \ref{sec:algorithm} presents the PMLSVT algorithm that solves the maximum likelihood problem approximately and demonstrates its performance on recovering solar flare images and bike sharing count data.
All proofs are delegated to the appendix.
The notation in this paper is standard. In particular, $\mathbb{R}_+$ denotes the set of positive real numbers and $\mathbb{Z}_+^m$ denotes an $m$-dimensional vector with positive integer entries; $\llbracket d \rrbracket =\{1,2,\ldots,d\}$; $(x)^+ = \max\{x,0\}$ for any scalar $x$; $[x]_j$ denotes the $j$th element of a vector $x$; $\mathbb{I}\{\varepsilon\}$ is the indicator function for an event $\varepsilon$; $|A|$ denotes the number of elements in a set $A$; $\mbox{diag}\{x\}$ denotes a diagonal matrix with entries of a vector $x$ being its diagonal entries; $\textbf{1}_{d_1 \times d_2}$ denotes a $d_1$-by-$d_2$ matrix of all ones. Let $\|x\|_1$, $\|x\|_2$ denote the $\ell_1$ and $\ell_2$ norms of a vector $x$.
Let the entries of a matrix $X$ be denoted by $X_{ij}$ or $[X]_{ij}$. For a matrix $X = [x_1, \ldots, x_n]$ with $x_j$ being the $j$th column, let $\myvec(X) = [x_1\transpose, \ldots, x_n\transpose]\transpose$ denote the vectorized matrix.
Let $\|X\|$ be the spectral norm, which is the largest absolute singular value, $\|X\|_{F} = (\sum_{i,j} X_{ij}^2)^{1/2}$ be the Frobenius norm, $\|X\|_*$ be the nuclear norm, which is the sum of the singular values, $\|X\|_{1, 1} = \sum_{i}\sum_j |X_{ij}|$ be the $\ell_{1}$ norm, and finally $\|X\|_{\infty} = \max_{ij}|X_{ij}|$ be the infinity norm of the matrix. Let $\mbox{rank}(X)$ denote the rank of a matrix $X$.
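For readers who want to check these norms numerically, they map directly onto NumPy calls; the example matrix below is arbitrary and only for illustration.
\begin{verbatim}
import numpy as np

X = np.arange(12, dtype=float).reshape(3, 4)   # arbitrary example matrix

spectral  = np.linalg.norm(X, 2)       # ||X||, largest singular value
frobenius = np.linalg.norm(X, 'fro')   # ||X||_F
nuclear   = np.linalg.norm(X, 'nuc')   # ||X||_*, sum of singular values
l11       = np.abs(X).sum()            # ||X||_{1,1}
linf      = np.abs(X).max()            # ||X||_inf, largest absolute entry
rank      = np.linalg.matrix_rank(X)
\end{verbatim}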
We say that a random variable $Z$ follows the Poisson distribution with parameter $\lambda$ (or $Z \sim \mbox{Poisson}(\lambda)$) if its probability mass function is $\mathbb{P}(Z=k) = e^{-\lambda}\lambda^k/(k!)$. Finally, let $\mathbb{E}[Z]$ denote the expectation of a random variable $Z$.
The only set of non-conventional notation that we use is the following. By a slight abuse of notation, we denote the Kullback-Leibler (KL) divergence between two Poisson distributions with parameters $p$ and $q$, $p,q \in \mathbb{R}_+$ as
\[
D(p\|q) \triangleq p\log(p/q) - (p-q),
\]
and denote the Hellinger distance between two Poisson distributions with parameters $p$ and $q$ as
\[
d_H^2(p, q) \triangleq 2-2\exp\left\{-\frac{1}{2}\left(\sqrt{p}-\sqrt{q}\right)^2\right\}.
\]
It should be understood that the KL distance and the Hellinger distance are defined between two distributions; here the arguments $p$ and $q$ are merely the parameters of the Poisson distributions, since we restrict our attention to the Poisson case.
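As a quick numerical sanity check of these two definitions, they can be evaluated directly (a sketch; the function names are ours):
\begin{verbatim}
import numpy as np

def poisson_kl(p, q):
    """KL divergence D(p||q) between Poisson(p) and Poisson(q)."""
    return p * np.log(p / q) - (p - q)

def poisson_hellinger_sq(p, q):
    """Squared Hellinger distance between Poisson(p) and Poisson(q)."""
    return 2.0 - 2.0 * np.exp(-0.5 * (np.sqrt(p) - np.sqrt(q)) ** 2)

print(poisson_kl(4.0, 2.0), poisson_hellinger_sq(4.0, 2.0))
\end{verbatim}
Both functions also work entrywise on NumPy arrays, so averaging their values over all entries of two matrices yields the matrix-level quantities defined next.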
Based on this, we also denote, by a slight abuse of notation, the average KL and Hellinger distances for two sets of Poisson distributions whose parameters are determined by entries of two matrices $P$, $Q \in \mathbb{R}_+^{d_1 \times d_2}$:
$$
D(P\|Q) \triangleq \frac{1}{d_1 d_2}\sum_{i,j}D(P_{ij}\|Q_{ij}),
$$
$$
d_H^2(P,Q) \triangleq \frac{1}{d_1 d_2}\sum_{i,j}d_H^2(P_{ij},Q_{ij}).
$$ <|paper_end|> | [
"<|reference_start|> A Simpler Approach to Matrix Completion: This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory. <|reference_end|>",
"<|reference_start|> Compressed sensing, sparse approximation, and low-rank matrix estimation: The importance of sparse signal structures has been recognized in a plethora of applications ranging from medical imaging to group disease testing to radar technology. It has been shown in practice that various signals of interest may be (approximately) sparsely modeled, and that sparse modeling is often beneficial, or even indispensable to signal recovery. Alongside an increase in applications, a rich theory of sparse and compressible signal recovery has recently been developed under the names compressed sensing (CS) and sparse approximation (SA). This revolutionary research has demonstrated that many signals can be recovered from severely undersampled measurements by taking advantage of their inherent low-dimensional structure. More recently, an offshoot of CS and SA has been a focus of research on other low-dimensional signal structures such as matrices of low rank. Low-rank matrix recovery (LRMR) is demonstrating a rapidly growing array of important applications such as quantum state tomography, triangulation from incomplete distance measurements, recommender systems (e.g., the Netflix problem), and system identification and control. In this dissertation, we examine CS, SA, and LRMR from a theoretical perspective. We consider a variety of different measurement and signal models, both random and deterministic, and mainly ask two questions. How many measurements are necessary? How large is the recovery error? We give theoretical lower bounds for both of these questions, including oracle and minimax lower bounds for the error. However, the main emphasis of the thesis is to demonstrate the efficacy of convex optimization---in particular l1 and nuclear-norm minimization based programs---in CS, SA, and LRMR. We derive upper bounds for the number of measurements required and the error derived by convex optimization, which in many cases match the lower bounds up to constant or logarithmic factors. The majority of these results do not require the restricted isometry property (RIP), a ubiquitous condition in the literature. <|reference_end|>",
"<|reference_start|> An accelerated gradient method for trace norm minimization: We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such formulation finds applications in many machine learning tasks including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for solving such class of problems converges as O(1/√k), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k2) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms. <|reference_end|>",
"<|reference_start|> Structured regularizers for high-dimensional problems: Statistical and computational issues: Regularization is a widely used technique throughout statistics, machine learning, and applied mathematics. Modern applications in science and engineering lead to massive and complex data sets, which motivate the use of more structured types of regularizers. This survey provides an overview of the use of structured regularization in high-dimensional statistics, including regularizers for group-structured and hierarchical sparsity, low-rank matrices, additive and multiplicative matrix decomposition, and high-dimensional nonparametric models. It includes various examples with motivating applications; it also covers key aspects of statistical theory and provides some discussion of efficient algorithms. 233 A nn ua l R ev ie w o f St at is tic s an d It s A pp lic at io n 20 14 .1 :2 33 -2 53 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by $ {i nd iv id ua lU se r. di sp la yN am e} o n 01 /0 9/ 14 . F or p er so na l u se o nl y. ST01CH11-Wainwright ARI 29 November 2013 14:44 <|reference_end|>"
] | [
9,
28,
32,
33
] | {"<|cite_1|>": "ss-1719809", "<|cite_2|>": "ss-1155461", "<|cite_3|>": "ss-1270103", "<|multi_cite_4_1|>": "ss-761778", "<|multi_cite_4_2|>": "ss-808398", "<|multi_cite_5_1|>": "arxiv-3881", "<|multi_cite_5_2|>": "arxiv-6122", "<|multi_cite_5_3|>": "arxiv-6669", "<|multi_cite_5_5|>": "ss-1071734", "<|multi_cite_5_6|>": "arxiv-9387", "<|multi_cite_5_7|>": "ss-1522476", "<|multi_cite_5_8|>": "ss-1422124", "<|multi_cite_5_9|>": "ss-736149", "<|multi_cite_6_1|>": "arxiv-7788", "<|multi_cite_6_2|>": "arxiv-6795", "<|multi_cite_6_3|>": "ss-1149010", "<|multi_cite_6_4|>": "arxiv-15937", "<|multi_cite_6_5|>": "ss-1211167", "<|multi_cite_6_6|>": "ss-1719811", "<|cite_7|>": "arxiv-36161", "<|multi_cite_8_1|>": "arxiv-68205", "<|multi_cite_8_2|>": "ss-1434497", "<|multi_cite_9_1|>": "arxiv-9736", "<|multi_cite_9_2|>": "arxiv-14906", "<|multi_cite_9_3|>": "ss-1074853", "<|multi_cite_10_1|>": "ss-1719812", "<|multi_cite_10_2|>": "arxiv-49161", "<|cite_11|>": "arxiv-9736", "<|multi_cite_12_1|>": "ss-2527715", "<|multi_cite_12_2|>": "ss-979499", "<|cite_13|>": "arxiv-36161", "<|cite_14|>": "ss-1522476", "<|multi_cite_15_1|>": "ss-2292195", "<|multi_cite_15_2|>": "ss-1719813", "<|multi_cite_15_3|>": "ss-772572", "<|cite_16|>": "ss-1382741", "<|cite_17|>": "arxiv-36161", "<|cite_18|>": "ss-1382741", "<|cite_19|>": "ss-1382741", "<|cite_20|>": "ss-1719813"} |
1904.11074 | <|paper_start|> Title: An Attentional Neural Network Architecture for Folk Song Classification
Abstract: An Attentional Neural Network Architecture for Folk Song Classification: In this paper we present an attentional neural network for folk song classification. We introduce the concept of musical motif embedding, and show how using melodic local context we are able to model monophonic folk song motifs using the skipgram version of the word2vec algorithm. We use the motif embeddings to represent folk songs from Germany, China, and Sweden, and classify them using an attentional neural network that is able to discern relevant motifs in a song. The results show how the network obtains state of the art accuracy in a completely unsupervised manner, and how motif embeddings produce high quality motif representations from folk songs. We conjecture on the advantages of this type of representation in large symbolic music corpora, and how it can be helpful in the musicological analysis of folk song collections from different cultures and geographical areas.
Introduction
\label{sec:introduction}
The increasing availability of digital music corpora and the growing interest in empirical approaches and methods in musicology have brought new challenges and opportunities for Music Information Retrieval (MIR). Large symbolic cross-cultural music corpora demand new tools that can extract relevant information in an automated manner. In this paper we are interested in researching the possibilities of using vector representations of musical patterns based on their context. Having a vector representation of a musical entity such as a motif will allow for the direct comparison of patterns and contexts using the cosine similarity measure. This approach aims to facilitate musicological analysis by using machine learning vector embedding techniques to extract similar patterns and their contexts from large collections of symbolic music databases.
Vector representations of words, or word embeddings, have had great success in Natural Language Processing (NLP) tasks <|cite_start|> (Reference: Learning representations by back-propagating errors: ) <|cite_end|>. Based on the idea that words that are semantically similar to each other are represented closer together in a continuous vector space, the word2vec algorithm has shown the ability to learn high-quality word embeddings from large text corpora <|cite_start|> (Reference: Efficient Estimation of Word Representations in Vector Space: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.) <|cite_end|> <|cite_start|> (Reference: Distributed Representations of Words and Phrases and their Compositionality: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.) <|cite_end|> <|cite_start|> (Reference: Word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method: The word2vec software of Tomas Mikolov and colleagues (this https URL ) has gained a lot of traction lately, and provides state-of-the-art word embeddings. The learning models behind the software are described in two research papers. We found the description of the models in these papers to be somewhat cryptic and hard to follow. While the motivations and presentation may be obvious to the neural-networks language-modeling crowd, we had to struggle quite a bit to figure out the rationale behind the equations.
This note is an attempt to explain equation (4) (negative sampling) in "Distributed Representations of Words and Phrases and their Compositionality" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.) <|cite_end|>. NLP methods have been adopted and adapted in MIR contexts <|cite_start|> (Reference: {Multiple Viewpoint Systems for Music Prediction: Abstract This paper examines the prediction and generation of music using a multiple viewpoint system, a collection of independent views of the musical surface each of which models a specific type of musical phenomena. Both the general style and a particular piece are modeled using dual short‐term and long‐term theories, and the model is created using machine learning techniques on a corpus of musical examples. The models are used for analysis and prediction, and we conjecture that highly predictive theories will also generate original, acceptable, works. Although the quality of the works generated is hard to quantify objectively, the predictive power of models can be measured by the notion of entropy, or unpredictability. Highly predictive theories will produce low‐entropy estimates of a musical language. The methods developed are applied to the Bach chorale melodies. Multiple‐viewpoint systems are learned from a sample of 95 chorales, estimates of entropy are produced, and a predictive theory is used to...) <|cite_end|>, <|cite_start|> (Reference: Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription: We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription.) <|cite_end|>, <|cite_start|> (Reference: Large-Scale User Modeling with Recurrent Neural Networks for Music Discovery on Multiple Time Scales: The amount of content on online music streaming platforms is immense, and most users only access a tiny fraction of this content. Recommender systems are the application of choice to open up the collection to these users. Collaborative filtering has the disadvantage that it relies on explicit ratings, which are often unavailable, and generally disregards the temporal nature of music consumption. On the other hand, item co-occurrence algorithms, such as the recently introduced word2vec-based recommenders, are typically left without an effective user representation. In this paper, we present a new approach to model users through recurrent neural networks by sequentially processing consumed items, represented by any type of embeddings and other context features. This way we obtain semantically rich user representations, which capture a user's musical taste over time. Our experimental analysis on large-scale user data shows that our model can be used to predict future songs a user will likely listen to, both in the short and long term.) <|cite_end|>. Word2vec was used to model musical contexts in western classical music works <|cite_start|> (Reference: Modeling Musical Context with Word2vec: We present a semantic vector space model for capturing complex polyphonic musical context. 
A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice.) <|cite_end|>, and for chord recommendations <|cite_start|> (Reference: Chordripple: Recommending chords to help novice composers go beyond the ordinary: Novice composers often find it difficult to go beyond common chord progressions. To make it easier for composers to experiment with radical chord choices, we built a creativity support tool, ChordRipple, which makes chord recommendations that aim to be both diverse and appropriate to the current context. Composers can use it to help select the next chord, or to replace sequences of chords in an internally consistent manner. To make such recommendations, we adapt a neural network model from natural language processing known as Word2Vec to the music domain. This model learns chord embeddings from a corpus of chord sequences, placing chords nearby when they are used in similar contexts. The learned embeddings support creative substitutions between chords, and also exhibit topological properties that correspond to musical structure. For example, the major and minor chords are both arranged in the latent space in shapes corresponding to the circle-of-fifths. Our structured observations with 14 music students show that the tool helped them explore a wider palette of chords, and to make "big jumps in just a few chords". It gave them "new ideas of ways to move forward in the piece", not just on a chord-to-chord level but also between phrases. Our controlled studies with 9 more music students show that more adventurous chords are adopted when composing with ChordRipple.) <|cite_end|>. In this paper we deal with a more limited data context, monophonic folk songs.
Our goal is to adopt the skip-gram version of the word2vec model for the distributional representation of motifs. Several melodic features such as contour, grouping, and small size motifs seem to be part of the so called ‘Statistical Music Universals’ <|cite_start|> (Reference: An ethnomusicologist contemplates universals in musical sound and musical culture: ) <|cite_end|>, <|cite_start|> (Reference: Statistical universals reveal the structures and functions of human music: Significance Which features of music are universal and which are culture-specific? Why? These questions are important for understanding why humans make music but have rarely been scientifically tested. We used musical classification techniques and statistical tools to analyze a global set of 304 music recordings, finding no absolute universals but dozens of statistical universals. These include not only commonly cited features related to pitch and rhythm but also domains such as social context and interrelationships between musical features. We speculate that group coordination is the common aspect unifying the cross-cultural structural regularities of human music, with implications for the study of music evolution. Music has been called “the universal language of mankind.” Although contemporary theories of music evolution often invoke various musical universals, the existence of such universals has been disputed for decades and has never been empirically demonstrated. Here we combine a music-classification scheme with statistical analyses, including phylogenetic comparative methods, to examine a well-sampled global set of 304 music recordings. Our analyses reveal no absolute universals but strong support for many statistical universals that are consistent across all nine geographic regions sampled. These universals include 18 musical features that are common individually as well as a network of 10 features that are commonly associated with one another. They span not only features related to pitch and rhythm that are often cited as putative universals but also rarely cited domains including performance style and social context. These cross-cultural structural regularities of human music may relate to roles in facilitating group coordination and cohesion, as exemplified by the universal tendency to sing, play percussion instruments, and dance to simple, repetitive music in groups. Our findings highlight the need for scientists studying music evolution to expand the range of musical cultures and musical features under consideration. The statistical universals we identified represent important candidates for future investigation.) <|cite_end|>. This sequential processing of melodic units may be related to the human capacity to group and comprehend motifs as units within a melodic context. Our hypothesis is that these units may relate to each other in a melody in similar ways as words do in sentences. If that is the case, the word2vec algorithm should be able to represent motifs from folk songs. The motif embeddings will be used as the input in a classification task using an attentional neural network architecture.
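As an illustration of how such motif embeddings could be trained, the sketch below treats each song as a "sentence" of motif tokens and fits a skip-gram word2vec model with Gensim; the motif tokens and hyperparameters are invented for the example (parameter names follow Gensim 4.x) and are not the settings used in the paper.
\begin{verbatim}
from gensim.models import Word2Vec

# Each song is a sequence of motif tokens; these toy tokens are illustrative.
songs = [
    ["m_2+2", "m_0-1", "m_-2+3", "m_0-1"],
    ["m_0-1", "m_2+2", "m_1+1", "m_-2+3"],
    # ... one list of motif tokens per folk song ...
]

model = Word2Vec(
    sentences=songs,
    vector_size=64,   # dimensionality of the motif embeddings
    window=3,         # size of the local melodic context
    sg=1,             # skip-gram rather than CBOW
    negative=5,       # negative sampling
    min_count=1,
)

vec = model.wv["m_2+2"]                   # embedding of one motif
similar = model.wv.most_similar("m_2+2")  # motifs used in similar contexts
\end{verbatim}
Cosine similarity between such motif vectors (as used by most_similar) is then the basis for the pattern and context comparisons discussed above.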
Deep learning methods for text classification such as convolutional neural networks <|cite_start|> (Reference: Convolutional Neural Networks for Sentence Classification: We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.) <|cite_end|>, and recurrent neural
networks based on long short-term memory (LSTM) <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|> have proven to be very effective. Encoder-decoder methods from the Machine Translation literature work as follows: an encoder neural network reads and encodes a sentence into a fixed-length vector, and a decoder outputs a translation of the sentence by decoding the initial representation. One of the shortcomings of this approach is the fact that sentences are encoded as a fixed-length vector, and in a corpus where sentences greatly vary in size, the performance of this method deteriorates quickly <|cite_start|> (Reference: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation: In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.) <|cite_end|>. To overcome this limitation, an attentional mechanism was presented that searches for a set of positions in an encoded sentence where the most relevant information is kept <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance.
The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|>. The relevant information is preserved in a context vector, so a target word based on this vector can be predicted. We use this so called 'attention' mechanism, to search for motifs that are more relevant than others in a song based on a melodic context.
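A minimal PyTorch sketch of this kind of attention pooling over motif embeddings is shown below; the dimensions and layer sizes are illustrative and do not reproduce the exact architecture described later in the paper.
\begin{verbatim}
import torch
import torch.nn as nn

class MotifAttentionClassifier(nn.Module):
    """Scores each motif embedding, turns the scores into attention weights,
    and classifies the attention-weighted song vector (the context vector)."""
    def __init__(self, emb_dim=64, hidden=128, num_classes=3):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(emb_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))
        self.classify = nn.Linear(emb_dim, num_classes)

    def forward(self, motifs):              # motifs: (batch, num_motifs, emb_dim)
        weights = torch.softmax(self.score(motifs), dim=1)  # attention weights
        song = (weights * motifs).sum(dim=1)                # weighted context vector
        return self.classify(song), weights.squeeze(-1)

model = MotifAttentionClassifier()
logits, attn = model(torch.randn(8, 20, 64))  # 8 songs, 20 motif embeddings each
\end{verbatim}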
The remainder of this paper is organized as follows: in section \ref{sec:w2vec} we introduce in formal terms the word2vec model, present how the data is encoded, and show based on ad-hoc queries the quality of the motif embeddings. In section \ref{sec:attention} we present the attentional neural network for classifying folk songs based on the motifs obtained with the word2vec algorithm. Section \ref{sec:experiment} details the data used and the experiments, presenting the results in section \ref{sec:results}. We conclude in section \ref{sec:conclusions} by highlighting the potential use of this type of representation and classification method in the analysis of large corpora from diverse cultures and geographical areas.
"<|reference_start|> Learning representations by back-propagating errors: <|reference_end|>",
"<|reference_start|> Modeling Musical Context with Word2vec: We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice. <|reference_end|>",
"<|reference_start|> An ethnomusicologist contemplates universals in musical sound and musical culture: <|reference_end|>",
"<|reference_start|> Convolutional Neural Networks for Sentence Classification: We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification. <|reference_end|>"
] | [
0,
7,
9,
11
] | {"<|cite_1|>": "ss-677950", "<|multi_cite_2_1|>": "arxiv-40388", "<|multi_cite_2_2|>": "arxiv-51600", "<|multi_cite_2_3|>": "ss-971220", "<|cite_3|>": "ss-1248426", "<|cite_4|>": "arxiv-33309", "<|cite_5|>": "arxiv-132524", "<|cite_6|>": "arxiv-127839", "<|cite_7|>": "ss-2331318", "<|cite_8|>": "ss-2279797", "<|cite_9|>": "ss-2331319", "<|cite_10|>": "arxiv-65210", "<|cite_11|>": "ss-710343", "<|cite_12|>": "arxiv-61763", "<|cite_13|>": "arxiv-65503"} |
2311.18801 | <|paper_start|> Title: Distributed Global Structure-from-Motion with a Deep Front-End
Abstract: Distributed Global Structure-from-Motion with a Deep Front-End: While initial approaches to Structure-from-Motion (SfM) revolved around both global and incremental methods, most recent applications rely on incremental systems to estimate camera poses due to their superior robustness. Though there has been tremendous progress in SfM `front-ends' powered by deep models learned from data, the state-of-the-art (incremental) SfM pipelines still rely on classical SIFT features, developed in 2004. In this work, we investigate whether leveraging the developments in feature extraction and matching helps global SfM perform on par with the SOTA incremental SfM approach (COLMAP). To do so, we design a modular SfM framework that allows us to easily combine developments in different stages of the SfM pipeline. Our experiments show that while developments in deep-learning based two-view correspondence estimation do translate to improvements in point density for scenes reconstructed with global SfM, none of them outperform SIFT when comparing with incremental SfM results on a range of datasets. Our SfM system is designed from the ground up to leverage distributed computation, enabling us to parallelize computation on multiple machines and scale to large scenes.
Introduction
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/teaser/20231115_154918__south-building-128__results__num_matched5__maxframelookahead10__760p__unified_loftr_view1.png}
\includegraphics[width=0.4\linewidth]{figures/teaser/20231115_154918__south-building-128__results__num_matched5__maxframelookahead10__760p__unified_loftr_view2.png}
\includegraphics[width=0.4\linewidth]{figures/south_building_128/P1180221.jpeg}
\caption{A sparse reconstruction of the UNC South Building using GTSfM with a deep LoFTR-based front-end, with an example image input. Multi-view stereo is not used.}
\label{fig:teaser-fig-south-bldg}
\end{figure}
Building accurate maps of the world is essential for spatial artificial intelligence (AI), with applications from autonomous robots to AR/VR. Structure-from-Motion (SfM) and multi-view stereo (MVS) have proven to be effective methods for creating maps with vision-only inputs. More broadly, SfM is a fundamental building block for 3d computer vision.
For certain types of scenes with simple to medium complexity, e.g. datasets with $\sim$100 object-facing images, high-fidelity world models can be easily extracted with tools such as COLMAP <|cite_start|> (Reference: Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.) <|cite_end|>.
These models and associated registered camera poses have enabled new breakthroughs in machine learning, through methods such as NeRF <|cite_start|> (Reference: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.) <|cite_end|>, its variants <|cite_start|> (Reference: Nerfies: Deformable Neural Radiance Fields: We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones. Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF. We observe that these NeRF-like deformation fields are prone to local minima, and propose a coarse-to-fine optimization method for coordinate-based models that allows for more robust optimization. By adapting principles from geometry processing and physical simulation to NeRF-like models, we propose an elastic regularization of the deformation field that further improves robustness. We show that our method can turn casually captured selfie photos/videos into deformable NeRF models that allow for photorealistic renderings of the subject from arbitrary viewpoints, which we dub "nerfies." We evaluate our method by collecting time-synchronized data using a rig with two mobile phones, yielding train/validation images of the same pose at different viewpoints. We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.) <|cite_end|>, Gaussian Splatting <|cite_start|> (Reference: {{3D: 近日,国内首家3D打印体验店落户北京,而英国互联网公司MakieLab宣布其第一款3D打印玩具Makies已经成功满足欧洲玩具安全标准,成为世界上第一个通过CE认证的3D打印玩具。由此在媒体上引起新一轮对3D打印技术的热议。可以预见,3D打印玩具将引发玩具制造业的深刻变革。) <|cite_end|>, accurate monocular depth predictions for humans <|cite_start|> (Reference: Learning the Depths of Moving People by Watching Frozen People: We present a method for predicting dense depth in scenarios where both a monocular camera and people in the scene are freely moving. Existing methods for recovering depth for dynamic, non-rigid objects from monocular video impose strong assumptions on the objects' motion and may only recover sparse depth. 
In this paper, we take a data-driven approach and learn human depth priors from a new source of data: thousands of Internet videos of people imitating mannequins, i.e., freezing in diverse, natural poses, while a hand-held camera tours the scene. Because people are stationary, training data can be generated using multi-view stereo reconstruction. At inference time, our method uses motion parallax cues from the static areas of the scenes to guide the depth prediction. We demonstrate our method on real-world sequences of complex human actions captured by a moving hand-held camera, show improvement over state-of-the-art monocular depth prediction methods, and show various 3D effects produced using our predicted depth.) <|cite_end|>, and more.
Incremental SfM is the dominant paradigm, as global SfM suffers from a lack of accuracy, largely due to difficulty in reasoning about outliers globally in a single pass.
However, to our knowledge, almost all global SfM systems today use classical frontends, reliant on feature matching with handcrafted descriptors, and the past decade has seen a flurry of work towards a \textit{deep front-end} for SfM.
In this work, we analyze whether leveraging deep front-ends leads to an improvement in global SfM over classical front-ends.
In the modern AI era, computation on clusters with 1000's of GPUs or TPUs has become common <|cite_start|> (Reference: Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM: Large language models have led to state-of-the-art accuracies across a range of tasks. However, training these models efficiently is challenging for two reasons: a) GPU memory capacity is limited, making it impossible to fit large models on even a multi-GPU server, and b) the number of compute operations required to train these models can result in unrealistically long training times. Consequently, new methods of model parallelism such as tensor and pipeline parallelism have been proposed. Unfortunately, naive usage of these methods leads to fundamental scaling issues at thousands of GPUs, e.g., due to expensive cross-node communication or devices spending significant time waiting on other devices to make progress. In this paper, we show how different types of parallelism methods (tensor, pipeline, and data parallelism) can be composed to scale to thousands of GPUs and models with trillions of parameters. We survey techniques for pipeline parallelism and propose a novel interleaved pipeline parallelism schedule that can improve throughput by 10+% with memory footprint comparable to existing approaches. We quantitatively study the trade-offs between tensor, pipeline, and data parallelism, and provide intuition as to how to configure distributed training of a large model. Our approach allows us to perform training iterations on a model with 1 trillion parameters at 502 petaFLOP/s on 3072 GPUs with achieved per-GPU throughput of 52% of theoretical peak. Our code is open sourced at https://github.com/nvidia/megatron-lm.) <|cite_end|> <|cite_start|> (Reference: Flamingo: a Visual Language Model for Few-Shot Learning: Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.) <|cite_end|> <|cite_start|> (Reference: PaLI: A Jointly-Scaled Multilingual Language-Image Model: Effective scaling and a flexible task interface enable large language models to excel at many tasks. 
We present PaLI (Pathways Language and Image model), a model that extends this approach to the joint modeling of language and vision. PaLI generates text based on visual and textual inputs, and with this interface performs many vision, language, and multimodal tasks, in many languages. To train PaLI, we make use of large pre-trained encoder-decoder language models and Vision Transformers (ViTs). This allows us to capitalize on their existing capabilities and leverage the substantial cost of training them. We find that joint scaling of the vision and language components is important. Since existing Transformers for language are much larger than their vision counterparts, we train a large, 4-billion parameter ViT (ViT-e) to quantify the benefits from even larger-capacity vision models. To train PaLI, we create a large multilingual mix of pretraining tasks, based on a new image-text training set containing 10B images and texts in over 100 languages. PaLI achieves state-of-the-art in multiple vision and language tasks (such as captioning, visual question-answering, scene-text understanding), while retaining a simple, modular, and scalable design.) <|cite_end|> <|cite_start|> (Reference: PaLM: Scaling Language Modeling with Pathways: Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.) <|cite_end|> <|cite_start|> (Reference: LLaMA: Open and Efficient Foundation Language Models: We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.) 
<|cite_end|> <|cite_start|> (Reference: Reproducible scaling laws for contrastive language-image learning: Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data \& models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip) <|cite_end|>, yet existing open-source SfM systems are not designed for and do not support multi-node distributed computation.
Moreover, state-of-the-art SfM techniques are incremental, which makes them slow on very large datasets (e.g., those with more than 500 images).
Incremental SfM begins by finding a good initial image pair and triangulating 3D points from the two views; it then adds one image at a time, registering each new image against the existing 3D points, performing bundle adjustment, and removing outliers, continuing until no further images can be registered.
This is certainly not the only possible approach; \emph{global} SfM methods have also been explored for some time <|cite_start|> (Reference: Combining Two-view Constraints for Motion Estimation: In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of N images, the global motion can be described by N-1 independent motion models. On the other hand, in a sequence there exist as many as /sub 2///sup N(N-1)/ pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fining a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.) <|cite_end|> <|cite_start|> (Reference: Lie-Algebraic Averaging for Globally Consistent Motion Estimation: While motion estimation has been extensively studied in the computer vision literature, the inherent information redundancy in an image sequence has not been well utilised. In particular as many as N(N-1)/2 pairwise relative motions can be estimated efficiently from a sequence of N images. This highly redundant set of observations can be efficiently averaged resulting in fast motion estimation algorithms that are globally consistent. In this paper we demonstrate this using the underlying Lie-group structure of motion representations. The Lie-algebras of the Special Orthogonal and Special Euclidean groups are used to define averages on the Lie-group which in turn gives statistically meaningful, efficient and accurate algorithms for fusing motion information. Using multiple constraints also controls the drift in the solution due to accumulating error. The performance of the method in estimating camera motion is demonstrated on image sequences.) <|cite_end|> <|cite_start|> (Reference: Robustness in Motion Averaging: ) <|cite_end|> <|cite_start|> (Reference: The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011: ) <|cite_end|> <|cite_start|> (Reference: 2011 IEEE International Conference on Computer Vision workshops (ICCV workshops 2011): ) <|cite_end|> <|cite_start|> (Reference: Global fusion of relative motions for robust, accurate and scalable structure from motion: Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. 
We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties.) <|cite_end|> <|cite_start|> (Reference: Optimizing the viewing graph for structure-from-motion: The viewing graph represents a set of views that are related by pairwise relative geometries. In the context of Structure-from-Motion (SfM), the viewing graph is the input to the incremental or global estimation pipeline. Much effort has been put towards developing robust algorithms to overcome potentially inaccurate relative geometries in the viewing graph during SfM. In this paper, we take a fundamentally different approach to SfM and instead focus on improving the quality of the viewing graph before applying SfM. Our main contribution is a novel optimization that improves the quality of the relative geometries in the viewing graph by enforcing loop consistency constraints with the epipolar point transfer. We show that this optimization greatly improves the accuracy of relative poses in the viewing graph and removes the need for filtering steps or robust algorithms typically used in global SfM methods. In addition, the optimized viewing graph can be used to efficiently calibrate cameras at scale. We combine our viewing graph optimization and focal length calibration into a global SfM pipeline that is more efficient than existing approaches. To our knowledge, ours is the first global SfM pipeline capable of handling uncalibrated image sets.) <|cite_end|>.
They avoid the need to do incremental pose estimation and refinement, but are known to suffer from poor accuracy.
Why is global SfM not sufficiently accurate?
One way to think about the SfM problem is to divide it into a front-end that generates image correspondences and a back-end that performs the geometric optimization.
Without noise in the `front-end' measurements, we find that global SfM is close to exact; a single false positive, however, can degrade performance.
A key problem is that reasoning about outliers is challenging. Techniques from sequential methods, such as filtering out measurements inconsistent with the current model at each step, are not directly applicable in a global setting.
In a global setting it is harder to reason independently about which measurements are unreliable. The most challenging aspect of SfM is therefore correspondence, and deciding when to trust correspondences; of all the places where deep learning can be injected into the geometric modeling of SfM, feature matching is the most apparent.
In this work, we aim to investigate whether injecting deep learning into the SfM front-end can rectify these accuracy shortcomings.
\noindent Our contributions are as follows:
\begin{itemize}
\item we provide an open-source \textit{global} SfM framework that is natively parallelizable and distributable on clusters, available as a Python package with no compilation required;
\item we are among the first to analyze different deep front-ends in the context of global SfM;
\item we demonstrate significant runtime decreases with respect to a state-of-the-art \textit{incremental} SfM pipeline.
\end{itemize}
Related Work
\subsection{Classical and Deep Front-Ends for SfM}
Traditional SfM systems compute keypoints, descriptors, matches, and verify correspondences <|cite_start|> (Reference: Photo tourism: exploring photo collections in 3D: We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.) <|cite_end|> <|cite_start|> (Reference: Modeling the World from Internet Photo Collections: ) <|cite_end|> <|cite_start|> (Reference: Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.) <|cite_end|>. Surprisingly, little of 20 years of research towards using machine learning for the SfM front-end has been incorporated upstream into open-source libraries today, from COLMAP <|cite_start|> (Reference: Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.) <|cite_end|>, to OpenSfM <|cite_start|> (Reference: OpenMVG: Open Multiple View Geometry: ) <|cite_end|> to OpenMVG <|cite_start|> (Reference: OpenMVG: Open Multiple View Geometry: ) <|cite_end|> to Theia <|cite_start|> (Reference: Theia: A Fast and Scalable Structure-from-Motion Library: In this paper, we have presented a comprehensive multi-view geometry library, Theia, that focuses on large-scale SfM. In addition to state-of-the-art scalable SfM pipelines, the library provides numerous tools that are useful for students, researchers, and industry experts in the field of multi-view geometry. Theia contains clean code that is well documented (with code comments and the website) and easy to extend. The modular design allows for users to easily implement and experiment with new algorithms within our current pipeline without having to implement a full end-to-end SfM pipeline themselves. 
Theia has already gathered a large number of diverse users from universities, startups, and industry and we hope to continue to gather users and active contributors from the open-source community.) <|cite_end|> (although some extensions are available <|cite_start|> (Reference: Pixel-Perfect Structure-from-Motion with Featuremetric Refinement: Finding local features that are repeatable across multiple views is a cornerstone of sparse 3D reconstruction. The classical image matching paradigm detects keypoints per-image once and for all, which can yield poorly-localized features and propagate large errors to the final geometry. In this paper, we refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views: we first adjust the initial keypoint locations prior to any geometric estimation, and subsequently refine points and camera poses as a post-processing. This refinement is robust to large detection noise and appearance changes, as it optimizes a featuremetric error based on dense features predicted by a neural network. This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors, challenging viewing conditions, and off-the-shelf deep features. Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale. Our code is publicly available at https://github.com/cvg/pixel-perfect-sfm as an add-on to the popular SfM software COLMAP.) <|cite_end|>). Early SfM systems were created before deep learning began to show promise in this domain, thus all components are hand-crafted. Furthermore, end-to-end methods for SfM aren’t accurate enough. Accordingly, we utilize local features and well-modeled geometry in the back-end. The literature on deep learning for correspondence estimation is vast; we refer the reader to surveys <|cite_start|> (Reference: Image Matching across Wide Baselines: From Paper to Practice: We introduce a comprehensive benchmark for local features and robust estimation algorithms, focusing on the downstream task -- the accuracy of the reconstructed camera pose -- as our primary metric. Our pipeline's modular structure allows easy integration, configuration, and combination of different methods and heuristics. This is demonstrated by embedding dozens of popular algorithms and evaluating them, from seminal works to the cutting edge of machine learning research. We show that with proper settings, classical solutions may still outperform the perceived state of the art. Besides establishing the actual state of the art, the conducted experiments reveal unexpected properties of Structure from Motion (SfM) pipelines that can help improve their performance, for both algorithmic and learned methods. Data and code are online https://github.com/vcg-uvic/image-matching-benchmark, providing an easy-to-use and flexible framework for the benchmarking of local features and robust estimation methods, both alongside and against top-performing methods. This work provides a basis for the Image Matching Challenge https://vision.uvic.ca/image-matching-challenge.) <|cite_end|>.
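To make this classical verification stage concrete, the short illustrative sketch below (not code taken from any of the systems cited above; it assumes NumPy and OpenCV are available, and all variable names are arbitrary) builds a synthetic two-view scene, corrupts a fraction of the putative matches, and keeps only the correspondences consistent with a robustly estimated essential matrix:
\begin{verbatim}
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Synthetic pinhole camera and non-planar 3D scene points.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(200, 3))

# Second camera: a 5-degree rotation about y and a sideways translation.
a = np.deg2rad(5.0)
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.5, 0.0, 0.0])

def project(points, R_cam, t_cam):
    cam = points @ R_cam.T + t_cam          # world -> camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]           # perspective division

pts1 = project(X, np.eye(3), np.zeros(3)) + rng.normal(0.0, 0.5, (200, 2))
pts2 = project(X, R, t) + rng.normal(0.0, 0.5, (200, 2))

# Simulate false-positive matches: 20% of the pairs become random points.
bad = rng.choice(200, 40, replace=False)
pts2[bad] = rng.uniform([0.0, 0.0], [640.0, 480.0], (40, 2))

# Robust two-view verification: essential matrix with RANSAC.
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
print(f"{int(inlier_mask.sum())}/200 putative matches verified as inliers")
\end{verbatim}
The surviving inliers are the correspondences that downstream pose estimation treats as trustworthy, whichever detector, descriptor, or matcher produced them.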
\subsection{Incremental SfM}
Incremental SfM traditionally uses point correspondences to iteratively establish camera poses and global structure. Pollefeys \emph{et al.} <|cite_start|> (Reference: Visual Modeling with a Hand-Held Camera: ) <|cite_end|> introduced some of the modern framework for incremental SfM, which was expanded to massive datasets in Bundler <|cite_start|> (Reference: Modeling the World from Internet Photo Collections: ) <|cite_end|>, VisualSfM <|cite_start|> (Reference: International Conference on 3D Vision, 3DV 2021, London, United Kingdom, December 1-3, 2021: ) <|cite_end|>, and COLMAP <|cite_start|> (Reference: Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.) <|cite_end|>. The Tanks and Temples benchmark indicates that COLMAP represents the state-of-the-art over both incremental and global SfM, but COLMAP can be slow in practice. COLMAP has been extended in many ways, such as the use of feature volumes to refine track measurements <|cite_start|> (Reference: Pixel-Perfect Structure-from-Motion with Featuremetric Refinement: Finding local features that are repeatable across multiple views is a cornerstone of sparse 3D reconstruction. The classical image matching paradigm detects keypoints per-image once and for all, which can yield poorly-localized features and propagate large errors to the final geometry. In this paper, we refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views: we first adjust the initial keypoint locations prior to any geometric estimation, and subsequently refine points and camera poses as a post-processing. This refinement is robust to large detection noise and appearance changes, as it optimizes a featuremetric error based on dense features predicted by a neural network. This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors, challenging viewing conditions, and off-the-shelf deep features. Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale. Our code is publicly available at https://github.com/cvg/pixel-perfect-sfm as an add-on to the popular SfM software COLMAP.) <|cite_end|>.
\begin{table}[]
\caption{A selection of SfM systems in the literature with open-source code and the capabilities they natively provide. Abbreviations: `Incr.' (Incremental), `Glob.' (Global), `Distr.' (Distributed).}
\vspace{-3mm}
\centering
\begin{adjustbox}{width=\linewidth}
\begingroup
\begin{tabular}{l cccccc}
\toprule
\rowcolorize \textsc{\textbf{SfM System}} & \textsc{\textbf{Incr.}} & \textsc{\textbf{Glob.}} & \textsc{\textbf{Classic.}} & \textsc{\textbf{Deep}} & \textsc{\textbf{Multi-}} & \textsc{\textbf{Multi-}} \\
\rowcolorize & & & \textsc{\textbf{Front}} & \textsc{\textbf{Front}} & \textsc{\textbf{Worker}} & \textsc{\textbf{Machine}} \\
\rowcolorize & & & \textsc{\textbf{End}} & \textsc{\textbf{End}} & \textsc{\textbf{(Parallel)}} & \textsc{\textbf{(Distr.)}} \\
\midrule
\textsc{Bundler} <|cite_start|> (Reference: Photo tourism: exploring photo collections in 3D: We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.) <|cite_end|> <|cite_start|> (Reference: Modeling the World from Internet Photo Collections: ) <|cite_end|> & \checkmark & & \checkmark & & \checkmark & \\
\rowcolorize \textsc{Theia} <|cite_start|> (Reference: Theia: A Fast and Scalable Structure-from-Motion Library: In this paper, we have presented a comprehensive multi-view geometry library, Theia, that focuses on large-scale SfM. In addition to state-of-the-art scalable SfM pipelines, the library provides numerous tools that are useful for students, researchers, and industry experts in the field of multi-view geometry. Theia contains clean code that is well documented (with code comments and the website) and easy to extend. The modular design allows for users to easily implement and experiment with new algorithms within our current pipeline without having to implement a full end-to-end SfM pipeline themselves. Theia has already gathered a large number of diverse users from universities, startups, and industry and we hope to continue to gather users and active contributors from the open-source community.) <|cite_end|> & & \checkmark & \checkmark & & \checkmark & \\
\textsc{OpenMVG} <|cite_start|> (Reference: OpenMVG: Open Multiple View Geometry: ) <|cite_end|> & \checkmark & \checkmark & \checkmark & & & \\
\rowcolorize \textsc{COLMAP} <|cite_start|> (Reference: Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.) <|cite_end|> & \checkmark & & & & \checkmark & \\
\textsc{OpenSfM} & \checkmark & & \checkmark & & & \\
\rowcolorize \textsc{DagSfM} & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark \\
\textsc{Pix-Perf. SfM} <|cite_start|> (Reference: Pixel-Perfect Structure-from-Motion with Featuremetric Refinement: Finding local features that are repeatable across multiple views is a cornerstone of sparse 3D reconstruction. The classical image matching paradigm detects keypoints per-image once and for all, which can yield poorly-localized features and propagate large errors to the final geometry. In this paper, we refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views: we first adjust the initial keypoint locations prior to any geometric estimation, and subsequently refine points and camera poses as a post-processing. This refinement is robust to large detection noise and appearance changes, as it optimizes a featuremetric error based on dense features predicted by a neural network. This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors, challenging viewing conditions, and off-the-shelf deep features. Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale. Our code is publicly available at https://github.com/cvg/pixel-perfect-sfm as an add-on to the popular SfM software COLMAP.) <|cite_end|> & \checkmark & & & \checkmark & \checkmark & \\
\rowcolorize \textsc{CReTA} <|cite_start|> (Reference: Correspondence Reweighted Translation Averaging: ) <|cite_end|> & & \checkmark & \checkmark & & & \\
\textsc{Zhang et al.} <|cite_start|> (Reference: Revisiting Rotation Averaging: Uncertainties and Robust Losses: In this paper, we revisit the rotation averaging problem applied in global Structure-from-Motion pipelines. We argue that the main problem of current methods is the minimized cost function that is only weakly connected with the input data via the estimated epipolar geometries.We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging. Such uncertainties are obtained for free by considering the Jacobians of two-view refinements. Moreover, we explore integrating a variant of the MAGSAC loss into the rotation averaging problem, instead of using classical robust losses employed in current frameworks. The proposed method leads to results superior to baselines, in terms of accuracy, on large-scale public benchmarks. The code is public. https://github.com/zhangganlin/GlobalSfMpy) <|cite_end|> & & \checkmark & \checkmark & \checkmark & & \\
\rowcolorize \textsc{GTSfM (Ours)} & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\bottomrule
\end{tabular}
\endgroup
\end{adjustbox}
\vspace{-5mm}
\end{table}
\subsection{Global SfM}
In Global SfM, also known as \emph{non-sequential} SfM or \emph{batch} SfM, one matches all possible image pairs, obtains a large number of two-view pose constraints, synchronizes all of these binary rotation measurements with some form of least squares, then estimates the camera positions, triangulates 3D points, and performs a single global bundle adjustment to refine points and poses. Both incremental and global SfM are subject to a feature matching stage with $O(n^2)$ complexity for $n$ images. Global SfM is not new -- Govindu introduced formulations for it two decades ago <|cite_start|> (Reference: Combining Two-view Constraints for Motion Estimation: In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of N images, the global motion can be described by N-1 independent motion models. On the other hand, in a sequence there exist as many as /sub 2///sup N(N-1)/ pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fining a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.) <|cite_end|> <|cite_start|> (Reference: Lie-Algebraic Averaging for Globally Consistent Motion Estimation: While motion estimation has been extensively studied in the computer vision literature, the inherent information redundancy in an image sequence has not been well utilised. In particular as many as N(N-1)/2 pairwise relative motions can be estimated efficiently from a sequence of N images. This highly redundant set of observations can be efficiently averaged resulting in fast motion estimation algorithms that are globally consistent. In this paper we demonstrate this using the underlying Lie-group structure of motion representations. The Lie-algebras of the Special Orthogonal and Special Euclidean groups are used to define averages on the Lie-group which in turn gives statistically meaningful, efficient and accurate algorithms for fusing motion information. Using multiple constraints also controls the drift in the solution due to accumulating error. The performance of the method in estimating camera motion is demonstrated on image sequences.) <|cite_end|> <|cite_start|> (Reference: Robustness in Motion Averaging: ) <|cite_end|>.
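The synchronization step can be illustrated with a deliberately simplified sketch (illustrative only: it uses 2D rotation angles rather than elements of $SO(3)$, ignores angle wrap-around, and assumes only NumPy): given noisy pairwise relative angles $\theta_{ij} \approx \theta_j - \theta_i$, the absolute angles are recovered up to a global gauge by linear least squares.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 6
theta_gt = rng.uniform(0.0, 2.0 * np.pi, n)        # ground-truth absolute angles
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

# Noisy pairwise measurements: theta_ij ~ theta_j - theta_i.
meas = {(i, j): theta_gt[j] - theta_gt[i] + rng.normal(0.0, 0.01)
        for (i, j) in pairs}

# Linear system A x = b; the last row pins x_0 = 0 to fix the gauge freedom.
A = np.zeros((len(pairs) + 1, n))
b = np.zeros(len(pairs) + 1)
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = -1.0, 1.0
    b[row] = meas[(i, j)]
A[-1, 0] = 1.0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
err = (x - x[0]) - (theta_gt - theta_gt[0])
print("max absolute error (rad):", np.abs(err).max())
\end{verbatim}
Real rotation averaging works on $SO(3)$ and must additionally be robust to outliers, but the structure of the problem, redundant pairwise constraints resolved jointly up to a gauge freedom, is the same.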
An advantage of Global SfM is its ability to exploit redundancy. For a viewgraph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with nodes as camera poses, and edges as two-view pose measurements, we can exploit all of the links in a graph to average out noise and distribute error evenly across the entire graph. For a dataset of $N$ images, there can be up to $\frac{N(N-1)}{2}$ pairs for which the relative motions can be estimated, potentially providing a highly redundant set of observations that can be efficiently averaged <|cite_start|> (Reference: Lie-Algebraic Averaging for Globally Consistent Motion Estimation: While motion estimation has been extensively studied in the computer vision literature, the inherent information redundancy in an image sequence has not been well utilised. In particular as many as N(N-1)/2 pairwise relative motions can be estimated efficiently from a sequence of N images. This highly redundant set of observations can be efficiently averaged resulting in fast motion estimation algorithms that are globally consistent. In this paper we demonstrate this using the underlying Lie-group structure of motion representations. The Lie-algebras of the Special Orthogonal and Special Euclidean groups are used to define averages on the Lie-group which in turn gives statistically meaningful, efficient and accurate algorithms for fusing motion information. Using multiple constraints also controls the drift in the solution due to accumulating error. The performance of the method in estimating camera motion is demonstrated on image sequences.) <|cite_end|>. However, the community has yet to find techniques to use this redundancy to an advantage in accuracy.
Most global SfM systems rely upon rotation averaging <|cite_start|> (Reference: Incremental Rotation Averaging Revisited and More: A New Rotation Averaging Benchmark: In order to further advance the accuracy and robustness of the incremental parameter estimation-based rotation averaging methods, in this paper, a new member of the Incremental Rotation Averaging (IRA) family is introduced, which is termed as IRAv4. As the most significant feature of the IRAv4, a task-specific connected dominating set is extracted to serve as a more reliable and accurate reference for rotation global alignment. In addition, to further address the limitations of the existing rotation averaging benchmark of relying on the slightly outdated Bundler camera calibration results as ground truths and focusing solely on rotation estimation accuracy, this paper presents a new COLMAP-based rotation averaging benchmark that incorporates a cross check between COLMAP and Bundler, and employ the accuracy of both rotation and downstream location estimation as evaluation metrics, which is desired to provide a more reliable and comprehensive evaluation tool for the rotation averaging research. Comprehensive comparisons between the proposed IRAv4 and other mainstream rotation averaging methods on this new benchmark demonstrate the effectiveness of our proposed approach.) <|cite_end|> <|cite_start|> (Reference: 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2016, Las Vegas, NV, USA, June 26 - July 1, 2016: ) <|cite_end|> and subsequent translation averaging <|cite_start|> (Reference: Stable Structure from Motion for Unordered Image Collections: ) <|cite_end|> <|cite_start|> (Reference: A global linear method for camera pose registration: We present a linear method for global camera pose registration from pair wise relative poses encoded in essential matrices. Our method minimizes an approximate geometric error to enforce the triangular relationship in camera triplets. This formulation does not suffer from the typical `unbalanced scale' problem in linear methods relying on pair wise translation direction constraints, i.e. an algebraic error, nor the system degeneracy from collinear motion. In the case of three cameras, our method provides a good linear approximation of the trifocal tensor. It can be directly scaled up to register multiple cameras. The results obtained are accurate for point triangulation and can serve as a good initialization for final bundle adjustment. We evaluate the algorithm performance with different types of data and demonstrate its effectiveness. Our system produces good accuracy, robustness, and outperforms some well-known systems on efficiency.) <|cite_end|> <|cite_start|> (Reference: Robust Global Translations with 1DSfM: ) <|cite_end|> <|cite_start|> (Reference: Linear Global Translation Estimation with Feature Tracks: This paper derives a novel linear position constraint for cameras seeing a common scene point, which leads to a direct linear method for global camera translation estimation. Unlike previous solutions, this method deals with collinear camera motion and weak image association at the same time. The final linear formulation does not involve the coordinates of scene points, which makes it efficient even for large scale data. We solve the linear equation based on $L_1$ norm, which makes our system more robust to outliers in essential matrices and feature correspondences. 
We experiment this method on both sequentially captured images and unordered Internet images. The experiments demonstrate its strength in robustness, accuracy, and efficiency.) <|cite_end|> <|cite_start|> (Reference: ShapeFit and ShapeKick for Robust, Scalable Structure from Motion: We introduce a new method for location recovery from pair-wise directions that leverages an efficient convex program that comes with exact recovery guarantees, even in the presence of adversarial outliers. When pairwise directions represent scaled relative positions between pairs of views (estimated for instance with epipolar geometry) our method can be used for location recovery, that is the determination of relative pose up to a single unknown scale. For this task, our method yields performance comparable to the state-of-the-art with an order of magnitude speed-up. Our proposed numerical framework is flexible in that it accommodates other approaches to location recovery and can be used to speed up other methods. These properties are demonstrated by extensively testing against state-of-the-art methods for location recovery on 13 large, irregular collections of images of real scenes in addition to simulated data with ground truth.) <|cite_end|> for accurate bundle adjustment initialization, although other formulations exists, such as discrete MRF-based methods <|cite_start|> (Reference: The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011: ) <|cite_end|> or hierarchical SfM methods that merge camera clusters formed via incremental SfM <|cite_start|> (Reference: Graph-Based Parallel Large Scale Structure from Motion: While Structure from Motion (SfM) achieves great success in 3D reconstruction, it still meets challenges on large scale scenes. In this work, large scale SfM is deemed as a graph problem, and we tackle it in a divide-and-conquer manner. Firstly, the images clustering algorithm divides images into clusters with strong connectivity, leading to robust local reconstructions. Then followed with an image expansion step, the connection and completeness of scenes are enhanced by expanding along with a maximum spanning tree. After local reconstructions, we construct a minimum spanning tree (MinST) to find accurate similarity transformations. Then the MinST is transformed into a Minimum Height Tree (MHT) to find a proper anchor node and is further utilized to prevent error accumulation. When evaluated on different kinds of datasets, our approach shows superiority over the state-of-the-art in accuracy and efficiency. Our algorithm is open-sourced at https://github.com/AIBluefisher/GraphSfM.) <|cite_end|>. For example, Google's city-scale Street View SfM <|cite_start|> (Reference: Street View Motion-from-Structure-from-Motion: We describe a structure-from-motion framework that handles "generalized" cameras, such as moving rolling-shutter cameras, and works at an unprecedented scale-billions of images covering millions of linear kilometers of roads-by exploiting a good relative pose prior along vehicle paths. We exhibit a planet-scale, appearance-augmented point cloud constructed with our framework and demonstrate its practical use in correcting the pose of a street-level image collection.) <|cite_end|> combined clusters of 1500 cameras. OpenMVG <|cite_start|> (Reference: OpenMVG: Open Multiple View Geometry: ) <|cite_end|> uses a least-squares rotation averaging technique. 
Other rotation averaging methods have since been proposed, such as Shonan <|cite_start|> (Reference: Shonan Rotation Averaging: Global Optimality by Surfing SO(p)n: ) <|cite_end|> or RCD <|cite_start|> (Reference: Rotation Coordinate Descent for Fast Globally Optimal Rotation Averaging: Under mild conditions on the noise level of the measurements, rotation averaging satisfies strong duality, which enables global solutions to be obtained via semidefinite programming (SDP) relaxation. However, generic solvers for SDP are rather slow in practice, even on rotation averaging instances of moderate size, thus developing specialised algorithms is vital. In this paper, we present a fast algorithm that achieves global optimality called rotation coordinate descent (RCD). Unlike block coordinate descent (BCD) which solves SDP by updating the semidefinite matrix in a row-by-row fashion, RCD directly maintains and updates all valid rotations throughout the iterations. This obviates the need to store a large dense semidefinite matrix. We mathematically prove the convergence of our algorithm and empirically show its superior efficiency over state-of-the-art global methods on a variety of problem configurations. Maintaining valid rotations also facilitates incorporating local optimisation routines for further speed-ups. Moreover, our algorithm is simple to implement; see supplementary material for a demonstration program.) <|cite_end|>. Concurrent work weights two-view rotation measurements by two-view bundle adjustment uncertainties <|cite_start|> (Reference: Revisiting Rotation Averaging: Uncertainties and Robust Losses: In this paper, we revisit the rotation averaging problem applied in global Structure-from-Motion pipelines. We argue that the main problem of current methods is the minimized cost function that is only weakly connected with the input data via the estimated epipolar geometries.We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging. Such uncertainties are obtained for free by considering the Jacobians of two-view refinements. Moreover, we explore integrating a variant of the MAGSAC loss into the rotation averaging problem, instead of using classical robust losses employed in current frameworks. The proposed method leads to results superior to baselines, in terms of accuracy, on large-scale public benchmarks. The code is public. https://github.com/zhangganlin/GlobalSfMpy) <|cite_end|> in rotation averaging, but we did not find this to yield accuracy gains in our experiments. CReTA <|cite_start|> (Reference: Correspondence Reweighted Translation Averaging: ) <|cite_end|> accounts for outliers in translation averaging by iteratively reweighting point correspondences and thus translation measurements.
\subsection{Outlier Rejection for SfM} \label{ss:outlier_rejection}
Outlier rejection is critical to successful SfM. Not only is it very difficult to triangulate points from inexact camera positions, but bundle adjustment with Gaussian noise models cannot deal with outliers. While incremental systems can reject outliers at each registration stage via reprojection error, global SfM does not enjoy this privilege, and its performance relies heavily upon low outlier rates. Global SfM systems instead employ a number of carefully crafted outlier rejection techniques that eliminate noisy measurements before they can influence the joint optimization.
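The fragility of a purely Gaussian noise model can be seen in a small line-fitting sketch (illustrative only, assuming SciPy is available; variable names are arbitrary): a handful of gross outliers visibly biases the squared-loss fit, while a robust Huber loss largely ignores them, which is the same effect that makes unfiltered outliers so damaging to a global bundle adjustment.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)
y[::10] += 5.0                          # a few gross outliers

def residuals(params):
    slope, intercept = params
    return slope * x + intercept - y

fit_gauss = least_squares(residuals, x0=[0.0, 0.0])                 # squared loss
fit_huber = least_squares(residuals, x0=[0.0, 0.0],
                          loss="huber", f_scale=0.1)                # robust loss

print("squared-loss estimate:", fit_gauss.x)   # biased away from (2, 1)
print("huber-loss estimate  :", fit_huber.x)   # close to (2, 1)
\end{verbatim}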
\noindent \textbf{Relative Pose Consistency} The most common outlier rejection approaches rely upon cycle consistency <|cite_start|> (Reference: Proceedings : 2001 ICRA : IEEE International Conference on Robotics and Automation : May 21〜26, 2001, COEX, Seoul, Korea: ) <|cite_end|> of relative measurements within triplets. For example, the deviation from identity of composed relative rotations in a cycle strongly suggests the magnitude of relative rotation errors <|cite_start|> (Reference: 2011 IEEE International Conference on Computer Vision workshops (ICCV workshops 2011): ) <|cite_end|> <|cite_start|> (Reference: Global fusion of relative motions for robust, accurate and scalable structure from motion: Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties.) <|cite_end|> <|cite_start|> (Reference: OpenMVG: Open Multiple View Geometry: ) <|cite_end|>, as used in OpenMVG and Theia. By accumulating these deviations over a large set of loops one can obtain the statistics needed to infer the set of false positives.
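A minimal sketch of this triplet check (an illustration of the idea rather than code from OpenMVG or Theia; it assumes only NumPy) composes the relative rotations around a loop and measures the angular deviation from the identity:
\begin{verbatim}
import numpy as np

def rot_y(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rotation_angle_deg(R):
    # Angle of a rotation matrix: arccos((trace(R) - 1) / 2).
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Ground-truth world-from-camera rotations for cameras i, j, k.
wRi, wRj, wRk = rot_y(0.0), rot_y(0.2), rot_y(0.5)

# Relative rotations as a front-end would report them, e.g. jRi = wRj^T wRi.
jRi, kRj, iRk = wRj.T @ wRi, wRk.T @ wRj, wRi.T @ wRk

cycle = iRk @ kRj @ jRi        # equals the identity if measurements are consistent
print("clean triplet deviation:     %.2f deg" % rotation_angle_deg(cycle))

# A single corrupted relative rotation shows up as a large loop error.
kRj_corrupted = rot_y(np.deg2rad(10.0)) @ kRj
cycle_bad = iRk @ kRj_corrupted @ jRi
print("corrupted triplet deviation: %.2f deg" % rotation_angle_deg(cycle_bad))
\end{verbatim}
A clean triplet yields a deviation near zero, whereas a single corrupted edge produces a loop error on the order of the injected perturbation.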
Others generate random spanning trees from relative poses in a RANSAC-like scheme <|cite_start|> (Reference: Stable Structure from Motion for Unordered Image Collections: ) <|cite_end|> for estimating global camera poses, such that ${}^w\mathbf{R}_i = {}^w\mathbf{R}_j \big({}^j \mathbf{R}_i\big)$ roughly holds for as many relative rotations as possible.
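Rewriting this propagation as code gives the following minimal sketch (illustrative only, assuming NumPy; the chain-shaped tree and rotations about a single axis are arbitrary choices): starting from the root camera at the identity, each world rotation is obtained by composing its parent's world rotation with the connecting relative rotation.
\begin{verbatim}
import numpy as np

def rot_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Relative rotations jRi on the edges of a spanning tree rooted at camera 0.
# The key (j, i) means "rotation of frame i expressed in frame j".
tree_edges = {(0, 1): rot_z(0.10),
              (1, 2): rot_z(0.20),
              (2, 3): rot_z(-0.05)}

# Propagate wRi = wRj (jRi) outward from the root; camera 0 defines the world frame.
world_R = {0: np.eye(3)}
for (j, i), jRi in tree_edges.items():      # parents appear before children here
    world_R[i] = world_R[j] @ jRi

for cam, R in sorted(world_R.items()):
    angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    print(f"camera {cam}: {angle:+.1f} deg about z")
\end{verbatim}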
Theia <|cite_start|> (Reference: Theia: A Fast and Scalable Structure-from-Motion Library: In this paper, we have presented a comprehensive multi-view geometry library, Theia, that focuses on large-scale SfM. In addition to state-of-the-art scalable SfM pipelines, the library provides numerous tools that are useful for students, researchers, and industry experts in the field of multi-view geometry. Theia contains clean code that is well documented (with code comments and the website) and easy to extend. The modular design allows for users to easily implement and experiment with new algorithms within our current pipeline without having to implement a full end-to-end SfM pipeline themselves. Theia has already gathered a large number of diverse users from universities, startups, and industry and we hope to continue to gather users and active contributors from the open-source community.) <|cite_end|> also uses filtering based on global-to-relative agreement heuristics. 1dSfM <|cite_start|> (Reference: Robust Global Translations with 1DSfM: ) <|cite_end|>
rejects outlier translation directions based on consistent ordering on 1d projections. Instead of using hand-crafted heuristics, Phillips <|cite_start|> (Reference: All Graphs Lead to Rome: Learning Geometric and Cycle-Consistent Representations with Graph Convolutional Networks: Image feature matching is a fundamental part of many geometric computer vision applications, and using multiple images can improve performance. In this work, we formulate multi-image matching as a graph embedding problem then use a Graph Convolutional Network to learn an appropriate embedding function for aligning image features. We use cycle consistency to train our network in an unsupervised fashion, since ground truth correspondence is difficult or expensive to aquire. In addition, geometric consistency losses can be added at training time, even if the information is not available in the test set, unlike previous approaches that optimize cycle consistency directly. To the best of our knowledge, no other works have used learning for multi-image feature matching. Our experiments show that our method is competitive with other optimization based approaches.) <|cite_end|> uses graph neural networks (GNNs) to introduce learning-based cycle consistency on the keypoint match graph, instead of relative pose graph.
\noindent \textbf{Learned Matchability Classifiers} Other methods such as SALVe <|cite_start|> (Reference: SALVe: Semantic Alignment Verification for Floorplan Reconstruction from Sparse Panoramas: We propose a new system for automatic 2D floorplan reconstruction that is enabled by SALVe, our novel pairwise learned alignment verifier. The inputs to our system are sparsely located 360$^\circ$ panoramas, whose semantic features (windows, doors, and openings) are inferred and used to hypothesize pairwise room adjacency or overlap. SALVe initializes a pose graph, which is subsequently optimized using GTSAM. Once the room poses are computed, room layouts are inferred using HorizonNet, and the floorplan is constructed by stitching the most confident layout boundaries. We validate our system qualitatively and quantitatively as well as through ablation studies, showing that it outperforms state-of-the-art SfM systems in completeness by over 200%, without sacrificing accuracy. Our results point to the significance of our work: poses of 81% of panoramas are localized in the first 2 connected components (CCs), and 89% in the first 3 CCs. Code and models are publicly available at https://github.com/zillow/salve.) <|cite_end|> and Doppelgangers <|cite_start|> (Reference: Doppelgangers: Learning to Disambiguate Images of Similar Structures: We consider the visual disambiguation task of determining whether a pair of visually similar images depict the same or distinct 3D surfaces (e.g., the same or opposite sides of a symmetric building). Illusory image matches, where two images observe distinct but visually similar 3D surfaces, can be challenging for humans to differentiate, and can also lead 3D reconstruction algorithms to produce erroneous results. We propose a learning-based approach to visual disambiguation, formulating it as a binary classification task on image pairs. To that end, we introduce a new dataset for this problem, Doppelgangers, which includes image pairs of similar structures with ground truth labels. We also design a network architecture that takes the spatial distribution of local keypoints and matches as input, allowing for better reasoning about both local and global cues. Our evaluation shows that our method can distinguish illusory matches in difficult cases, and can be integrated into SfM pipelines to produce correct, disambiguated 3D reconstructions. See our project page for our code, datasets, and more results: http://doppelgangers-3d.github.io/.) <|cite_end|> align views and predict a matchability confidence for each putative image pair with a ResNet CNN. However, we find the recall of the Doppelgangers pretrained classifier to be too low for use in practice.
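As a rough sketch of this family of classifiers (a simplified stand-in, not the actual SALVe or Doppelgangers architecture or input representation; it assumes PyTorch and a recent torchvision), a standard ResNet backbone can be adapted to emit a single matchability confidence for a stacked image-pair representation:
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PairMatchabilityClassifier(nn.Module):
    """Scores how likely a putative image pair is a true match."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Accept two RGB images stacked along the channel dimension (6 channels).
        backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)       # (B, 6, H, W)
        return torch.sigmoid(self.backbone(x))     # confidence in (0, 1)

model = PairMatchabilityClassifier()
img_a, img_b = torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224)
print(model(img_a, img_b).shape)                   # torch.Size([2, 1])
\end{verbatim}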
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/system-overview-small.pdf}
\vspace{-3mm}
\caption{GTSfM system overview. A `bottleneck' indicates that all parallelized tasks from the previous module must be completed before proceeding to the next.}
\label{fig:system-overview}
\end{figure} <|paper_end|> | [
"<|reference_start|> Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation. <|reference_end|>",
"<|reference_start|> A global linear method for camera pose registration: We present a linear method for global camera pose registration from pair wise relative poses encoded in essential matrices. Our method minimizes an approximate geometric error to enforce the triangular relationship in camera triplets. This formulation does not suffer from the typical `unbalanced scale' problem in linear methods relying on pair wise translation direction constraints, i.e. an algebraic error, nor the system degeneracy from collinear motion. In the case of three cameras, our method provides a good linear approximation of the trifocal tensor. It can be directly scaled up to register multiple cameras. The results obtained are accurate for point triangulation and can serve as a good initialization for final bundle adjustment. We evaluate the algorithm performance with different types of data and demonstrate its effectiveness. Our system produces good accuracy, robustness, and outperforms some well-known systems on efficiency. <|reference_end|>",
"<|reference_start|> OpenMVG: Open Multiple View Geometry: <|reference_end|>",
"<|reference_start|> Rotation Coordinate Descent for Fast Globally Optimal Rotation Averaging: Under mild conditions on the noise level of the measurements, rotation averaging satisfies strong duality, which enables global solutions to be obtained via semidefinite programming (SDP) relaxation. However, generic solvers for SDP are rather slow in practice, even on rotation averaging instances of moderate size, thus developing specialised algorithms is vital. In this paper, we present a fast algorithm that achieves global optimality called rotation coordinate descent (RCD). Unlike block coordinate descent (BCD) which solves SDP by updating the semidefinite matrix in a row-by-row fashion, RCD directly maintains and updates all valid rotations throughout the iterations. This obviates the need to store a large dense semidefinite matrix. We mathematically prove the convergence of our algorithm and empirically show its superior efficiency over state-of-the-art global methods on a variety of problem configurations. Maintaining valid rotations also facilitates incorporating local optimisation routines for further speed-ups. Moreover, our algorithm is simple to implement; see supplementary material for a demonstration program. <|reference_end|>"
] | [
21,
47,
54,
56
] | {"<|cite_2|>": "ss-783932", "<|cite_3|>": "arxiv-254624", "<|multi_cite_4_1|>": "arxiv-305883", "<|cite_5|>": "ss-1280839", "<|cite_6|>": "arxiv-201400", "<|multi_cite_7_1|>": "arxiv-333360", "<|multi_cite_7_2|>": "arxiv-416418", "<|multi_cite_7_3|>": "arxiv-446222", "<|multi_cite_7_4|>": "arxiv-411079", "<|multi_cite_7_5|>": "arxiv-484616", "<|multi_cite_7_6|>": "arxiv-469427", "<|multi_cite_8_1|>": "ss-755680", "<|multi_cite_8_2|>": "ss-933766", "<|multi_cite_8_3|>": "ss-1333370", "<|multi_cite_8_4|>": "ss-1527904", "<|multi_cite_8_5|>": "ss-1126932", "<|multi_cite_8_6|>": "ss-1283190", "<|multi_cite_8_7|>": "ss-1263786", "<|multi_cite_11_1|>": "ss-1087183", "<|multi_cite_11_2|>": "ss-775454", "<|multi_cite_11_3|>": "ss-783932", "<|cite_12|>": "ss-783932", "<|cite_13|>": "ss-1277845", "<|cite_14|>": "ss-1277845", "<|cite_15|>": "ss-1125928", "<|cite_16|>": "arxiv-361663", "<|multi_cite_17_1|>": "arxiv-251785", "<|cite_18|>": "ss-755681", "<|cite_19|>": "ss-775454", "<|cite_20|>": "ss-922253", "<|cite_21|>": "ss-783932", "<|cite_23|>": "arxiv-361663", "<|multi_cite_24_1|>": "ss-1087183", "<|multi_cite_24_2|>": "ss-775454", "<|cite_25|>": "ss-1125928", "<|cite_26|>": "ss-1277845", "<|cite_27|>": "ss-783932", "<|cite_29|>": "arxiv-361663", "<|cite_30|>": "ss-2501932", "<|cite_31|>": "arxiv-487434", "<|multi_cite_32_1|>": "ss-755680", "<|multi_cite_32_2|>": "ss-933766", "<|multi_cite_32_3|>": "ss-1333370", "<|cite_33|>": "ss-933766", "<|multi_cite_34_1|>": "ss-1172435", "<|multi_cite_34_2|>": "ss-1521520", "<|multi_cite_35_1|>": "ss-1287133", "<|multi_cite_35_2|>": "ss-1563229", "<|multi_cite_35_3|>": "ss-1685165", "<|multi_cite_35_4|>": "arxiv-74156", "<|multi_cite_35_5|>": "arxiv-103524", "<|cite_36|>": "ss-1527904", "<|cite_37|>": "arxiv-240613", "<|cite_38|>": "ss-2020581", "<|cite_39|>": "ss-1277845", "<|cite_41|>": "ss-774225", "<|cite_42|>": "arxiv-327503", "<|cite_43|>": "arxiv-487434", "<|cite_44|>": "ss-2501932", "<|cite_45|>": "ss-1110998", "<|multi_cite_46_1|>": "ss-1126932", "<|multi_cite_46_2|>": "ss-1283190", "<|multi_cite_46_3|>": "ss-1277845", "<|cite_48|>": "ss-1287133", "<|cite_49|>": "ss-1125928", "<|cite_50|>": "ss-1685165", "<|cite_51|>": "ss-1881411", "<|cite_52|>": "arxiv-632930", "<|cite_53|>": "arxiv-536803"} |
2103.08031 | <|paper_start|> Title: BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks
Abstract: BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks: Deploying convolutional neural networks (CNNs) for embedded applications presents many challenges in balancing resource-efficiency and task-related accuracy. These two aspects have been well-researched in the field of CNN compression. In real-world applications, a third important aspect comes into play, namely the robustness of the CNN. In this paper, we thoroughly study the robustness of uncompressed, distilled, pruned and binarized neural networks against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool, LocalSearch and GenAttack). These new insights facilitate defensive training schemes or reactive filtering methods, where the attack is detected and the input is discarded and/or cleaned. Experimental results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks (BNNs) such as XNOR-Net and ABC-Net, trained on CIFAR-10 and ImageNet datasets. We present evaluation methods to simplify the comparison between CNNs under different attack schemes using loss/accuracy levels, stress-strain graphs, box-plots and class activation mapping (CAM). Our analysis reveals susceptible behavior of uncompressed and pruned CNNs against all kinds of attacks. The distilled models exhibit their strength against all white box attacks with an exception of C&W. Furthermore, binary neural networks exhibit resilient behavior compared to their baselines and other compressed variants.
Introduction
Neural network compression is an extensively studied topic for reducing the computational complexity <|cite_start|> (Reference: XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks: We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.) <|cite_end|> <|cite_start|> (Reference: Towards Accurate Binary Convolutional Neural Network: We introduce a novel scheme to train binary convolutional neural networks (CNNs) -- CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.) <|cite_end|> <|cite_start|> (Reference: Binarized Neural Networks: We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time and when computing the parameters' gradient at train-time. We conduct two sets of experiments, each based on a different framework, namely Torch7 and Theano, where we train BNNs on MNIST, CIFAR-10 and SVHN, and achieve nearly state-of-the-art results. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which might lead to a great increase in power-efficiency. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available.) 
<|cite_end|>, the memory demand <|cite_start|> (Reference: Optimal {Brain} {Damage}: We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.) <|cite_end|> <|cite_start|> (Reference: AMC: AutoML for Model Compression and Acceleration on Mobile Devices: Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and rule-based policies that require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverage reinforcement learning to provide the model compression policy. This learning-based compression policy outperforms conventional rule-based compression policy by having higher compression ratio, better preserving the accuracy and freeing human labor. Under 4x FLOPs reduction, we achieved 2.7% better accuracy than the handcrafted model compression policy for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved 1.81x speedup of measured inference latency on an Android phone and 1.43x speedup on the Titan XP GPU, with only 0.1% loss of ImageNet Top-1 accuracy.) <|cite_end|> <|cite_start|> (Reference: Learning both Weights and Connections for Efficient Neural
Network: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.) <|cite_end|> and/or the energy consumption <|cite_start|> (Reference: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): We describe a tracking algorithm to address the interactions among objects, and to track them individually and confidently via a static camera. It is achieved by constructing an invariant bipartite graph to model the dynamics of the tracking process, of which the nodes are classified into objects and profiles. The best match of the graph corresponds to an optimal assignment for resolving the identities of the detected objects. Since objects may enter/exit the scene indefinitely, or when interactions occur/conclude they could form/leave a group, the number of nodes in the graph changes dynamically. Therefore it is critical to maintain an invariant property to assure that the numbers of nodes of both types are kept the same so that the matching problem is manageable. In addition, several important issues are also discussed, including reducing the effect of shadows, extracting objects’ shapes, and adapting large abrupt changes in the scene background. Finally, experimental results are provided to illustrate the efficiency of our approach.) <|cite_end|> of deep neural networks (DNN) deployed on embedded systems.
These aspects widen the potential for DNN applications in real-world scenarios. Particularly in the field of robotics and autonomous driving, increasingly deeper and larger convolutional neural networks (CNNs) are deployed on resource-constrained hardware platforms, enabling computer vision-based applications, such as pedestrian detection or free-space detection.
Systems in autonomous vehicles are safety-critical and maintain zero tolerance for potential threats to functional safety.
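As a concrete, deliberately minimal illustration of one of the compression techniques discussed above, the sketch below shows XNOR-Net-style weight binarization, in which a real-valued filter is approximated by a sign pattern and a single scaling factor. The filter shape and the NumPy implementation are our own illustrative choices, not the training pipeline of the cited works.
\begin{verbatim}
import numpy as np

def binarize_weights(w):
    """Approximate real-valued weights w by alpha * sign(w) (XNOR-Net style).

    alpha = mean(|w|) minimizes the L2 error between w and alpha * b
    over binary tensors b in {-1, +1}^n."""
    alpha = np.mean(np.abs(w))           # single scaling factor per filter
    b = np.where(w >= 0, 1.0, -1.0)      # binary weights in {-1, +1}
    return alpha, b

# Example: binarize one 3x3 convolution filter
w = np.random.randn(3, 3)
alpha, b = binarize_weights(w)
w_hat = alpha * b                        # approximation used at inference time
\end{verbatim}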
Attacking (breaking) neural networks can be done by injecting small perturbations to their inputs, referred to as adversarial attacks <|cite_start|> (Reference: Intriguing properties of neural networks: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.) <|cite_end|>. Under the assumption of varying degrees of information on the CNN and the accessibility of its internal parameters, several \emph{black-box} (GenAttack <|cite_start|> (Reference: GenAttack: Practical Black-box Attacks with Gradient-Free Optimization: Deep neural networks are vulnerable to adversarial examples, even in the black-box setting, where the attacker is restricted solely to query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or performing gradient estimation. We introduce GenAttack, a gradient-free optimization technique that uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches. Against MNIST and CIFAR-10 models, GenAttack required roughly 2,126 and 2,568 times fewer queries respectively, than ZOO, the prior state-of-the-art black-box attack. In order to scale up the attack to large-scale high-dimensional ImageNet models, we perform a series of optimizations that further improve the query efficiency of our attack leading to 237 times fewer queries against the Inception-v3 model than ZOO. Furthermore, we show that GenAttack can successfully attack some state-of-the-art ImageNet defenses, including ensemble adversarial training and non-differentiable or randomized input transformations. Our results suggest that evolutionary algorithms open up a promising area of research into effective black-box attacks.) 
<|cite_end|>, LocalSearch <|cite_start|> (Reference: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022: ) <|cite_end|>) and \emph{white-box} (FGSM <|cite_start|> (Reference: Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.) <|cite_end|>, DeepFool <|cite_start|> (Reference: DeepFool: a simple and accurate method to fool deep neural networks: State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.) <|cite_end|> and Carlini \& Wagner <|cite_start|> (Reference: 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017: ) <|cite_end|>) attacks are potential threats. Understanding these threats helps to develop pro-active <|cite_start|> (Reference: Adversarially Robust Distillation: Knowledge distillation is effective for producing small, high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks. This paper studies how adversarial robustness transfers from teacher to student during knowledge distillation. We find that a large amount of robustness may be inherited by the student even when distilled on only clean images. Second, we introduce Adversarially Robust Distillation (ARD) for distilling robustness onto student networks. In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student. In our experiments, we find that ARD student models decisively outperform adversarially trained networks of identical architecture in terms of robust accuracy, surpassing state-of-the-art methods on standard robustness benchmarks. Finally, we adapt recent fast adversarial training methods to ARD for accelerated robust distillation.) 
<|cite_end|> and re-active <|cite_start|> (Reference: Extending Defensive Distillation: Machine learning is vulnerable to adversarial examples: inputs carefully modified to force misclassification. Designing defenses against such inputs remains largely an open problem. In this work, we revisit defensive distillation---which is one of the mechanisms proposed to mitigate adversarial examples---to address its limitations. We view our results not only as an effective way of addressing some of the recently discovered attacks but also as reinforcing the importance of improved training techniques.) <|cite_end|> methods to defend against adversarial examples and thereby improve CNN robustness.
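To make the simplest of these attacks concrete, the following sketch implements single-step FGSM on a toy logistic-regression "network". The model, loss, and $\epsilon$ are illustrative placeholders, and the other attacks listed above (PGD, C\&W, DeepFool, LocalSearch, GenAttack) use different, typically iterative or gradient-free, procedures.
\begin{verbatim}
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(d loss / d x).

    x: input vector, y: label in {0, 1}, (w, b): logistic-regression
    parameters, eps: L_inf attack budget."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))     # predicted probability of class 1
    grad_x = (p - y) * w             # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy usage
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1
x, y = rng.normal(size=5), 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.05)
\end{verbatim}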
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth]{img/BreakingBED.pdf}
\caption{Experimental setup of \emph{BreakingBED} for breaking binary (\textcolor{green}{C}) and efficient (\textcolor{green}{A}) and (\textcolor{green}{B}) DNNs attacked with white-box (FGSM, PGD and C\&W) and black-box (LocalSearch and GenAttack) adversarial attacks. Evaluated by using loss/accuracy levels, stress-strain graphs, box-plots and class activation mapping (CAM).}
\label{fig:breaking_bed}
\vspace{-3.5ex}
\end{figure*}
Recent works investigated the mitigation of such threats through robust training of neural networks <|cite_start|> (Reference: Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels: Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training. Nonetheless, recent studies on the memorization effects of deep neural networks show that they would first memorize training data of clean labels and then those of noisy labels. Therefore in this paper, we propose a new deep learning paradigm called Co-teaching for combating with noisy labels. Namely, we train two deep neural networks simultaneously, and let them teach each other given every mini-batch: firstly, each network feeds forward all data and selects some data of possibly clean labels; secondly, two networks communicate with each other what data in this mini-batch should be used for training; finally, each network back propagates the data selected by its peer network and updates itself. Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching is much superior to the state-of-the-art methods in the robustness of trained deep models.) <|cite_end|> and robust neural architecture search (NAS) techniques <|cite_start|> (Reference: When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks: Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks. Since then, extensive efforts have been devoted to enhancing the robustness of deep networks via specialized learning algorithms and loss functions. In this work, we take an architectural perspective and investigate the patterns of network architectures that are resilient to adversarial attacks. To obtain the large number of networks needed for this study, we adopt one-shot neural architecture search, training a large network for once and then finetuning the sub-networks sampled therefrom. The sampled architectures together with the accuracies they achieve provide a rich basis for our study. Our "robust architecture Odyssey" reveals several valuable observations: 1) densely connected patterns result in improved robustness; 2) under computational budget, adding convolution operations to direct connection edge is effective; 3) flow of solution procedure (FSP) matrix is a good indicator of network robustness. Based on these observations, we discover a family of robust architectures (RobNets). On various datasets, including CIFAR, SVHN, Tiny-ImageNet, and ImageNet, RobNets exhibit superior robustness performance to other widely used architectures. Notably, RobNets substantially improve the robust accuracy (~5% absolute gains) under both white-box and black-box attacks, even with fewer parameter numbers. Code is available at https://github.com/gmh14/RobNets.) <|cite_end|>. In <|cite_start|> (Reference: Defensive Quantization: When Efficiency Meets Robustness: Neural network quantization is becoming an industry standard to efficiently deploy deep learning models on hardware platforms, such as CPU, GPU, TPU, and FPGAs. However, we observe that the conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise people's awareness about the security of the quantized models, and we designed a novel quantization methodology to jointly optimize the efficiency and robustness of deep learning models. 
We first conduct an empirical study to show that vanilla quantization suffers more from adversarial attacks. We observe that the inferior robustness comes from the error amplification effect, where the quantization operation further enlarges the distance caused by amplified noise. Then we propose a novel Defensive Quantization (DQ) method by controlling the Lipschitz constant of the network during quantization, such that the magnitude of the adversarial noise remains non-expansive during inference. Extensive experiments on CIFAR-10 and SVHN datasets demonstrate that our new quantization method can defend neural networks against adversarial examples, and even achieves superior robustness than their full-precision counterparts while maintaining the same hardware efficiency as vanilla quantization approaches. As a by-product, DQ can also improve the accuracy of quantized models without adversarial attack.) <|cite_end|>, the authors compress neural networks through robust quantization, lowering the computational complexity while maintaining good performance against potential attacks. Further investigations on the robustness of binary neural networks (BNNs) were carried out in <|cite_start|> (Reference: Attacking Binarized Neural Networks: Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to $\pm$1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.) <|cite_end|>, where BNNs were attacked with white-box (FGSM <|cite_start|> (Reference: Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.) 
<|cite_end|> and C\&W <|cite_start|> (Reference: 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017: ) <|cite_end|>) and black-box <|cite_start|> (Reference: Practical Black-Box Attacks against Machine Learning: Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.) <|cite_end|> techniques. The robustness of BNNs was concluded, albeit on basic adverserially trained networks from <|cite_start|> (Reference: Practical Black-Box Attacks against Machine Learning: Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. 
They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.) <|cite_end|> and a small set of attacks.
In order to get a deeper understanding of the effectiveness of adversarial attacks (Sec.~\ref{sec:adversarial_attacks}) applied to binary and efficient DNNs (Sec.~\ref{sec:compression}), we perform an extensive set of
robustness evaluation experiments. In detail, we expose vanilla full-precision, distilled, pruned and binary DNNs to a variety of adversarial attacks in Sec.~\ref{sec:experiments}. <|paper_end|> | [
"<|reference_start|> IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022: <|reference_end|>",
"<|reference_start|> DeepFool: a simple and accurate method to fool deep neural networks: State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust. <|reference_end|>",
"<|reference_start|> Attacking Binarized Neural Networks: Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to $\\pm$1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients. <|reference_end|>",
"<|reference_start|> Practical Black-Box Attacks against Machine Learning: Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder. <|reference_end|>"
] | [
9,
11,
18,
22
] | {"<|multi_cite_1_1|>": "arxiv-94105", "<|multi_cite_1_2|>": "arxiv-141763", "<|multi_cite_1_3|>": "arxiv-91785", "<|multi_cite_2_1|>": "ss-1117443", "<|multi_cite_2_2|>": "arxiv-147830", "<|multi_cite_2_3|>": "ss-700765", "<|cite_3|>": "ss-680332", "<|cite_4|>": "arxiv-54384", "<|cite_5|>": "arxiv-160340", "<|cite_6|>": "ss-885131", "<|cite_7|>": "arxiv-70555", "<|cite_8|>": "arxiv-87203", "<|cite_9|>": "ss-754153", "<|cite_10|>": "arxiv-205544", "<|cite_11|>": "arxiv-124128", "<|cite_12|>": "arxiv-155481", "<|cite_13|>": "arxiv-236047", "<|cite_14|>": "arxiv-200452", "<|cite_15|>": "arxiv-138896", "<|cite_16|>": "arxiv-70555", "<|cite_17|>": "ss-754153", "<|cite_18|>": "arxiv-91829", "<|cite_19|>": "arxiv-91829"} |
2107.01559 | <|paper_start|> Title: Smoothed Differential Privacy
Abstract: Smoothed Differential Privacy: Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis. Often, DP classifies most mechanisms without additive noise as non-private (Dwork et al., 2014). Thus, additive noises are added to improve privacy (to achieve DP). However, in many real-world applications, adding additive noise is undesirable (Bagdasaryan et al., 2019) and sometimes prohibited (Liu et al., 2020). In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis (Spielman & Teng, May 2004). Our notion, smoothed DP, can effectively measure the privacy leakage of mechanisms without additive noises under realistic settings. We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP. In addition, we prove several desirable properties of smoothed DP, including composition, robustness to post-processing, and distribution reduction. Based on those properties, we propose an efficient algorithm to calculate the privacy parameters for smoothed DP. Experimentally, we verify that, according to smoothed DP, the discrete sampling mechanisms are private in real-world elections, and some discrete neural networks can be private without adding any additive noise. We believe that these results contribute to the theoretical foundation of realistic privacy measures beyond worst-case analysis.
Introduction
\emph{Differential privacy (DP)}, a \emph{de facto} measure of privacy in academia and industry, is often achieved by adding external noises to published
information <|cite_start|> (Reference: The Algorithmic Foundations of Differential Privacy.: The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.) <|cite_end|>.
However, external noises are procedurally or practically unacceptable in many real-world applications. For example, presidential elections often require a deterministic rule to be used <|cite_start|> (Reference: How Private Are Commonly-Used Voting Rules?: Differential privacy has been widely applied to provide privacy guarantees by adding random noise to the function output. However, it inevitably fails in many high-stakes voting scenarios, where voting rules are required to be deterministic. In this work, we present the first framework for answering the question: "How private are commonly-used voting rules?" Our answers are two-fold. First, we show that deterministic voting rules provide sufficient privacy in the sense of distributional differential privacy (DDP). We show that assuming the adversarial observer has uncertainty about individual votes, even publishing the histogram of votes achieves good DDP. Second, we introduce the notion of exact privacy to compare the privacy preserved in various commonly-studied voting rules, and obtain dichotomy theorems of exact DDP within a large subset of voting rules called generalized scoring rules.) <|cite_end|>.
In such cases, though, {\em internal noise} often exists, as shown in the following example.
\begin{example}[\bf Election with Internal Noise]
\label{ex:internal-noise}
Due to COVID-19, many voters in the 2020 US presidential election chose to submit their votes by mail. Unfortunately, it was estimated that the US postal service might have lost up to 300,000 mail-in ballots ($0.2\%$ of all votes). For the purpose of illustration, suppose the lost votes are distributed uniformly at random among all ballots, and the histogram of the received votes is announced after election day.
\end{example}
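One natural way to read the mechanism in this example, called the sampling-histogram mechanism below, is that each submitted ballot independently survives with probability $1-p$ (here $p = 0.2\%$) and only the histogram of the surviving ballots is published. The following sketch simulates this reading; the electorate size, the vote shares, and the independence assumption are illustrative and are not taken from any real election.
\begin{verbatim}
import numpy as np

def sampling_histogram(votes, loss_rate=0.002, rng=None):
    """Publish the histogram of the votes that survive after each ballot is
    independently lost with probability loss_rate (the internal noise of
    Example 1)."""
    rng = rng or np.random.default_rng()
    kept = votes[rng.random(len(votes)) >= loss_rate]
    candidates, counts = np.unique(kept, return_counts=True)
    return dict(zip(candidates.tolist(), counts.tolist()))

# Illustrative electorate: one million voters with a 52% / 48% split
votes = np.array(["D"] * 520_000 + ["R"] * 480_000)
print(sampling_histogram(votes, rng=np.random.default_rng(2020)))
\end{verbatim}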
A critical public concern about elections is: should publishing the histogram of votes be viewed as a significant threat to privacy?
Notice that with internal noise such as in Example~\ref{ex:internal-noise}, the histogram mechanism can be viewed as a randomized mechanism, formally called {\em sampling-histogram} in this paper. The same question can be asked about publishing the winner under a deterministic voting rule, where the input is sampled from the real votes.
If we apply DP to answer this question, we would then conclude that publishing the histogram {\em can} pose a significant threat to privacy, as the privacy parameter $\delta \approx 1$ (See Section~\ref{sec:DP} for the formal definition) in the following worst-case scenario: all but one of the votes are for the Republican candidate, and there is a single vote for the Democratic candidate.
Notice that $\delta \approx 1$ is much worse than the threshold for private mechanisms, $\delta = o(1/n)$ ($n$ is the number of agents). Moreover, using the adversary's utility as the measure of privacy loss (see Section~\ref{sec:DP} for the formal definition), in this (worst) case, the privacy loss is large ($\approx 1$, see Section~\ref{sec:DP} for the formal definition of utility), which means the adversary can make accurate predictions about every agent's preferences.
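To see where $\delta \approx 1$ comes from in this worst case, note that under the reading of Example~\ref{ex:internal-noise} in which each ballot is independently lost with probability $0.2\%$, the published histogram shows a nonzero Democratic count unless that single Democratic ballot happens to be lost. The event that the Democratic count is at least one therefore has probability about $0.998$ on the worst-case database and probability $0$ on the neighboring database with no Democratic vote, which forces $\delta \ge 0.998$ for every $\epsilon$. The snippet below is only a sanity check of this back-of-the-envelope argument, not the paper's formal calculation.
\begin{verbatim}
import numpy as np

def prob_output_reveals_democrat(loss_rate=0.002, trials=100_000, seed=0):
    """Worst case of the sampling-histogram mechanism: all votes Republican
    except a single Democratic one.  The histogram reveals that vote unless
    the ballot is lost, so Pr[D count >= 1] = 1 - loss_rate under x, while it
    is 0 under the neighboring database x' with no Democratic vote."""
    rng = np.random.default_rng(seed)
    ballot_survives = rng.random(trials) >= loss_rate
    return ballot_survives.mean()

print(prob_output_reveals_democrat())   # ~0.998, hence delta close to 1
\end{verbatim}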
However, DP does not tell us whether publishing the histogram poses a significant threat to privacy {\em in general}. In particular, the worst-case scenario described in the previous paragraph has never happened, even approximately, in the modern history of US presidential elections.
In fact, no candidate has received more than 70\% of the votes since 1920, when the Progressive Party dissolved.
It turns out that the privacy loss (measured by the adversary's utility) may not be as high as DP suggests. To see this, suppose $0.2\%$ of the votes had been randomly lost in the presidential election of each year since 1920 (in light of Example~\ref{ex:internal-noise}). Figure~\ref{fig:motivating} presents the adversary's utility of predicting the unknown votes.
It can be seen that the adversary has very limited utility (on the order of $10^{-32}$ to $10^{-8}$, which is always smaller than the threshold of private mechanisms $n^{-1}$), meaning that the adversary cannot learn much from the published histogram of votes.
We also observe an interesting decreasing trend in $\delta$, which implies that the elections become more private in more recent years. This is primarily due to the growth of the voting population, which is exponentially related to the adversary's utility (Theorem~\ref{theo:main}). In Appendix~\ref{app:motivate}, we show that the elections are still private when only $0.01\%$ of the votes are lost.
\begin{figure}[H]
\centering
\includegraphics[width = 0.44 \textwidth]{fig_motivating_1.pdf}
\caption{The privacy loss in US presidential elections. The lower $\delta$ is, the more private the election is.}
\label{fig:motivating}
\end{figure}
As another example, for \emph{deep neural networks} (DNNs), even adding slight noise can lead to dramatic decreases in the prediction accuracy, especially when predicting underrepresented classes <|cite_start|> (Reference: Differential Privacy Has Disparate Impact on Model Accuracy: Differential privacy (DP) is a popular mechanism for training machine learning models with bounded leakage about the presence of specific points in the training data. The cost of differential privacy is a reduction in the model's accuracy. We demonstrate that in the neural networks trained using differentially private stochastic gradient descent (DP-SGD), this cost is not borne equally: accuracy of DP models drops much more for the underrepresented classes and subgroups. For example, a gender classification model trained using DP-SGD exhibits much lower accuracy for black faces than for white faces. Critically, this gap is bigger in the DP model than in the non-DP model, i.e., if the original model is unfair, the unfairness becomes worse once DP is applied. We demonstrate this effect for a variety of tasks and models, including sentiment analysis of text and image classification. We then explain why DP training mechanisms such as gradient clipping and noise addition have disproportionate effect on the underrepresented and more complex subgroups, resulting in a disparate reduction of model accuracy.) <|cite_end|>.
Internal noise is also widespread in machine learning, for example, in the standard practice of cross-validation as well as in training (e.g., batch sampling when training DNNs).
As shown in these examples, the worst-case privacy guarantee according to DP might be too loose to serve as a practical measure for evaluating and comparing mechanisms without external noise in real-world applications. This motivates us to ask the following question.
\vspace{0.4em}
\centerline
{\textbf{\noindent\em
How can we measure privacy for mechanisms }}
\centerline{\textbf{\noindent\em without external noise under realistic models?}}
The choice of model is critical and highly challenging. A model based on worst-case analysis (such as in DP) provides upper bounds on privacy loss, but as we have seen in Figure~\ref{fig:motivating}, in some situations, such upper bounds are too loose to be informative in practice. This is similar to the runtime analysis of an algorithm---an algorithm with exponential worst-case runtime, such as the simplex algorithm, can be faster than some algorithms with polynomial runtime in practice.
Average-case analysis is a natural choice of the model, but since ``{\em all models are wrong}'' <|cite_start|> (Reference: Robustness in the Strategy of Scientific Model Building.: ) <|cite_end|>, any privacy measure designed for a certain distribution over data may not work well for other distributions. Moreover, ideally the new measure should satisfy the desirable properties that played a central role behind the success of DP, including \emph{composability} and \emph{robustness to post-processing}. These properties make it easier for the mechanism designers to figure out the privacy level of mechanisms. Unfortunately, we are not aware of a measure based on average-case analysis that has these properties.
We believe that the {\em smoothed analysis} <|cite_start|> (Reference: The smoothed analysis of algorithms: Theorists have long been challenged by the existence of remarkable algorithms that are known by scientists and engineers to work well in practice, but whose theoretical analyses are negative or unconvincing. The root of the problem is that algorithms are usually analyzed in one of two ways: by worst-case or average-case analysis. The former can improperly suggest that an algorithm will perform poorly, while the latter can be unconvincing because the random inputs it considers may fail to resemble those encountered in practice.) <|cite_end|> provides a promising framework for addressing this question. Smoothed analysis is an extension and combination of worst-case and average-case analyses that inherits advantages of both. It measures the expected performance of algorithms under slight random perturbations of worst-case inputs. Compared with average-case analysis, the assumptions of smoothed analysis are much more natural. Compared with worst-case analysis, smoothed analysis better describes the real-world performance of algorithms. For example, it successfully explained why the simplex algorithm is faster than some polynomial-time algorithms in practice.
\textbf{Our Contributions.} The main merit of this paper is a new notion of privacy for mechanisms without external noise, called \emph{smoothed differential privacy} ({\em smoothed DP} for short), which applies smoothed analysis to the privacy parameter $\delta(x)$ (Definition~\ref{def:deltax}) as a function of the database $x$.
In our model, the ``ground truth'' distribution of each agent comes from a set of distributions $\Pi$ over data points,
on top of which nature adds random noise.
Formally, the ``smoothed'' version of $\delta(x)$ is defined as
\begin{equation}\nonumber
\dsdp \triangleq \max\nolimits_{\vpi}\big(\,\E_{x\sim \vpi} \left[\delta(x)\right]\big),
\end{equation}
where $x\sim\vpi = \left(\pi_1,\cdots,\pi_\nagent\right)\in\Pi^{\nagent}$ means that for every $1\le i\le n$, the $i$-th entry in the database independently follows the distribution $\pi_i$.
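Read computationally, this definition is a maximum of expectations, which suggests the following naive Monte Carlo estimate. Here $\delta(x)$ is assumed to be available as a callable and each profile in $\Pi^n$ is given explicitly, whereas the paper derives closed-form bounds and an efficient algorithm for computing the smoothed parameters; the sketch is only meant to make the max-expectation structure explicit.
\begin{verbatim}
import numpy as np

def smoothed_delta(delta_of_x, profiles, n_samples=1000, rng=None):
    """Naive Monte Carlo estimate of max over profiles of E_{x ~ pi}[delta(x)].

    delta_of_x : callable mapping a database (1-D array) to delta(x)
    profiles   : list of profiles; each profile is a list of n per-agent
                 distributions given as (support, probabilities) pairs"""
    rng = rng or np.random.default_rng()
    worst = 0.0
    for profile in profiles:
        total = 0.0
        for _ in range(n_samples):
            x = np.array([rng.choice(support, p=probs)
                          for support, probs in profile])
            total += delta_of_x(x)
        worst = max(worst, total / n_samples)
    return worst
\end{verbatim}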
{\bf Theoretically,} we prove that smoothed DP satisfies many
desirable properties, including two properties also satisfied by the standard DP: \emph{robustness to post-processing} (Proposition~\ref{prop:post-pro}) and \emph{composability} (Proposition~\ref{prop:composition}). In addition, we prove two unique properties for smoothed DP, called \emph{pre-processing} (Proposition~\ref{prop:pre-process}) and \emph{distribution reduction} (Proposition~\ref{prop:CH}),
which make it easier for the mechanism designer to figure out the privacy level when the set of distributions $\Pi$ is hard to estimate. Using smoothed DP, we find that many discrete mechanisms without external noise (and with small internal noise) are significantly more private than what DP predicts. For example, the {\SH} mechanism in Example~\ref{ex:internal-noise} has an exponentially small $\dsdp$ (Theorem~\ref{theo:main}), which implies that the mechanism protects voters' privacy in elections, in accordance with the observation on US election data in Figure~\ref{fig:motivating}.
We also note that the {\SH} mechanism is widely used in machine learning (e.g., the SGD in quantized DNNs). In comparison, smoothed DP implies a privacy level similar to that of the standard DP for many continuous mechanisms. We prove that smoothed DP and the standard DP have the same privacy level for the widely-used sampling-average mechanism when the inputs are continuous (Theorem~\ref{theo:continuous}).
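For contrast with the discrete {\SH} mechanism above, the sketch below spells out a natural reading of the sampling-average mechanism on continuous inputs: subsample the entries and release their mean. The subsampling rate, the data, and the size of the change to one record are illustrative assumptions; the point is only that a single entry can still shift the released average, which is in line with the theorem cited above.
\begin{verbatim}
import numpy as np

def sampling_average(data, keep_rate=0.998, rng=None):
    """Release the mean of a random subsample of the (continuous) entries."""
    rng = rng or np.random.default_rng()
    kept = data[rng.random(len(data)) < keep_rate]
    return float(np.mean(kept))

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)
neighbor = data.copy()
neighbor[0] += 100.0   # change a single individual's record
# Same internal randomness, neighboring databases: the released averages differ
print(sampling_average(data, rng=np.random.default_rng(1)))
print(sampling_average(neighbor, rng=np.random.default_rng(1)))
\end{verbatim}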
{\bf Experimentally}, we numerically evaluate the privacy level of the {\SH} mechanism using US presidential election data. Simulation results show an exponentially small $\dsdp$, which is in accordance with our Theorem~\ref{theo:main}. Our second experiment shows that a one-step \emph{stochastic gradient descent} (SGD) in quantized DNNs <|cite_start|> (Reference: Scalable Methods for 8-bit Training of Neural Networks: Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range Batch-Normalization (BN) which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset.) <|cite_end|> <|cite_start|> (Reference: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.) <|cite_end|> also has an exponentially small $\dsdp$. This result implies that SGD with gradient quantization can already be private in practice, without adding any external noise. 
In comparison, the standard DP notion always requires extra (external) noise to make the network private at the cost of a significant reduction in accuracy.
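The quantity evaluated in the second experiment, one SGD step with a quantized gradient, can be sketched as follows: sample a mini-batch (the internal noise), compute the gradient, round it onto a low-precision grid, and apply the update. The bit width, batch size, and least-squares model below are illustrative assumptions and do not reproduce the quantized DNNs of the cited works.
\begin{verbatim}
import numpy as np

def quantize(g, num_bits=8, scale=1.0):
    """Round values onto a uniform grid with 2^(num_bits-1)-1 levels per sign."""
    levels = 2 ** (num_bits - 1) - 1
    step = scale / levels
    return np.clip(np.round(g / step), -levels, levels) * step

def one_step_quantized_sgd(w, X, y, lr=0.1, batch_size=32, rng=None):
    """One SGD step for least squares with a quantized mini-batch gradient."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(X), size=batch_size, replace=False)  # batch sampling
    Xb, yb = X[idx], y[idx]
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch_size            # mean-squared-error gradient
    return w - lr * quantize(grad)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)
w = one_step_quantized_sgd(np.zeros(5), X, y, rng=rng)
\end{verbatim}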
\textbf{Related Work and Discussions.} There is a large body of literature in the theory and practice of DP and its extensions.
We believe that the smoothed DP introduced in this paper is novel. To the best of our knowledge, it appears to be most similar to {\em distributional DP} <|cite_start|> (Reference: Coupled-Worlds Privacy: Exploiting Adversarial Uncertainty in Statistical
Data Privacy: We propose a new framework for defining privacy in statistical databases that enables reasoning about and exploiting adversarial uncertainty about the data. Roughly, our framework requires indistinguishability of the real world in which a mechanism is computed over the real dataset, and an ideal world in which a simulator outputs some function of a "scrubbed" version of the dataset (e.g., one in which an individual user's data is removed). In each world, the underlying dataset is drawn from the same distribution in some class (specified as part of the definition), which models the adversary's uncertainty about the dataset. We argue that our framework provides meaningful guarantees in a broader range of settings as compared to previous efforts to model privacy in the presence of adversarial uncertainty. We also show that several natural, "noiseless" mechanisms satisfy our definitional framework under realistic assumptions on the distribution of the underlying data.) <|cite_end|>, which measures privacy given the adversary's (probabilistic) belief about the data he/she is interested in. Our smoothed DP is different both conceptually and technically. Conceptually, the adversary in distributional DP only has probabilistic information about the database and is much weaker than the smoothed DP adversary, who has complete information. Technically, distributional DP considers randomness in both the mechanism and the adversary's belief about the database, while smoothed DP only considers the randomness in the dataset. We prove that smoothed DP serves as an upper bound to distributional DP (Proposition~\ref{prop:relationship}).
\Renyi DP <|cite_start|> (Reference: R\'enyi Differential Privacy: In this study, 267 gifted science students attending a gifted high school, who have experienced the research process through programs such as R&E, were surveyed about whether they comply with research ethics, whether they are familiar with research ethics, and whether they would like to learn about research ethics and, if so, what content they would like to learn. The results show that 45.31% of the gifted students reported having experience with fabrication, falsification, plagiarism, improper authorship, or condoning research misconduct, whereas roughly 90% of the students judged each of the presented items, except self-plagiarism, to be problematic behavior for a scientist. That is, although they know about research misconduct, nearly half of the students have experience violating research ethics. However, only 28.83% of the students responded that they wanted to learn about research ethics, and these students wanted more practical and concrete guidance on compliance. Therefore, for gifted science students to conduct research responsibly, gifted-education researchers should design research ethics education that reflects these students' voices.) <|cite_end|>, Gaussian DP <|cite_start|> (Reference: Gaussian Differential Privacy: Differential privacy has seen remarkable success as a rigorous and practical formalization of data privacy in the past decade. This privacy definition and its divergence based relaxations, however, have several acknowledged weaknesses, either in handling composition of private algorithms or in analyzing important primitives like privacy amplification by subsampling. Inspired by the hypothesis testing formulation of privacy, this paper proposes a new relaxation, which we term `$f$-differential privacy' ($f$-DP). This notion of privacy has a number of appealing properties and, in particular, avoids difficulties associated with divergence based relaxations. First, $f$-DP preserves the hypothesis testing interpretation. In addition, $f$-DP allows for lossless reasoning about composition in an algebraic fashion. Moreover, we provide a powerful technique to import existing results proven for original DP to $f$-DP and, as an application, obtain a simple subsampling theorem for $f$-DP. In addition to the above findings, we introduce a canonical single-parameter family of privacy notions within the $f$-DP class that is referred to as `Gaussian differential privacy' (GDP), defined based on testing two shifted Gaussians. GDP is focal among the $f$-DP class because of a central limit theorem we prove. More precisely, the privacy guarantees of \emph{any} hypothesis testing based definition of privacy (including original DP) converges to GDP in the limit under composition. The CLT also yields a computationally inexpensive tool for analyzing the exact composition of private algorithms. Taken together, this collection of attractive properties render $f$-DP a mathematically coherent, analytically tractable, and versatile framework for private data analysis. Finally, we demonstrate the use of the tools we develop by giving an improved privacy analysis of noisy stochastic gradient descent.) <|cite_end|> and Concentrated DP <|cite_start|> (Reference: Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds: "Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions.
We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy.") <|cite_end|> <|cite_start|> (Reference: Concentrated Differential Privacy: We introduce Concentrated Differential Privacy, a relaxation of Differential Privacy enjoying better accuracy than both pure differential privacy and its popular "(epsilon,delta)" relaxation without compromising on cumulative privacy loss over multiple computations.) <|cite_end|> target to provide tighter privacy bounds for the adaptive mechanisms. Those three notions generalized the $(\epsilon,\delta)$ measure of distance between distributions to other divergence measures. Bayesian DP <|cite_start|> (Reference: Bayesian Differential Privacy for Machine Learning: Traditional differential privacy is independent of the data distribution. However, this is not well-matched with the modern machine learning context, where models are trained on specific data. As a result, achieving meaningful privacy guarantees in ML often excessively reduces accuracy. We propose Bayesian differential privacy (BDP), which takes into account the data distribution to provide more practical privacy guarantees. We also derive a general privacy accounting method under BDP, building upon the well-known moments accountant. Our experiments demonstrate that in-distribution samples in classic machine learning datasets, such as MNIST and CIFAR-10, enjoy significantly stronger privacy guarantees than postulated by DP, while models maintain high classification accuracy.) <|cite_end|> tries to provide an ``affordable'' measure of privacy that requires less external noises than DP. With similar objectives, <|cite_start|> (Reference: Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation: The simplest and most widely applied method for guaranteeing differential privacy is to add instance-independent noise to a statistic of interest that is scaled to its global sensitivity. However, global sensitivity is a worst-case notion that is often too conservative for realized dataset instances. We provide methods for scaling noise in an instance-dependent way and demonstrate that they provide greater accuracy under average-case distributional assumptions. Specifically, we consider the basic problem of privately estimating the mean of a real distribution from i.i.d.~samples. The standard empirical mean estimator can have arbitrarily-high global sensitivity. We propose the trimmed mean estimator, which interpolates between the mean and the median, as a way of attaining much lower sensitivity on average while losing very little in terms of statistical accuracy. To privately estimate the trimmed mean, we revisit the smooth sensitivity framework of Nissim, Raskhodnikova, and Smith (STOC 2007), which provides a framework for using instance-dependent sensitivity. We propose three new additive noise distributions which provide concentrated differential privacy when scaled to smooth sensitivity. We provide theoretical and experimental evidence showing that our noise distributions compare favorably to others in the literature, in particular, when applied to the mean estimation problem.) <|cite_end|> adds noises according to the average sensitivity instead of the worst-case sensitivity required by DP. 
However, external noises are required in <|cite_start|> (Reference: Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation: The simplest and most widely applied method for guaranteeing differential privacy is to add instance-independent noise to a statistic of interest that is scaled to its global sensitivity. However, global sensitivity is a worst-case notion that is often too conservative for realized dataset instances. We provide methods for scaling noise in an instance-dependent way and demonstrate that they provide greater accuracy under average-case distributional assumptions. Specifically, we consider the basic problem of privately estimating the mean of a real distribution from i.i.d.~samples. The standard empirical mean estimator can have arbitrarily-high global sensitivity. We propose the trimmed mean estimator, which interpolates between the mean and the median, as a way of attaining much lower sensitivity on average while losing very little in terms of statistical accuracy. To privately estimate the trimmed mean, we revisit the smooth sensitivity framework of Nissim, Raskhodnikova, and Smith (STOC 2007), which provides a framework for using instance-dependent sensitivity. We propose three new additive noise distributions which provide concentrated differential privacy when scaled to smooth sensitivity. We provide theoretical and experimental evidence showing that our noise distributions compare favorably to others in the literature, in particular, when applied to the mean estimation problem.) <|cite_end|> and <|cite_start|> (Reference: Bayesian Differential Privacy for Machine Learning: Traditional differential privacy is independent of the data distribution. However, this is not well-matched with the modern machine learning context, where models are trained on specific data. As a result, achieving meaningful privacy guarantees in ML often excessively reduces accuracy. We propose Bayesian differential privacy (BDP), which takes into account the data distribution to provide more practical privacy guarantees. We also derive a general privacy accounting method under BDP, building upon the well-known moments accountant. Our experiments demonstrate that in-distribution samples in classic machine learning datasets, such as MNIST and CIFAR-10, enjoy significantly stronger privacy guarantees than postulated by DP, while models maintain high classification accuracy.) <|cite_end|>.
Quantized neural networks <|cite_start|> (Reference: Fast Neural Networks without Multipliers: Multilayer perceptrons (MLPs) with weight values restricted to powers of two or sums of powers of two are introduced. In a digital implementation, these neural networks do not need multipliers but only shift registers when computing in forward mode, thus saving chip area and computation time. A learning procedure, based on backpropagation, is presented for such neural networks. This learning procedure requires full real arithmetic and therefore must be performed offline. Some test cases are presented, concerning MLPs with hidden layers of different sizes, on pattern recognition problems. Such tests demonstrate the validity and the generalization capability of the method and give some insight into the behavior of the learning algorithm.) <|cite_end|> <|cite_start|> (Reference: Multilayer feedforward neural networks with single powers-of-two weights: A new algorithm for designing multilayer feedforward neural networks with single powers-of-two weights is presented. By applying this algorithm, the digital hardware implementation of such networks becomes easier as a result of the elimination of multipliers. This proposed algorithm consists of two stages. First, the network is trained by using the standard backpropagation algorithm. Weights are then quantized to single powers-of-two values, and weights and slopes of activation functions are adjusted adaptively to reduce the sum of squared output errors to a specified level. Simulation results indicate that the multilayer feedforward neural networks with single powers-of-two weights obtained using the proposed algorithm have generalization performance similar to that of the original networks with continuous weights. >) <|cite_end|> <|cite_start|> (Reference: Weight quantization in Boltzmann machines: ) <|cite_end|> <|cite_start|> (Reference: Weight discretization paradigm for optical neural networks: Neural networks are a primary candidate architecture for optical computing. One of the major problems in using neural networks for optical computers is that the information holders: the interconnection strengths (or weights) are normally real valued (continuous), whereas optics (light) is only capable of representing a few distinguishable intensity levels (discrete). In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels. The number of interconnections in a (fully connected) neural network grows quadratically with the number of neurons of the network. Optics can handle a large number of interconnections because of the fact that light beams do not interfere with each other. A vast amount of light beams can therefore be used per unit of area. However the number of different values one can represent in a light beam is very limited. A flexible, portable (machine independent) neural network software package which is capable of weight discretization, is presented. The development of the software and some experiments have been done on personal computers. The major part of the testing, which requires a lot of computation, has been done using a CRAY X-MP/24 super computer.) <|cite_end|> are initially designed to make hardware implementations of DNNs easier. 
In the recent decade, quantized neural networks becomes a research hotspot again owing to its growing applications on mobile devises <|cite_start|> (Reference: Binarized Neural Networks: We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time and when computing the parameters' gradient at train-time. We conduct two sets of experiments, each based on a different framework, namely Torch7 and Theano, where we train BNNs on MNIST, CIFAR-10 and SVHN, and achieve nearly state-of-the-art results. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which might lead to a great increase in power-efficiency. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available.) <|cite_end|> <|cite_start|> (Reference: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.) <|cite_end|> <|cite_start|> (Reference: A Survey on Methods and Theories of Quantized Neural Networks: Deep neural networks are the state-of-the-art methods for many real-world tasks, such as computer vision, natural language processing and speech recognition. For all its popularity, deep neural networks are also criticized for consuming a lot of memory and draining battery life of devices during training and inference. This makes it hard to deploy these models on mobile or embedded devices which have tight resource constraints. Quantization is recognized as one of the most effective approaches to satisfy the extreme memory requirements that deep neural network models demand. Instead of adopting 32-bit floating point format to represent weights, quantized representations store weights using more compact formats such as integers or even binary numbers. Despite a possible degradation in predictive performance, quantization provides a potential solution to greatly reduce the model size and the energy consumption. 
In this survey, we give a thorough review of different aspects of quantized neural networks. Current challenges and trends of quantized neural networks are also discussed.) <|cite_end|>. In quanitized neural networks, the weights <|cite_start|> (Reference: Fixed point optimization of deep convolutional neural networks for object recognition: Deep convolutional neural networks have shown promising results in image and speech recognition applications. The learning capability of the network improves with increasing depth and size of each layer. However this capability comes at the cost of increased computational complexity. Thus reduction in hardware complexity and faster classification are highly desired. This work proposes an optimization method for fixed point deep convolutional neural networks. The parameters of a pre-trained high precision network are first directly quantized using L2 error minimization. We quantize each layer one by one, while other layers keep computation with high precision, to know the layer-wise sensitivity on word-length reduction. Then the network is retrained with quantized weights. Two examples on object recognition, MNIST and CIFAR-10, are presented. Our results indicate that quantization induces sparsity in the network which reduces the effective number of network parameters and improves generalization. This work reduces the required memory storage by a factor of 1/10 and achieves better classification results than the high precision networks.) <|cite_end|> <|cite_start|> (Reference: Bitwise Neural Networks: Based on the assumption that there exists a neural network that efficiently represents a set of Boolean functions between all binary inputs and outputs, we propose a process for developing and deploying neural networks whose weight parameters, bias terms, input, and intermediate hidden layer output signals, are all binary-valued, and require only basic bit logic for the feedforward pass. The proposed Bitwise Neural Network (BNN) is especially suitable for resource-constrained environments, since it replaces either floating or fixed-point arithmetic with significantly more efficient bitwise operations. Hence, the BNN requires for less spatial complexity, less memory bandwidth, and less power consumption in hardware. In order to design such networks, we propose to add a few training schemes, such as weight compression and noisy backpropagation, which result in a bitwise network that performs almost as well as its corresponding real-valued network. We test the proposed network on the MNIST dataset, represented using binary features, and show that BNNs result in competitive performance while offering dramatic computational savings.) <|cite_end|> <|cite_start|> (Reference: Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights: This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. 
The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.) <|cite_end|> <|cite_start|> (Reference: Fixed Point Quantization of Deep Convolutional Networks: In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we propose a quantizer design for fixed point implementation of DCNs. We formulate and solve an optimization problem to identify optimal fixed point bit-width allocation across DCN layers. Our experiments show that in comparison to equal bit-width settings, the fixed point DCNs with optimized bit width allocation offer >20% reduction in the model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78% error-rate on CIFAR-10 benchmark.) <|cite_end|> <|cite_start|> (Reference: Towards Accurate Binary Convolutional Neural Network: We introduce a novel scheme to train binary convolutional neural networks (CNNs) -- CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. 
The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.) <|cite_end|>, activation functions <|cite_start|> (Reference: Improving the Speed of Neural Networks on CPUs: Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.) <|cite_end|> <|cite_start|> (Reference: BinaryConnect: Training Deep Neural Networks with binary weights during propagations: Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.) <|cite_end|> <|cite_start|> (Reference: XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks: We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. 
This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.) <|cite_end|> <|cite_start|> (Reference: WRPN: Wide Reduced-Precision Networks: For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that WRPN scheme is better than previously reported accuracies on ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.) <|cite_end|> and/or gradients <|cite_start|> (Reference: {1-Bit Stochastic Gradient Descent and Its Application to Data-Parallel Distributed Training of Speech DNNs: We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-parallel deterministically distributed SGD by combining this finding with AdaGrad, automatic minibatch-size selection, double buffering, and model parallelism. Unexpectedly, quantization benefits AdaGrad, giving a small accuracy gain. For a typical Switchboard DNN with 46M parameters, we reach computation speeds of 27k frames per second (kfps) when using 2880 samples per minibatch, and 51kfps with 16k, on a server with 8 K20X GPUs. This corresponds to speed-ups over a single GPU of 3.6 and 6.3, respectively. 7 training passes over 309h of data complete in under 7h. A 160M-parameter model training processes 3300h of data in under 16h on 20 dual-GPU servers—a 10 times speed-up—albeit at a small accuracy loss.) 
<|cite_end|> <|cite_start|> (Reference: Scalable Methods for 8-bit Training of Neural Networks: Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range Batch-Normalization (BN) which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset.) <|cite_end|> <|cite_start|> (Reference: QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding: Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to excellent scalability properties of this algorithm, and to its efficiency in the context of training deep neural networks. A fundamental barrier for parallelizing large-scale SGD is the fact that the cost of communicating the gradient updates between nodes can be very large. Consequently, lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always provably converge, and it is not clear whether they are optimal. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes which allow the compression of gradient updates at each node, while guaranteeing convergence under standard assumptions. QSGD allows the user to trade off compression and convergence time: it can communicate a sublinear number of bits per iteration in the model dimension, and can achieve asymptotically optimal communication cost. We complement our theoretical results with empirical data, showing that QSGD can significantly reduce communication cost, while being competitive with standard uncompressed techniques on a variety of real tasks. In particular, experiments show that gradient quantization applied to training of deep neural networks for image classification and automated speech recognition can lead to significant reductions in communication cost, and end-to-end training time. For instance, on 16 GPUs, we are able to train a ResNet-152 network on ImageNet 1.8x faster to full accuracy. Of note, we show that there exist generic parameter settings under which all known network architectures preserve or slightly improve their full accuracy when using quantization.) 
<|cite_end|> <|cite_start|> (Reference: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning: Edge machine learning involves the deployment of learning algorithms at the wireless network edge so as to leverage massive mobile data for enabling intelligent applications. The mainstream edge learning approach, federated learning, has been developed based on distributed gradient descent. Based on the approach, stochastic gradients are computed at edge devices and then transmitted to an edge server for updating a global AI model. Since each stochastic gradient is typically high-dimensional (with millions to billions of coefficients), communication overhead becomes a bottleneck for edge learning. To address this issue, we propose in this work a novel framework of hierarchical stochastic gradient quantization and study its effect on the learning performance. First, the framework features a practical hierarchical architecture for decomposing the stochastic gradient into its norm and normalized block gradients, and efficiently quantizes them using a uniform quantizer and a low-dimensional codebook on a Grassmann manifold, respectively. Subsequently, the quantized normalized block gradients are scaled and cascaded to yield the quantized normalized stochastic gradient using a so-called hinge vector designed under the criterion of minimum distortion. The hinge vector is also efficiently compressed using another low-dimensional Grassmannian quantizer. The other feature of the framework is a bit-allocation scheme for reducing the quantization error. The scheme determines the resolutions of the low-dimensional quantizers in the proposed framework. The framework is proved to guarantee model convergency by analyzing the convergence rate as a function of the quantization bits. Furthermore, by simulation, our design is shown to substantially reduce the communication overhead compared with the state-of-the-art signSGD scheme, while both achieve similar learning accuracies.) <|cite_end|> are quantized. When the gradients are quantized, both the training and inference of DNN are accelerated <|cite_start|> (Reference: Scalable Methods for 8-bit Training of Neural Networks: Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range Batch-Normalization (BN) which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. 
To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset.) <|cite_end|> <|cite_start|> (Reference: Towards Unified INT8 Training for Convolutional Neural Network: Recently low-bit (e.g., 8-bit) network quantization has been extensively studied to accelerate the inference. Besides inference, low-bit training with quantized gradients can further bring more considerable acceleration, since the backward process is often computation-intensive. Unfortunately, the inappropriate quantization of backward propagation usually makes the training unstable and even crash. There lacks a successful unified low-bit training framework that can support diverse networks on various tasks. In this paper, we give an attempt to build a unified 8-bit (INT8) training framework for common convolutional neural networks from the aspects of both accuracy and speed. First, we empirically find the four distinctive characteristics of gradients, which provide us insightful clues for gradient quantization. Then, we theoretically give an in-depth analysis of the convergence bound and derive two principles for stable INT8 training. Finally, we propose two universal techniques, including Direction Sensitive Gradient Clipping that reduces the direction deviation of gradients and Deviation Counteractive Learning Rate Scaling that avoids illegal gradient update along the wrong direction. The experiments show that our unified solution promises accurate and efficient INT8 training for a variety of networks and tasks, including MobileNetV2, InceptionV3 and object detection that prior studies have never succeeded. Moreover, it enjoys a strong flexibility to run on off-the-shelf hardware, and reduces the training time by 22% on Pascal GPU without too much optimization effort. We believe that this pioneering study will help lead the community towards a fully unified INT8 training for convolutional neural networks.) <|cite_end|>. Gradient quantization can also save the communication cost when the DNNs are trained on distributed systems <|cite_start|> (Reference: A Survey on Methods and Theories of Quantized Neural Networks: Deep neural networks are the state-of-the-art methods for many real-world tasks, such as computer vision, natural language processing and speech recognition. For all its popularity, deep neural networks are also criticized for consuming a lot of memory and draining battery life of devices during training and inference. This makes it hard to deploy these models on mobile or embedded devices which have tight resource constraints. Quantization is recognized as one of the most effective approaches to satisfy the extreme memory requirements that deep neural network models demand. Instead of adopting 32-bit floating point format to represent weights, quantized representations store weights using more compact formats such as integers or even binary numbers. Despite a possible degradation in predictive performance, quantization provides a potential solution to greatly reduce the model size and the energy consumption. In this survey, we give a thorough review of different aspects of quantized neural networks. Current challenges and trends of quantized neural networks are also discussed.) <|cite_end|>.
The smoothed analysis <|cite_start|> (Reference: Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time: We introduce the smoothed analysis of algorithms, which is a hybrid of the worst-case and average-case analysis of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has polynomial smoothed complexity.) <|cite_end|> is a widely-accepted analysis tool in mathematical programming, machine learning <|cite_start|> (Reference: Worst-case and smoothed analysis of k-means clustering with Bregman divergences: The k-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice. Most of the theoretical work is restricted to the case that squared Euclidean distances are used as similarity measure. In many applications, however, data is to be clustered with respect to other measures like, e.g., relative entropy, which is commonly used to cluster web pages. In this paper, we analyze the running-time of the k-means method for Bregman divergences, a very general class of similarity measures including squared Euclidean distances and relative entropy. We show that the exponential lower bound known for the Euclidean case carries over to almost every Bregman divergence. To narrow the gap between theory and practice, we also study k-means in the semi-random input model of smoothed analysis. For the case that n data points in ? d are perturbed by noise with standard deviation ?, we show that for almost arbitrary Bregman divergences the expected running-time is bounded by ${\rm poly}(n^{\sqrt k}, 1/\sigma)$ and k kd ·poly(n, 1/?).) <|cite_end|>, computational social choice <|cite_start|> (Reference: Proceedings of NeurIPS 2019 Workshop on Machine Learning for the Developing World: Challenges and Risks of ML4D: This is the proceedings of the 3rd ML4D workshop which was help in Vancouver, Canada on December 13, 2019 as part of the Neural Information Processing Systems conference.) <|cite_end|> <|cite_start|> (Reference: Towards reality: Smoothed analysis in computational social choice: Hemaspaandra [22] celebrated the quite close relationship between computational social choice and computational complexity as a two-way street from which both areas benefited in the past, and expressed his hope that the areas become best friends forever. Later on, Rothe [38] celebrated the prominent Borda voting rule and surveyed recent advances on the complexity of problems related to the three most fundamental models of tampering with elections— namely, via manipulation, control, and bribery—and even related to using Borda beyond voting: in fair division and coalition formation in hedonic games. But now the party is over: no more celebration! Instead, we present a common criticism regarding computational social choice persistently making use of worst-case complexity. To overcome this shortcoming, we propose our blue sky idea of applying to problems from computational social choice the method of smoothed analysis due to Spielman and Teng [43, 44] and also used by Bläser and Manthey [7], as some sort of a middle ground between the worst-case and the average-case analysis of algorithms.) 
<|cite_end|> <|cite_start|> (Reference: The Smoothed Likelihood of Doctrinal Paradoxes: When aggregating logically interconnected judgments from $n$ agents, the result might be inconsistent with the logical connection. This inconsistency is known as the doctrinal paradox, which plays a central role in the field of judgment aggregation. Despite a large body of literature on the worst-case analysis of the doctrinal paradox, little is known about its likelihood under natural statistical models, except for a few i.i.d. distributions [List, 2005]. In this paper, we characterize the likelihood of the doctrinal paradox under a much more general and realistic model called the smoothed social choice framework [Xia, 2020b], where agents' ground truth judgments are arbitrarily correlated while the noises are independent. Our main theorem states that under mild conditions, the smoothed likelihood of the doctrinal paradox is either $0$, $\exp(-\Theta(n))$, $\Theta(n^{-1/2})$ or $\Theta(1)$. This not only answers open questions by List [2005] for i.i.d. distributions but also draws clear lines between situations with frequent and with vanishing paradoxes.) <|cite_end|>, and other topics <|cite_start|> (Reference: Smoothed Analysis of Belief Propagation for Minimum-Cost Flow and Matching: Belief propagation (BP) is a message-passing heuristic for statistical inference in graphical models such as Bayesian networks and Markov random fields. BP is used to compute marginal distributions or maximum likelihood assignments and has applications in many areas, including machine learning, image processing, and computer vision. However, the theoretical understanding of the performance of BP is unsatisfactory. Recently, BP has been applied to combinatorial optimization problems. It has been proved that BP can be used to compute maximum-weight matchings and minimum-cost flows for instances with a unique optimum. The number of iterations needed for this is pseudo-polynomial and hence BP is not efficient in general. We study belief propagation in the framework of smoothed analysis and prove that with high probability the number of iterations needed to compute maximum-weight matchings and minimum-cost flows is bounded by a polynomial if the weights/costs of the edges are randomly perturbed. To prove our upper bounds, we use an isolation lemma by Beier and V\"{o}cking (SIAM J. Comput. 2006) for matching and generalize an isolation lemma for min-cost flow by Gamarnik, Shah, and Wei (Operations Research, 2012). We also prove almost matching lower tail bounds for the number of iterations that BP needs to converge.) <|cite_end|> <|cite_start|> (Reference: Smoothed Analysis of Tensor Decompositions: Low rank tensor decompositions are a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. However, tensors pose significant algorithmic challenges and tensors analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error -- a crucial property for applications in learning since we are only allowed a polynomial number of samples. 
While algorithms are known for exact tensor decomposition in some overcomplete settings, our main contribution is in analyzing their stability in the framework of smoothed analysis. Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e. the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods to learning problems in the smoothed setting. In particular, we use it to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions. The assumption here is that the model is not adversarially chosen, formalized by a perturbation of model parameters. We believe this an appealing way to analyze realistic instances of learning problems, since this framework allows us to overcome many of the usual limitations of using tensor methods.) <|cite_end|> <|cite_start|> (Reference: Smoothed Analysis of the Perceptron Algorithm for Linear Programming: The smoothed complexity [1] of an algorithm is the expected running time of the algorithm on an arbitrary instance under a random perturbation. It was shown recently that the simplex algorithm has polynomial smoothed complexity. We show that a simple greedy algorithm for linear programming, the perceptron algorithm, also has polynomial smoothed complexity, in a high probability sense; that is, the running time is polynomial with high probability over the random perturbation.) <|cite_end|>. Lastly, we note that smoothed DP is very different from the smooth sensitivity framework <|cite_start|> (Reference: Smooth sensitivity and sampling in private data analysis: We introduce a new, generic framework for private data analysis. The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains. Our framework allows one to release functions f of the data with instance-specific additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also by the database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smooth sensitivity of f on the database x — a measure of variability of f in the neighborhood of the instance x. The new framework greatly expands the applicability of output perturbation, a technique for protecting individuals’ privacy by adding a small amount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-specific noise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely, to apply the framework one must compute or approximate the smooth sensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on many databases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known or when f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians.) <|cite_end|>. 
The latter is an algorithmic tool that smooths the changes of the noise-level across neighboring datasets (and achieve the standard DP), while we use smoothing as a theoretical tool to analyze the intrinsic privacy properties of non-randomized algorithms in practice. <|paper_end|> | [
"<|reference_start|> Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online. <|reference_end|>",
"<|reference_start|> BinaryConnect: Training Deep Neural Networks with binary weights during propagations: Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN. <|reference_end|>",
"<|reference_start|> Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time: We introduce the smoothed analysis of algorithms, which is a hybrid of the worst-case and average-case analysis of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has polynomial smoothed complexity. <|reference_end|>",
"<|reference_start|> Proceedings of NeurIPS 2019 Workshop on Machine Learning for the Developing World: Challenges and Risks of ML4D: This is the proceedings of the 3rd ML4D workshop which was help in Vancouver, Canada on December 13, 2019 as part of the Neural Information Processing Systems conference. <|reference_end|>"
] | [
21,
29,
39,
41
] | {"<|cite_1|>": "ss-767290", "<|cite_2|>": "arxiv-158505", "<|cite_5|>": "arxiv-206558", "<|cite_6|>": "ss-1570749", "<|cite_7|>": "ss-773593", "<|multi_cite_8_1|>": "arxiv-160324", "<|multi_cite_8_2|>": "arxiv-106381", "<|cite_9|>": "ss-942779", "<|cite_10|>": "ss-781079", "<|cite_11|>": "arxiv-203022", "<|multi_cite_12_1|>": "arxiv-97459", "<|multi_cite_12_2|>": "arxiv-93493", "<|cite_13|>": "arxiv-189159", "<|cite_28|>": "arxiv-208448", "<|cite_14|>": "arxiv-208448", "<|cite_15|>": "arxiv-189159", "<|multi_cite_16_1|>": "ss-1777911", "<|multi_cite_16_2|>": "ss-942725", "<|multi_cite_16_3|>": "ss-942724", "<|multi_cite_16_4|>": "ss-2542719", "<|multi_cite_17_1|>": "arxiv-91785", "<|multi_cite_17_2|>": "arxiv-106381", "<|multi_cite_17_3|>": "arxiv-169203", "<|multi_cite_18_1|>": "ss-1454325", "<|multi_cite_18_2|>": "arxiv-90833", "<|multi_cite_18_3|>": "arxiv-116271", "<|multi_cite_18_4|>": "arxiv-87623", "<|multi_cite_18_5|>": "arxiv-141763", "<|multi_cite_19_1|>": "ss-1713975", "<|multi_cite_19_2|>": "arxiv-86395", "<|multi_cite_19_3|>": "arxiv-94105", "<|multi_cite_19_4|>": "arxiv-133598", "<|multi_cite_20_1|>": "ss-708784", "<|multi_cite_20_2|>": "arxiv-160324", "<|multi_cite_20_3|>": "arxiv-107402", "<|multi_cite_20_4|>": "arxiv-227878", "<|multi_cite_21_1|>": "arxiv-160324", "<|multi_cite_21_2|>": "arxiv-241271", "<|cite_22|>": "arxiv-169203", "<|cite_23|>": "arxiv-670299", "<|multi_cite_24_2|>": "ss-910769", "<|multi_cite_25_1|>": "ss-1194853", "<|multi_cite_25_2|>": "ss-1238705", "<|multi_cite_25_4|>": "ss-910770", "<|multi_cite_26_1|>": "arxiv-38182", "<|multi_cite_26_2|>": "arxiv-52747", "<|multi_cite_26_3|>": "ss-2159577", "<|cite_27|>": "ss-1265502"} |
2103.13678 | <|paper_start|> Title: Pruning-then-Expanding Model for Domain Adaptation of Neural Machine Translation
Abstract: Pruning-then-Expanding Model for Domain Adaptation of Neural Machine Translation: Domain Adaptation is widely used in practical applications of neural machine translation, which aims to achieve good performance on both the general-domain and in-domain. However, the existing methods for domain adaptation usually suffer from catastrophic forgetting, domain divergence, and model explosion. To address these three problems, we propose a method of "divide and conquer" which is based on the importance of neurons or parameters in the translation model. In our method, we first prune the model and only keep the important neurons or parameters, making them responsible for both general-domain and in-domain translation. Then we further train the pruned model supervised by the original unpruned model with the knowledge distillation method. Last we expand the model to the original size and fine-tune the added parameters for the in-domain translation. We conduct experiments on different languages and domains and the results show that our method can achieve significant improvements compared with several strong baselines.
Introduction
Neural machine translation (NMT) models <|cite_start|> (Reference: Recurrent continuous translation models: We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.) <|cite_end|> <|cite_start|> (Reference: Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation: In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.) <|cite_end|> <|cite_start|> (Reference: Sequence to Sequence Learning with Neural Networks: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.) <|cite_end|> <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|> <|cite_start|> (Reference: Convolutional Sequence to Sequence Learning: The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.) <|cite_end|> <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> are data-driven and hence require large-scale training data to achieve good performance <|cite_start|> (Reference: Bridging the Gap between Training and Inference for Neural Machine Translation: Neural Machine Translation (NMT) generates target words sequentially in the way of predicting the next word conditioned on the context words. At training time, it predicts with the ground truth words as context while at inference it has to generate the entire sequence from scratch. This discrepancy of the fed context leads to error accumulation among the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground truth sequence which leads to overcorrection over different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground truth sequence but also from the predicted sequence by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experiment results on Chinese->English and WMT'14 English->German translation tasks demonstrate that our approach can achieve significant improvements on multiple datasets.) <|cite_end|>. In practical applications, NMT models usually need to produce translation for some specific domains with only a small quantity of in-domain data available, so domain adaptation is applied to address the problem. A typical domain adaptation scenario as discussed in <|cite_start|> (Reference: Fast Domain Adaptation for Neural Machine Translation: Neural Machine Translation (NMT) is a new approach for automatic translation of text from one human language into another. The basic concept in NMT is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is gaining popularity in the research community because it outperformed traditional SMT approaches in several translation tasks at WMT and other evaluation tasks/benchmarks at least for some language pairs. However, many of the enhancements in SMT over the years have not been incorporated into the NMT framework. In this paper, we focus on one such enhancement namely domain adaptation. We propose an approach for adapting a NMT system to a new domain. The main idea behind domain adaptation is that the availability of large out-of-domain training data and a small in-domain training data. We report significant gains with our proposed method in both automatic metrics and a human subjective evaluation metric on two language pairs. With our adaptation method, we show large improvement on the new domain while the performance of our general domain only degrades slightly. In addition, our approach is fast enough to adapt an already trained system to a new domain within few hours without the need to retrain the NMT model on the combined data which usually takes several days/weeks depending on the volume of the data.) <|cite_end|> is that an NMT model have been trained with large-scale general-domain data and then is adapted to specific domains, hoping the model can fit in-domain data well meanwhile the performance will not degrade too much on the general domain.
Towards this end, many researchers have made various attempts. The fine-tuning method <|cite_start|> (Reference: Stanford neural machine translation systems for spoken language
domains: Neural Machine Translation (NMT), though recently developed, has shown promising results for various language pairs. Despite that, NMT has only been applied to mostly formal texts such as those in the WMT shared tasks. This work further explores the effectiveness of NMT in spoken language domains by participating in the MT track of the IWSLT 2015. We consider two scenarios: (a) how to adapt existing NMT systems to a new domain and (b) the generalization of NMT to low-resource language pairs. Our results demonstrate that using an existing NMT framework1, we can achieve competitive results in the aforementioned scenarios when translating from English to German and Vietnamese. Notably, we have advanced state-of-the-art results in the IWSLT EnglishGerman MT track by up to 5.2 BLEU points.) <|cite_end|> performs in-domain training based on the general-domain model by first training the model on general-domain data and then continuing to train on in-domain data. Despite its convenience for use and high-quality for in-domain translation, this method suffers from catastrophic forgetting which leads to poor performance in the previous domains.
Regularization-based methods <|cite_start|> (Reference: Fine-tuning for neural machine translation with limited degradation across in-and out-of-domain data: Neural machine translation is a recently proposed approach which has shown competitive results to traditional MT approaches. Similar to other neural network based methods, NMT also suffers from low performance for the domains with less available training data. Domain adaptation deals with improving performance of a model trained on large general domain data over test instances from a new domain. Fine-tuning is a fast and simple domain adaptation method which has demonstrated substantial improvements for various neural network based tasks including NMT. However, it suffers from drastic performance degradation on the general or source domain test sentences, which is undesirable in real-time applications. To address this problem of drastic degradation, in this paper, we propose two simple modifications to the fine-tuning approach, namely multi-objective learning and multi-output learning which are based on the “Knowledge distillation” framework. Experiments on English-German translations demonstrate that our approaches achieve results comparable to simple fine-tuning on the target domain task with comparatively little loss on the general domain task.) <|cite_end|> <|cite_start|> (Reference: Overcoming Catastrophic Forgetting During Domain Adaptation of Neural
Machine Translation: Continued training is an effective method for domain adaptation in neural machine translation. However, in-domain gains from adaptation come at the expense of general-domain performance. In this work, we interpret the drop in general-domain performance as catastrophic forgetting of general-domain knowledge. To mitigate it, we adapt Elastic Weight Consolidation (EWC)—a machine learning method for learning a new task without forgetting previous tasks. Our method retains the majority of general-domain performance lost in continued training without degrading in-domain performance, outperforming the previous state-of-the-art. We also explore the full range of general-domain performance available when some in-domain degradation is acceptable.) <|cite_end|> <|cite_start|> (Reference: Regularization techniques for fine-tuning in neural machine translation: We investigate techniques for supervised domain adaptation for neural machine translation where an existing model trained on a large out-of-domain dataset is adapted to a small in-domain dataset. In this scenario, overfitting is a major challenge. We investigate a number of techniques to reduce overfitting and improve transfer learning, including regularization techniques such as dropout and L2-regularization towards an out-of-domain prior. In addition, we introduce tuneout, a novel regularization technique inspired by dropout. We apply these techniques, alone and in combination, to neural machine translation, obtaining improvements on IWSLT datasets for English->German and English->Russian. We also investigate the amounts of in-domain training data needed for domain adaptation in NMT, and find a logarithmic relationship between the amount of training data and gain in BLEU score.) <|cite_end|> <|cite_start|> (Reference: Regularized Training Objective for Continued Training for Domain Adaptation
in Neural Machine Translation: Supervised domain adaptation—where a large generic corpus and a smaller in-domain corpus are both available for training—is a challenge for neural machine translation (NMT). Standard practice is to train a generic model and use it to initialize a second model, then continue training the second model on in-domain data to produce an in-domain model. We add an auxiliary term to the training objective during continued training that minimizes the cross entropy between the in-domain model’s output word distribution and that of the out-of-domain model to prevent the model’s output from differing too much from the original out-of-domain model. We perform experiments on EMEA (descriptions of medicines) and TED (rehearsed presentations), initialized from a general domain (WMT) model. Our method shows improvements over standard continued training by up to 1.5 BLEU.) <|cite_end|> instead introduce an additional loss to the original objective so that the translation model can trade off between general-domain and in-domain. This kind of methods usually has all the parameters shared by general-domain and in-domain, with the assumption that the optimal parameter spaces for all the domains will overlap with each other, and retaining these overlapped parameters can balance over all the domains. This assumption is feasible when the domains are similar, but when the divergence of the domains is large, it is not reasonable anymore. In contrast, the methods with domain-specific networks <|cite_start|> (Reference: Fine-tuning for neural machine translation with limited degradation across in-and out-of-domain data: Neural machine translation is a recently proposed approach which has shown competitive results to traditional MT approaches. Similar to other neural network based methods, NMT also suffers from low performance for the domains with less available training data. Domain adaptation deals with improving performance of a model trained on large general domain data over test instances from a new domain. Fine-tuning is a fast and simple domain adaptation method which has demonstrated substantial improvements for various neural network based tasks including NMT. However, it suffers from drastic performance degradation on the general or source domain test sentences, which is undesirable in real-time applications. To address this problem of drastic degradation, in this paper, we propose two simple modifications to the fine-tuning approach, namely multi-objective learning and multi-output learning which are based on the “Knowledge distillation” framework. Experiments on English-German translations demonstrate that our approaches achieve results comparable to simple fine-tuning on the target domain task with comparatively little loss on the general domain task.) <|cite_end|> <|cite_start|> (Reference: Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks: The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model. Previous work shows that the standard neural machine translation (NMT) model, trained on mixed-domain data, generally captures the general knowledge, but misses the domain-specific knowledge. 
In response to this problem, we augment NMT model with additional domain transformation networks to transform the general representations to domain-specific representations, which are subsequently fed to the NMT decoder. To guarantee the knowledge transformation, we also propose two complementary supervision signals by leveraging the power of knowledge distillation and adversarial learning. Experimental results on several language pairs, covering both balanced and unbalanced multi-domain translation, demonstrate the effectiveness and universality of the proposed approach. Encouragingly, the proposed unified model achieves comparable results with the fine-tuning approach that requires multiple models to preserve the particular knowledge. Further analyses reveal that the domain transformation networks successfully capture the domain-specific knowledge as expected.) <|cite_end|> <|cite_start|> (Reference: Simple, Scalable Adaptation for Neural Machine Translation: Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. We propose a simple yet efficient approach for adaptation in NMT. Our proposed approach consists of injecting tiny task specific adapter layers into a pre-trained model. These lightweight adapters, with just a small fraction of the original model size, adapt the model to multiple individual tasks simultaneously. We evaluate our approach on two tasks: (i) Domain Adaptation and (ii) Massively Multilingual NMT. Experiments on domain adaptation demonstrate that our proposed approach is on par with full fine-tuning on various domains, dataset sizes and model capacities. On a massively multilingual dataset of 103 languages, our adaptation approach bridges the gap between individual bilingual models and one massively multilingual model for most language pairs, paving the way towards universal machine translation.) <|cite_end|> <|cite_start|> (Reference: Improving Domain Adaptation Translation with Domain Invariant and Specific Information: In domain adaptation for neural machine translation, translation performance can benefit from separating features into domain-specific features and common features. In this paper, we propose a method to explicitly model the two kinds of information in the encoder-decoder framework so as to exploit out-of-domain data in in-domain training. In our method, we maintain a private encoder and a private decoder for each domain which are used to model domain-specific information. In the meantime, we introduce a common encoder and a common decoder shared by all the domains which can only have domain-independent information flow through. Besides, we add a discriminator to the shared encoder and employ adversarial training for the whole model to reinforce the performance of information separation and machine translation simultaneously. Experiment results show that our method can outperform competitive baselines greatly on multiple data sets.) <|cite_end|> can be often (but not always) immune to domain divergence as it can capture domain-specific features. But unfortunately, as the number of domains increases, the parameters of this kind of methods will surge. Besides, the structure of these networks needs to be carefully designed and tuned, which prevents them from being used in many cases.
Given the above, we propose a method of domain adaptation that can not only deal with large domain divergence during domain transferring but also keep a stable model size even with multiple domains. Inspired by the analysis work on NMT <|cite_start|> (Reference: Identifying and Controlling Important Neurons in Neural Machine Translation: Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.) <|cite_end|> <|cite_start|> (Reference: Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned: Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads in the encoder to the overall performance of the model and analyze the roles played by them. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.) <|cite_end|> <|cite_start|> (Reference: Investigating Catastrophic Forgetting During Continual Training for Neural Machine Translation: Neural machine translation (NMT) models usually suffer from catastrophic forgetting during continual training where the models tend to gradually forget previously learned knowledge and swing to fit the newly added data which may have a different distribution, e.g. a different domain. Although many methods have been proposed to solve this problem, we cannot get to know what causes this phenomenon yet. Under the background of domain adaptation, we investigate the cause of catastrophic forgetting from the perspectives of modules and parameters (neurons). The investigation on the modules of the NMT model shows that some modules have tight relation with the general-domain knowledge while some other modules are more essential in the domain adaptation. And the investigation on the parameters shows that some parameters are important for both the general-domain and in-domain translation and the great change of them during continual training brings about the performance decline in general-domain. We conduct experiments across different language pairs and domains to ensure the validity and reliability of our findings.) <|cite_end|>, we find that only some important parameters in a well-trained NMT model play an important role when generating the translation and unimportant parameters can be erased without affecting the translation quality too much.
According to these findings, we can preserve important parameters for general-domain translation, while tuning unimportant parameters for in-domain translation.
To achieve this, we first train a model on the general domain and then shrink the model with neuron pruning or weight pruning methods, only retaining the important neurons/parameters. To ensure the model can still perform well on general-domain data,
we adjust the model on in-domain data with knowledge distillation where the original whole model is used as the teacher and the pruned model as the student. Finally, we expand the model to the original size and fine-tune the added parameters on the in-domain data.
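To make the three-stage procedure concrete, the following is a minimal PyTorch-style sketch, not the exact implementation, of how magnitude-based weight pruning and masked gradient updates could realize the prune, distill, and expand phases described above. The model interface model(src, tgt_in), the batch fields, the keep ratio, and the loss weight alpha are illustrative assumptions only.
\begin{verbatim}
# Sketch of the prune -> distill -> expand procedure (assumptions noted above).
import torch
import torch.nn.functional as F

def build_masks(model, keep_ratio=0.7):
    """Mark the largest-magnitude weights of each matrix as 'important'."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:                       # skip biases and layer-norm parameters
            continue
        k = max(1, int(keep_ratio * p.numel()))
        thresh = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.detach().abs() >= thresh).float()
    return masks

def prune(model, masks):
    """Zero out the unimportant weights, retaining only the important ones."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

def distill_step(student, teacher, batch, masks, optimizer, alpha=0.5):
    """In-domain step for the pruned student, with the unpruned general-domain
    model as teacher; pruned-out positions receive no gradient in this phase."""
    optimizer.zero_grad()
    s_logits = student(batch["src"], batch["tgt_in"])       # (B, K, |V|), assumed interface
    with torch.no_grad():
        t_logits = teacher(batch["src"], batch["tgt_in"])
    ce = F.cross_entropy(s_logits.transpose(1, 2), batch["tgt_out"])
    t_probs = F.softmax(t_logits, dim=-1)
    kd = -(t_probs * F.log_softmax(s_logits, dim=-1)).sum(dim=-1).mean()
    loss = (1 - alpha) * ce + alpha * kd
    loss.backward()
    for name, p in student.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])          # only the retained weights are updated
    optimizer.step()
    return loss.item()

def expand_step(model, batch, masks, optimizer):
    """Expansion phase: only the previously pruned positions are fine-tuned on
    in-domain data, while the retained general-domain weights stay fixed."""
    optimizer.zero_grad()
    logits = model(batch["src"], batch["tgt_in"])
    loss = F.cross_entropy(logits.transpose(1, 2), batch["tgt_out"])
    loss.backward()
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(1.0 - masks[name])
    optimizer.step()
    return loss.item()

# Note: with momentum-based optimizers the masked weights can still drift slightly;
# plain SGD (or re-applying prune() after each step) keeps them exactly fixed.
\end{verbatim}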
Experimental results on different languages and domains show that our method can avoid catastrophic forgetting on general-domain data and achieve significant improvements over strong baselines on multiple in-domain data sets.
Our contributions can be summarized as follows:
\begin{itemize}
\item We prove that the parameters that are unimportant for general-domain data can be utilized to improve in-domain translation quality.
\item Our model maintains superior performance over the baselines even when continually transferring to multiple domains.
\item Our model fits the continual learning scenario, common in practice, where the data for the previous domains is no longer available.
\end{itemize}
Related Work
\subsection{The Transformer}
In our work, we apply our method in the framework of \textsc{Transformer} <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> which will be briefly introduced here.
However, we note that our method can also be combined with other NMT architectures.
We denote the input sequence of symbols as $\mathbf{x}=(x_1,\ldots,x_J)$, the ground-truth sequence as $\mathbf{y}^{*}=(y_1^{*},\ldots,y_{K^{*}}^{*})$ and the translation as $\mathbf{y}=(y_1,\ldots,y_K)$.
\noindent \textbf{The Encoder \& Decoder} The encoder is composed of $\mathnormal{N}$ identical layers. Each layer has two sublayers. The first is a multi-head self-attention sublayer and the second is a fully connected feed-forward network. Both of the sublayers are followed by a residual connection operation and a layer normalization operation. The input sequence $\mathbf{x}$ will be first converted to a sequence of vectors $\mathbf{E}_x=[E_x[x_1];\ldots;E_x[x_J]]$ where $E_x[x_j]$ is the sum of word embedding and position embedding of the source word $x_j$.
Then, this sequence of vectors will be fed into the encoder, and the output of the $\mathnormal{N}$-th layer will be taken as the source hidden states, which we denote as $\mathbf{H}$.
The decoder is also composed of $\mathnormal{N}$ identical layers. In addition to the same two kinds of sublayers as in each encoder layer, a cross-attention sublayer is inserted between them, which performs multi-head attention over the output of the encoder. The final output of the $\mathnormal{N}$-th layer gives the target hidden states $\mathbf{S}=[\mathbf{s}_1;\ldots;\mathbf{s}_{K^{*}}]$, where $\mathbf{s}_k$ is the hidden state of $y_k$.
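For concreteness, the following minimal PyTorch sketch shows one way the source hidden states $\mathbf{H}$ and target hidden states $\mathbf{S}$ described above could be obtained; the learned position embeddings, the layer and width hyper-parameters, and all tensor names are illustrative assumptions rather than the configuration used in this work.
\begin{verbatim}
# Minimal sketch (assumed hyper-parameters) of computing the source hidden
# states H and the target hidden states S with a standard Transformer.
import torch
import torch.nn as nn

J, K, d_model, vocab = 9, 7, 512, 32000
word_emb = nn.Embedding(vocab, d_model)   # shared word embeddings
pos_emb = nn.Embedding(1024, d_model)     # learned position embeddings (an assumption)

x = torch.randint(0, vocab, (1, J))       # source token ids, batch size 1
y = torch.randint(0, vocab, (1, K))       # target token ids (shifted right in practice)

E_x = word_emb(x) + pos_emb(torch.arange(J))   # E_x[x_j] = word emb. + position emb.
E_y = word_emb(y) + pos_emb(torch.arange(K))

model = nn.Transformer(d_model=d_model, nhead=8, num_encoder_layers=6,
                       num_decoder_layers=6, batch_first=True)
H = model.encoder(E_x)                                   # source hidden states (1, J, d_model)
causal_mask = model.generate_square_subsequent_mask(K)   # block attention to future tokens
S = model.decoder(E_y, H, tgt_mask=causal_mask)          # target hidden states (1, K, d_model)
\end{verbatim}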
\noindent \textbf{The Objective}
We can obtain the predicted probability of the $k$-th target word over the target vocabulary by applying a linear transformation followed by a softmax operation to the target hidden states:
\begin{equation}
p(y_k | \mathbf{y}_{<k}, \mathbf{x}) \propto \exp({\mathbf W}_o {\mathbf s}_k + \mathbf{b}_o),
\end{equation}
where ${\mathbf W}_o \in \mathbb{R}^{d_{model}\times|\mathrm{V}_t|}$ is the output projection matrix and $|\mathrm{V}_t|$ is the size of the target vocabulary.
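As a small numerical illustration of the projection and softmax above, the computation could look as follows; the sizes and tensor names are assumptions made only for this example.
\begin{verbatim}
# Illustration of the output projection and softmax (assumed sizes).
import torch
import torch.nn.functional as F

d_model, vocab_size = 512, 32000
W_o = torch.randn(d_model, vocab_size)    # W_o in R^{d_model x |V_t|}
b_o = torch.zeros(vocab_size)

s_k = torch.randn(d_model)                # target hidden state of the k-th position
logits = s_k @ W_o + b_o                  # linear transformation to vocabulary size
p_k = F.softmax(logits, dim=-1)           # p(y_k | y_<k, x); sums to 1 over the vocabulary
\end{verbatim}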
The model is optimized by minimizing a cross-entropy loss of the ground-truth sequence with teacher forcing training:
\begin{equation}\label{eq::loss}
\mathcal{L}(\theta) = -\frac{1}{K} \sum_{k=1}^{K} \log p(y_k^{*} | \mathbf{y}_{<k}, \mathbf{x}; \theta),
\end{equation}
where $K$ is the length of the target sentence and $\theta$ denotes the model parameters.
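A minimal sketch of this teacher-forced cross-entropy loss is given below; the logits are random placeholders standing in for the decoder outputs, and the sequence length and vocabulary size are assumptions.
\begin{verbatim}
# Teacher forcing: the decoder consumes the ground-truth prefix and is trained
# to predict the next ground-truth token at every position (assumed sizes).
import torch
import torch.nn.functional as F

K, vocab_size = 7, 32000
logits = torch.randn(K, vocab_size)              # placeholder decoder outputs, one row per position
y_star = torch.randint(0, vocab_size, (K,))      # ground-truth target token indices

# Mean negative log-likelihood of the ground-truth tokens, i.e. the loss above.
loss = F.cross_entropy(logits, y_star)           # averages over the K positions
\end{verbatim}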
\subsection{Knowledge Distillation}
The Knowledge Distillation (KD) method <|cite_start|> (Reference: Distilling the Knowledge in a Neural Network: A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.) <|cite_end|> distills knowledge from a teacher network into a student network. Normally, the teacher network is assumed to have higher capacity.
A smaller student network can be trained to perform comparably or even better by mimicking the output distribution of the teacher network on the same data. This is usually done by minimizing the cross-entropy between the two distributions:
\begin{equation}
\begin{split}
\mathcal{L}_{\mathrm{KD}}(\theta, \theta_T) = -\frac{1}{K} & \sum_{k=1}^{K} \sum_{y_k \in \mathrm{V}_t} q(y_k | \mathbf{y}_{<k}, \mathbf{x}; \theta_T) \\
& \times \log p(y_k | \mathbf{y}_{<k}, \mathbf{x}; \theta),
\end{split}
\end{equation}
where $q$ denotes the output distribution of the teacher network and $\theta$ and $\theta_T$ denote the parameters of the student and teacher network, respectively.
The parameters of the teacher network are usually kept fixed during the KD process.
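For illustration, the word-level KD loss above can be computed directly from the two output distributions; in this sketch the teacher and student logits are random placeholders and the sizes are assumptions.
\begin{verbatim}
# Word-level knowledge distillation loss (assumed sizes, placeholder logits).
import torch
import torch.nn.functional as F

K, vocab_size = 7, 32000
student_logits = torch.randn(K, vocab_size)            # p(. | y_<k, x; theta)
teacher_logits = torch.randn(K, vocab_size).detach()   # q(. | y_<k, x; theta_T), kept fixed

q = F.softmax(teacher_logits, dim=-1)                  # teacher output distribution
log_p = F.log_softmax(student_logits, dim=-1)          # student log-probabilities
kd_loss = -(q * log_p).sum(dim=-1).mean()              # cross-entropy, averaged over the K positions
\end{verbatim}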
\begin{figure*}[t!]
\centering
\includegraphics[width=2.0\columnwidth]{method.png}
\caption{The whole training process of the proposed method.}
\label{fig:method}
\end{figure*} <|paper_end|> | [
"<|reference_start|> Recurrent continuous translation models: We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations. <|reference_end|>",
"<|reference_start|> Convolutional Sequence to Sequence Learning: The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU. <|reference_end|>",
"<|reference_start|> Fine-tuning for neural machine translation with limited degradation across in-and out-of-domain data: Neural machine translation is a recently proposed approach which has shown competitive results to traditional MT approaches. Similar to other neural network based methods, NMT also suffers from low performance for the domains with less available training data. Domain adaptation deals with improving performance of a model trained on large general domain data over test instances from a new domain. Fine-tuning is a fast and simple domain adaptation method which has demonstrated substantial improvements for various neural network based tasks including NMT. However, it suffers from drastic performance degradation on the general or source domain test sentences, which is undesirable in real-time applications. To address this problem of drastic degradation, in this paper, we propose two simple modifications to the fine-tuning approach, namely multi-objective learning and multi-output learning which are based on the “Knowledge distillation” framework. Experiments on English-German translations demonstrate that our approaches achieve results comparable to simple fine-tuning on the target domain task with comparatively little loss on the general domain task. <|reference_end|>",
"<|reference_start|> Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks: The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model. Previous work shows that the standard neural machine translation (NMT) model, trained on mixed-domain data, generally captures the general knowledge, but misses the domain-specific knowledge. In response to this problem, we augment NMT model with additional domain transformation networks to transform the general representations to domain-specific representations, which are subsequently fed to the NMT decoder. To guarantee the knowledge transformation, we also propose two complementary supervision signals by leveraging the power of knowledge distillation and adversarial learning. Experimental results on several language pairs, covering both balanced and unbalanced multi-domain translation, demonstrate the effectiveness and universality of the proposed approach. Encouragingly, the proposed unified model achieves comparable results with the fine-tuning approach that requires multiple models to preserve the particular knowledge. Further analyses reveal that the domain transformation networks successfully capture the domain-specific knowledge as expected. <|reference_end|>"
] | [
0,
4,
13,
14
] | {"<|multi_cite_1_1|>": "ss-996925", "<|multi_cite_1_2|>": "ss-1035297", "<|multi_cite_1_3|>": "arxiv-65933", "<|multi_cite_1_4|>": "arxiv-65503", "<|multi_cite_1_5|>": "arxiv-123607", "<|multi_cite_1_6|>": "arxiv-126595", "<|cite_2|>": "arxiv-208301", "<|cite_9|>": "arxiv-113026", "<|cite_3|>": "ss-1101324", "<|multi_cite_4_1|>": "ss-930141", "<|multi_cite_4_2|>": "ss-1285944", "<|multi_cite_4_3|>": "arxiv-130794", "<|multi_cite_4_4|>": "ss-1285945", "<|multi_cite_5_1|>": "ss-930141", "<|multi_cite_5_2|>": "arxiv-235740", "<|multi_cite_5_3|>": "arxiv-224422", "<|multi_cite_5_4|>": "arxiv-198811", "<|multi_cite_6_1|>": "arxiv-178820", "<|multi_cite_6_2|>": "arxiv-205420", "<|multi_cite_6_3|>": "arxiv-300768", "<|cite_7|>": "arxiv-126595", "<|cite_8|>": "arxiv-74282"} |
2312.04427 | <|paper_start|> Title: Spheroidal Molecular Communication via Diffusion: Signaling Between Homogeneous Cell Aggregates
Abstract: Spheroidal Molecular Communication via Diffusion: Signaling Between Homogeneous Cell Aggregates: Recent molecular communication (MC) research has integrated more detailed computational models to capture the dynamics of practical biophysical systems. This research focuses on developing realistic models for MC transceivers inspired by spheroids - three-dimensional cell aggregates commonly used in organ-on-chip experimental systems. Potential applications that can be used or modeled with spheroids include nutrient transport in an organ-on-chip system, the release of biomarkers or reception of drug molecules by a cancerous tumor site, or transceiver nanomachines participating in information exchange. In this paper, a simple diffusive MC system is considered where a spheroidal transmitter and receiver are in an unbounded fluid environment. These spheroidal antennas are modeled as porous media for diffusive signaling molecules, then their boundary conditions and effective diffusion coefficients are characterized. Further, for either a point source or spheroidal transmitter, Green's function for concentration (GFC) outside and inside the receiving spheroid is analytically derived and formulated in terms of an infinite series and confirmed by a particle-based simulator (PBS). The provided GFCs enable computation of the transmitted and received signals in the spheroidal communication system. This study shows that the porous structure of the receiving spheroid amplifies diffusion signals but also disperses them, thus there is a trade-off between porosity and information transmission rate. Also, the results reveal that the porous arrangement of the transmitting spheroid not only disperses the received signal but also attenuates it. System performance is also evaluated in terms of bit error rate (BER). Decreasing the porosity of the receiving spheroid is shown to enhance system performance. Conversely, reducing the porosity of the transmitting spheroid can adversely affect system performance.
Introduction
Molecular communication (MC) is a bio-inspired mechanism that is envisioned to realize micro- and nano-scale communication systems using molecules as information carriers <|cite_start|> (Reference: A Comprehensive Survey of Recent Advancements in Molecular Communication: With much advancement in the field of nanotechnology, bioengineering and synthetic biology over the past decade, microscales and nanoscales devices are becoming a reality. Yet the problem of engineering a reliable communication system between tiny devices is still an open problem. At the same time, despite the prevalence of radio communication, there are still areas where traditional electromagnetic waves find it difficult or expensive to reach. Points of interest in industry, cities, and medical applications often lie in embedded and entrenched areas, accessible only by ventricles at scales too small for conventional radio waves and microwaves, or they are located in such a way that directional high frequency systems are ineffective. Inspired by nature, one solution to these problems is molecular communication (MC), where chemical signals are used to transfer information. Although biologists have studied MC for decades, it has only been researched for roughly 10 year from a communication engineering lens. Significant number of papers have been published to date, but owing to the need for interdisciplinary work, much of the results are preliminary. In this paper, the recent advancements in the field of MC engineering are highlighted. First, the biological, chemical, and physical processes used by an MC system are discussed. This includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms. Then, a comprehensive survey of some of the recent works on MC through a communication engineering lens is provided. The paper ends with a technology readiness analysis of MC and future research directions.) <|cite_end|>.
Despite many efforts by the MC community to model the components of MC systems, more realistic models are required. Existing literature over-simplifies the MC components in a biological environment and does not sufficiently account for the interactions of signaling molecules with the environment and the biological or biosynthetic transmitters and receivers (i.e., cells). Elements and structures of \textit{in vitro} environments such as organs-on-chip could be used to improve model realism, to potentially contribute to MC research and development, and to provide mechanistic insight into the biology of the organs (on the chip). For example, the transmitter could be a group of beta-cells emitting insulin, and the receiver could be a group of liver cells that detect the insulin signal and react by increasing its uptake of glucose. In the following, we review models for MC transmitters and receivers.
\textit{Transmitters:} The most common transmitter model is to represent it as an ideal point source that releases molecules instantaneously, disregarding physical geometry and realistic release mechanisms <|cite_start|> (Reference: Channel modeling for diffusive molecular communication—a tutorial review: Molecular communication (MC) is a new communication engineering paradigm where molecules are employed as information carriers. MC systems are expected to enable new revolutionary applications, such as sensing of target substances in biotechnology, smart drug delivery in medicine, and monitoring of oil pipelines or chemical reactors in industrial settings. As for any other kind of communication, simple yet sufficiently accurate channel models are needed for the design, analysis, and efficient operation of MC systems. In this paper, we provide a tutorial review on mathematical channel modeling for diffusive MC systems. The considered end-to-end MC channel models incorporate the effects of the release mechanism, the MC environment, and the reception mechanism on the observed information molecules. Thereby, the various existing models for the different components of an MC system are presented under a common framework and the underlying biological, chemical, and physical phenomena are discussed. Deterministic models characterizing the expected number of molecules observed at the receiver and statistical models characterizing the actual number of observed molecules are developed. In addition, we provide the channel models for time-varying MC systems with moving transmitters and receivers, which are relevant for advanced applications such as smart drug delivery with mobile nanomachines. For complex scenarios, where simple MC channel models cannot be obtained from first principles, we investigate the simulation- and experiment-driven channel models. Finally, we provide a detailed discussion of potential challenges, open research problems, and future directions in channel modeling for diffusive MC systems.) <|cite_end|>. Paper <|cite_start|> (Reference: Evaluation of EM absorption loss over breast mass for breast cancer diagnosis: This paper presents electromagnetic (EM) absorption loss over breast mass as a new approach in the detection of breast cancer tumors. A linear curve of absorption loss over breast mass is used to establish acceptable normal absorption loss values. Since tumor-infected breast tissues should have higher absorption loss than normal breast tissues, the measured absorption loss of a tumor-infected breast will be higher than the established normal absorption loss value, and the breast will be diagnosed as infected. EM simulations of normal and infected breast tissues are run at 915, 2450, and 4000 MHz. Results show that 915 MHz presents the best linear curve fit and resolution. Also, the absorption loss for an infected breast, at 915 MHz, is higher than the absorption loss for a normal breast and is least affected by tumor location.) <|cite_end|> pioneered the concept of a pulse-shaped release from a point source transmitter, in which multiple molecules can be released during the pulse. Alternatively, the volume transmitter model proposed in <|cite_start|> (Reference: Channel Impulse Responses in Diffusive Molecular Communication with Spherical Transmitters: Molecular communication is an emerging paradigm for systems that rely on the release of molecules as information carriers. 
Communication via molecular diffusion is a popular strategy that is ubiquitous in nature and very fast over distances on the order of a micron or less. Existing closed-form analysis of the diffusion channel impulse response generally assumes that the transmitter is a point source. In this paper, channel impulse responses are derived for spherical transmitters with either a passive or absorbing receiver. The derived channel impulse responses are in closed-form for a one-dimensional environment and can be found via numerical integration for a three-dimensional environment. The point transmitter assumption (PTA) is formally defined so that its accuracy can be measured in comparison to the derived spherical transmitter impulse responses. The spherical transmitter model is much more accurate than the PTA when the distance between a transmitter and its receiver is small relative to the size of the transmitter. The derived results are verified via microscopic particle-based simulations using the molecular communication simulation platform AcCoRD (Actor-based Communication via Reaction-Diffusion). A spherical transmitter variation where molecules are released from the surface of a solid sphere is also considered via simulation.) <|cite_end|> has incorporated geometric properties of the transmitter and molecules, although the model did not consider a distinct transmitter boundary, leading to an equal concentration inside and outside the transmitter's boundary. Paper <|cite_start|> (Reference: A Physical End-to-End Model for Molecular Communication in Nanonetworks: Molecular communication is a promising paradigm for nanoscale networks. The end-to-end (including the channel) models developed for classical wireless communication networks need to undergo a profound revision so that they can be applied for nanonetworks. Consequently, there is a need to develop new end-to-end (including the channel) models which can give new insights into the design of these nanoscale networks. The objective of this paper is to introduce a new physical end-to-end (including the channel) model for molecular communication. The new model is investigated by means of three modules, i.e., the transmitter, the signal propagation and the receiver. Each module is related to a specific process involving particle exchanges, namely, particle emission, particle diffusion and particle reception. The particle emission process involves the increase or decrease of the particle concentration rate in the environment according to a modulating input signal. The particle diffusion provides the propagation of particles from the transmitter to the receiver by means of the physics laws underlying particle diffusion in the space. The particle reception process is identified by the sensing of the particle concentration value at the receiver location. Numerical results are provided for three modules, as well as for the overall end-to-end model, in terms of normalized gain and delay as functions of the input frequency and of the transmission range.) <|cite_end|> studied a box-like transmitter with a surface outlet for controlling molecular release to establish desired concentration gradients. Furthermore, <|cite_start|> (Reference: Diffusion-controlled interface kinetics-inclusive system-theoretic propagation models for molecular communication systems: ) <|cite_end|> assumed a spherical structure with nanopores for molecule passage. 
The model in <|cite_start|> (Reference: Ion Channel Based Bio-Synthetic Modulator for Diffusive Molecular Communication: In diffusion-based molecular communication (DMC), a transmitter nanomachine is responsible for signal modulation. Thereby, the transmitter has to be able to control the release of the signaling molecules employed for representing the transmitted information. In nature, an important class of control mechanisms for releasing molecules from cells utilizes ion channels which are pore-forming proteins across the cell membrane. The opening and closing of the ion channels is controlled by a gating parameter. In this paper, an ion channel based modulator for DMC is proposed which controls the rate of molecule release from the transmitter by modulating a gating parameter signal. Exploiting the capabilities of the proposed modulator, an on-off keying modulation technique is introduced and the corresponding average modulated signal, i.e., the average release rate of the molecules from the transmitter, is analyzed. However, since the modulated signal is random in nature, it may deviate from its average. Therefore, the concept of modulator noise is introduced and the statistics of the modulated signal are investigated. Finally, by assuming a simple transparent receiver, the performance of the proposed on-off keying modulation format is studied. The derived analytical expressions for the average modulated signal are confirmed with particle based simulations. Our numerical results reveal that performance estimates of DMC systems obtained based on the assumption of instantaneous molecule release at the transmitter may substantially deviate from the performance achieved with practical modulators.) <|cite_end|> incorporated ion channels on the surface of a cell transmitter sensitive to electrical voltage or ligand concentration variations for the release of molecules. Storage and production dynamics inside the transmitter have also been explored in the literature, ranging from instantaneous <|cite_start|> (Reference: Diffusion-based molecular communication with limited molecule production rate: This paper studies the impact of a transmitter’s molecule generation process on the capacity of a concentration-based molecular communication (MC) system. Constraints caused by the molecule generation process affect the availability of the molecules at the transmitter. The transmitter has a storage of molecules, and should decide whether to release or save the currently produced molecules. As a result, the MC system has conceptual connections with energy harvesting systems. In this paper, we consider two scenarios on the propagation channel. The first scenario assumes a channel with no inter-symbol interference (ISI), i.e., a memoryless channel. We derive bounds on the capacity of the MC system in this scenario. The second scenario assumes an MC channel with ISI, in which the output of the channel depends on the history of released molecules in the previous time-slots. Based on the assumptions that either the transmitter or the receiver knows the channel statistics, we compute a lower bound on the channel capacity.) <|cite_end|> to pulse function release <|cite_start|> (Reference: Adaptive Release Duration Modulation for Limited Molecule Production and Storage: The nature of molecular transmitter imposes some limitations on the molecule production process and its storage. 
As the molecules act the role of the information carriers, the limitations affect the transmission process and the system performance considerably. In this paper, we focus on the transmitter's limitations, in particular, the limited molecule production rate and the finite storage capacity. We consider a time-slotted communication where the transmitter opens its outlets and releases the stored molecules for a specific time duration to send bit "1" and remains silent to send bit "0". By changing the release duration, we propose an adaptive release duration modulation. The objective is to find the optimal transmission release duration to minimize the probability of error. We characterize the properties of the optimal release duration and use it to derive upper and lower bounds on the system performance. We see that the proposed modulation scheme improves the performance.) <|cite_end|> and (inspired by neuronal behavior) exponential release based on the number of molecules stored, as proposed in <|cite_start|> (Reference: Molecular communication transmitter design in limited-capacity storage regime: The limited storage capacity at the transmitters of a molecular communication (MC) system can affect the system’s performance. One of the reasons for this limitation is the size restriction of the transmitter, which the storage must be replenished so that the transmitter has enough molecules for future transmission. This paper proposes a biologically inspired transmitter model based on neurons for MC whose storage charging and discharging follow differential equations. The proposed transmitter opens its outlet for a specific time in each time frame to exponentially release a portion of stored molecules to code bit-1 and remains silent to code bit-0. We analyze our model based on different transmission parameters. These parameters are the symbol duration, the release time duration, the storage capacity, and the release and replenishment rate of the storage. We find that the storage outlet must be open for a certain period within the time slot duration in order to improve the performance of the proposed system. Additionally, we demonstrate that determining the effect of storage capacity size can be important for practical MC due to the significant differences between the ideal transmitter and the proposed one, which have a limited size. We show that increases in the transmitter storage size can improve the system performance. As a result, taking a closer look at these practical transmitters is essential to solving the problems and challenges of molecular communication systems.) <|cite_end|>.
\textit{Receivers:} The passive receiver, in which information molecules freely diffuse in the receiver's space and the movement of molecules is not affected, is the most common model used in previous works <|cite_start|> (Reference: Channel modeling for diffusive molecular communication—a tutorial review: Molecular communication (MC) is a new communication engineering paradigm where molecules are employed as information carriers. MC systems are expected to enable new revolutionary applications, such as sensing of target substances in biotechnology, smart drug delivery in medicine, and monitoring of oil pipelines or chemical reactors in industrial settings. As for any other kind of communication, simple yet sufficiently accurate channel models are needed for the design, analysis, and efficient operation of MC systems. In this paper, we provide a tutorial review on mathematical channel modeling for diffusive MC systems. The considered end-to-end MC channel models incorporate the effects of the release mechanism, the MC environment, and the reception mechanism on the observed information molecules. Thereby, the various existing models for the different components of an MC system are presented under a common framework and the underlying biological, chemical, and physical phenomena are discussed. Deterministic models characterizing the expected number of molecules observed at the receiver and statistical models characterizing the actual number of observed molecules are developed. In addition, we provide the channel models for time-varying MC systems with moving transmitters and receivers, which are relevant for advanced applications such as smart drug delivery with mobile nanomachines. For complex scenarios, where simple MC channel models cannot be obtained from first principles, we investigate the simulation- and experiment-driven channel models. Finally, we provide a detailed discussion of potential challenges, open research problems, and future directions in channel modeling for diffusive MC systems.) <|cite_end|>. Such a simple model is often used to facilitate analysis of other aspects of an MC system, e.g., the environment boundary <|cite_start|> (Reference: Diffusive molecular communication in a biological spherical environment with partially absorbing boundary: Diffusive molecular communication (DMC) is envisioned as a promising approach to help realize healthcare applications within bounded biological environments. In this paper, a DMC system within a biological spherical environment (BSE) is considered, inspired by bounded biological sphere-like structures throughout the body. As a biological environment, it is assumed that the inner surface of the sphere’s boundary is fully covered by biological receptors that may irreversibly react with hitting molecules. Moreover, information molecules diffusing in the sphere may undergo a degradation reaction and be transformed to another molecule type. Concentration Green’s function (CGF) of diffusion inside this environment is analytically obtained in terms of a convergent infinite series. By employing the obtained CGF, the information channel between transmitter and transparent receiver of DMC in this environment is characterized. Interestingly, it is revealed that the information channel is reciprocal, i.e., interchanging the position of receiver and transmitter does not change the information channel. 
Results indicate that the conventional simplifying assumption that the environment is unbounded may lead to an inaccurate characterization in such biological environments.) <|cite_end|>.
The passive model may be relevant for small and hydrophobic molecules (which are repelled from water molecules) that easily pass through a cell membrane.
However, most extracellular molecules are too large or too hydrophilic to traverse the cell membrane <|cite_start|> (Reference: A Survey of Molecular Communication in Cell Biology: Establishing a New Hierarchy for Interdisciplinary Applications: Molecular communication (MC) engineering is inspired by the use of chemical signals as information carriers in cell biology. The biological nature of chemical signaling makes MC a promising methodology for interdisciplinary applications requiring communication between cells and other microscale devices. However, since the life sciences and communications engineering fields have distinct approaches to formulating and solving research problems, the mismatch between them can hinder the translation of research results and impede the development and implementation of interdisciplinary solutions. To bridge this gap, this survey proposes a novel communication hierarchy for MC signaling in cell biology and maps phenomena, contributions, and problems to the hierarchy. The hierarchy includes: 1) the Physical Signal Propagation level; 2) the Physical and Chemical Signal Interaction level; 3) the Signal-Data Interface level; 4) the Local Data Abstraction level; and 5) the Application level. To further demonstrate the proposed hierarchy, it is applied to case studies on quorum sensing, neuronal signaling, and communication via DNA. Finally, several open problems are identified for each level and the integration of multiple levels. The proposed hierarchy provides language for communication engineers to study and interface with biological systems, and also helps biologists to understand how communications engineering concepts can be exploited to interpret, control, and manipulate signaling in cell biology.) <|cite_end|> and also cells usually react with signaling molecules (either directly on the surface or in the intracellular environment).
To address the limitations of the passive receiver model, some works have considered a reception mechanism across the receiver (cell) membrane to activate an internal signaling pathway (i.e., a series of chemical reactions controlling a cell function). These works, including <|cite_start|> (Reference: 3-D Diffusive Molecular Communication with Two Fully-Absorbing Receivers: Hitting Probability and Performance Analysis: Exact analytical channel models for molecular communication via diffusion (MCvD) systems involving multiple fully absorbing receivers (FARs) in a three-dimensional (3- D) medium are hard to obtain due to the mathematical intractability of corresponding diffusion equations. This work, therefore, consider an MCvD system with two spherical FARs in a 3-D diffusion-limited medium and develop several insights using an approximate analytical expression for the hitting probability of information molecule (IM). Further, based on the hitting probability, a novel approximate closed-form analytical expression for the area under the receiver operating characteristic curve (AUC) is derived to analyze the detection performance at each FAR in the presence of other FAR. Finally, simulation results are presented to validate the analytical results using the particle-based and Monte-Carlo simulations and to yield important insights into the MCvD system performance with two FARs) <|cite_end|> <|cite_start|> (Reference: Noise analysis in ligand-binding reception for molecular communication in nanonetworks: Molecular communication (MC) will enable the exchange of information among nanoscale devices. In this novel bio-inspired communication paradigm, molecules are employed to encode, transmit and receive information. In the most general case, these molecules are propagated in the medium by means of free diffusion. An information theoretical analysis of diffusion-based MC is required to better understand the potential of this novel communication mechanism. The study and the modeling of the noise sources is of utmost importance for this analysis. The objective of this paper is to provide a mathematical study of the noise at the reception of the molecular information in a diffusion-based MC system when the ligand-binding reception is employed. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors, where the reception process is realized through ligand-binding chemical receptors. The reception noise is modeled in this paper by following two different approaches, namely, through the ligand-receptor kinetics and through the stochastic chemical kinetics. The ligand-receptor kinetics allows to simulate the random perturbations in the chemical processes of the reception, while the stochastic chemical kinetics provides the tools to derive a closed-form solution to the modeling of the reception noise. The ligand-receptor kinetics model is expressed through a block scheme, while the stochastic chemical kinetics results in the characterization of the reception noise using stochastic differential equations. Numerical results are provided to demonstrate that the analytical formulation of the reception noise in terms of stochastic chemical kinetics is compliant with the reception noise behavior resulting from the ligand-receptor kinetics simulations.) 
<|cite_end|> <|cite_start|> (Reference: Effect of Receptor Density and Size on Signal Reception in Molecular Communication via Diffusion with an Absorbing Receiver: The performance of molecular communication is significantly impacted by the reception process of the messenger molecules. The receptors' size and density, however, have yet to be investigated. In this letter, we analyze the effect of receptor density and size on the signal reception of an absorbing receiver with receptors. The results show that, when the total receptor area is the same, better hitting probability is achieved by using a higher number of relatively small receptors. In addition, deploying receptors, which cover a small percentage of the receiver surface, is able to create an effective communication channel that has a detectable signal level.) <|cite_end|> <|cite_start|> (Reference: Impact of receiver reaction mechanisms on the performance of molecular communication networks: In a molecular communication network, transmitters and receivers communicate by using signalling molecules. At the receivers, the signalling molecules react, via a chain of chemical reactions, to produce output molecules. The counts of output molecules over time is considered to be the output signal of the receiver. This output signal is used to detect the presence of signalling molecules at the receiver. The output signal is noisy due to the stochastic nature of diffusion and chemical reactions. The aim of this paper is to characterise the properties of the output signals for two types of receivers, which are based on two different types of reaction mechanisms. We derive analytical expressions for the mean, variance and frequency properties of these two types of receivers. These expressions allow us to study the properties of these two types of receivers. In addition, our model allows us to study the effect of the diffusibility of the receiver membrane on the performance of the receivers.) <|cite_end|> <|cite_start|> (Reference: Saturating Receiver and Receptor Competition in Synaptic DMC: Deterministic and Statistical Signal Models: Synaptic communication is based on a biological Molecular Communication (MC) system which may serve as a blueprint for the design of synthetic MC systems. However, the physical modeling of synaptic MC is complicated by the possible saturation of the molecular receiver caused by the competition of neurotransmitters (NTs) for postsynaptic receptors. Receiver saturation renders the system behavior nonlinear in the number of released NTs and is commonly neglected in existing analytical models. Furthermore, due to the ligands' competition for receptors (and vice versa), the individual binding events at the molecular receiver are in general statistically dependent and the binomial model for the statistics of the received signal does not apply. In this work, we propose a novel deterministic model for receptor saturation in terms of a state-space description based on an eigenfunction expansion of Fick's diffusion equation. The presented solution is numerically stable and computationally efficient. Employing the proposed deterministic model, we show that saturation at the molecular receiver reduces the peak-value of the expected received signal and accelerates the clearance of NTs as compared to the case when receptor occupancy is neglected. 
We further derive a statistical model for the received signal in terms of the hypergeometric distribution which accounts for the competition of NTs for receptors and the competition of receptors for NTs. The proposed statistical model reveals how the signal statistics are shaped by the number of released NTs, the number of receptors, and the binding kinetics of the receptors, respectively, in the presence of competition. We show that the impact of these parameters on the signal variance depends on the relative numbers of NTs and receptors. The accuracy of the proposed deterministic and statistical models is verified by particle-based computer simulations.) <|cite_end|> <|cite_start|> (Reference: Reception modeling of sphere-to-sphere molecular communication via diffusion: ) <|cite_end|>, have studied the effects of various reaction mechanisms across the membrane, \textcolor{black}{including cell membrane receptors that can vary in size, number, and spatial distribution.}
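To make the notion of membrane reception concrete, a standard first-order description of ligand-receptor binding (a generic textbook form, shown here purely for illustration and not taken from any of the cited works) is
\begin{equation}
\frac{\mathrm{d}C(t)}{\mathrm{d}t} = k_{\mathrm{on}}\, L(t)\,\bigl(R_{\mathrm{tot}} - C(t)\bigr) - k_{\mathrm{off}}\, C(t),
\end{equation}
where $L(t)$ is the local concentration of information molecules at the receiver surface, $R_{\mathrm{tot}}$ is the total number of receptors, $C(t)$ is the number of occupied receptors, and $k_{\mathrm{on}}$ and $k_{\mathrm{off}}$ are the binding and unbinding rates. The cited works refine such descriptions with, among others, receptor geometry, receptor competition, and saturation effects.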
The works discussed in the preceding paragraphs have theoretically modeled and simulated the transmitter as a single cell (or machine) that releases molecules through different release mechanisms, and the receiver as a single cell that detects these molecules.
However, cells do not normally live in isolation but in populations together with other cells of the same or of different types. This holds \textit{in vivo} but also in many \textit{in vitro} systems. In particular, tissues and tumors in multi-cellular organisms and biofilms of microorganisms are common natural instances, whereas spheroids, organoids, tumoroids, and cell islets are well-known instances in biological experimental setups. This inspires the design of MC transceivers based on a population of (biological or biosynthetic) cells.
One realistic transceiver for MC is a spheroid structure, which is a 3D cell aggregation \textcolor{black}{of thousands of cells in a spherical shape (e.g., 24000 liver hepatocyte cells in <|cite_start|> (Reference: Functional coupling of human pancreatic islets and liver spheroids on-a-chip: Towards a novel human ex vivo type 2 diabetes model: ) <|cite_end|>)} that is widely used in Organ-on-Chip (OoC) systems.
These \textit{micro-physiological} systems have the promising capability to emulate natural organ-to-organ communication dynamics. For example, a liver-pancreas OoC model with organ-to-organ communication was demonstrated in <|cite_start|> (Reference: Functional coupling of human pancreatic islets and liver spheroids on-a-chip: Towards a novel human ex vivo type 2 diabetes model: ) <|cite_end|>, with the purpose of providing a distinctive platform for replicating type 2 diabetes mellitus. In this system, glucose is controlled through communication between liver spheroids and pancreatic islets. The pancreatic islets release insulin under high glucose conditions, which increases glucose uptake by the liver spheroids and reduces the glucose release from them.
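The feedback loop described above can be summarized, purely conceptually and not as the model used in the cited work, by a pair of coupled rate equations,
\begin{equation}
\frac{\mathrm{d}G}{\mathrm{d}t} = r_{\mathrm{liver}}(G, I) - u_{\mathrm{liver}}(G, I), \qquad \frac{\mathrm{d}I}{\mathrm{d}t} = s_{\mathrm{islet}}(G) - \delta I,
\end{equation}
where insulin secretion by the islets, $s_{\mathrm{islet}}(G)$, increases with the glucose level $G$, and a higher insulin level $I$ increases hepatic glucose uptake $u_{\mathrm{liver}}$ while decreasing hepatic glucose release $r_{\mathrm{liver}}$, thereby closing a negative feedback loop between the two cell populations ($\delta$ denotes insulin clearance).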
The liver-pancreas OoC inspires biosynthetic MC systems for healthcare applications. A first step toward realizing such an MC system is to model and analyze the communication dynamics using information and communication models and tools, which in turn requires the biophysical modeling of an individual spheroid.
Several studies have attempted to model the process of mass transport in a spheroidal environment.
The authors of <|cite_start|> (Reference: Modeling the uptake of fluorescent molecules into 3D cellular spheroids: Three mathematical models were developed to analyze the dynamics of fluorescent dyes penetration into 3D cellular spheroids. Two fluorescent dyes were chosen to verify mathematical models: rhodamine 6G (R6G) as a small molecule, which can freely penetrate through the cells, and wheat germ agglutinin (WGA) conjugated with Alexa488 fluorescent label, which reacts with the cells plasma membrane, and its cellular penetration is significantly lower. Dye penetration and binding to cells were modeled with nonlinear diffusion–reaction equations. System of differential equations was solved using numerical methods, and good correspondence with physical experiment was shown. Diffusion coefficients in extracellular matrix were determined for both fluorescent dyes, and the influence of reactions parameters to WGA penetration was analyzed. Dynamics of dyes accumulation into cell spheroids were also determined.) <|cite_end|> modeled the penetration and diffusion of dyes within multicellular spheroids using diffusion-reaction equations. They assumed constant concentrations of molecules at the inside and outside of the outer spheroid's boundary. Paper <|cite_start|> (Reference: Multiscale modelling of drug transport and metabolism in liver spheroids: In early preclinical drug development, potential candidates are tested in the laboratory using isolated cells. These in vitro experiments traditionally involve cells cultured in a two-dimensional monolayer environment. However, cells cultured in three-dimensional spheroid systems have been shown to more closely resemble the functionality and morphology of cells in vivo. While the increasing usage of hepatic spheroid cultures allows for more relevant experimentation in a more realistic biological environment, the underlying physical processes of drug transport, uptake and metabolism contributing to the spatial distribution of drugs in these spheroids remain poorly understood. The development of a multiscale mathematical modelling framework describing the spatio-temporal dynamics of drugs in multicellular environments enables mechanistic insight into the behaviour of these systems. Here, our analysis of cell membrane permeation and porosity throughout the spheroid reveals the impact of these properties on drug penetration, with maximal disparity between zonal metabolism rates occurring for drugs of intermediate lipophilicity. Our research shows how mathematical models can be used to simulate the activity and transport of drugs in hepatic spheroids and in principle any organoid, with the ultimate aim of better informing experimentalists on how to regulate dosing and culture conditions to more effectively optimize drug delivery.) <|cite_end|> focused on mathematically modeling the spatiotemporal dynamics of drugs in spheroids. The authors investigated how drug characteristics impact the propagation of the drug inside a spheroid.
As a first boundary condition at the border of the spheroid, they assumed flow continuity, and as the second boundary condition, they assumed that the concentrations on either side of the outer spherical boundary were equal.
The authors of <|cite_start|> (Reference: Mathematical modelling reveals cellular dynamics within tumour spheroids: Tumour spheroids are widely used as an in vitro assay for characterising the dynamics and response to treatment of different cancer cell lines. Their popularity is largely due to the reproducible manner in which spheroids grow: the diffusion of nutrients and oxygen from the surrounding culture medium, and their consumption by tumour cells, causes proliferation to be localised at the spheroid boundary. As the spheroid grows, cells at the spheroid centre may become hypoxic and die, forming a necrotic core. The pressure created by the localisation of tumour cell proliferation and death generates an cellular flow of tumour cells from the spheroid rim towards its core. Experiments by Dorie et al. showed that this flow causes inert microspheres to infiltrate into tumour spheroids via advection from the spheroid surface, by adding microbeads to the surface of tumour spheroids and observing the distribution over time. We use an off-lattice hybrid agent-based model to re-assess these experiments and establish the extent to which the spatio-temporal data generated by microspheres can be used to infer kinetic parameters associated with the tumour spheroids that they infiltrate. Variation in these parameters, such as the rate of tumour cell proliferation or sensitivity to hypoxia, can produce spheroids with similar bulk growth dynamics but differing internal compositions (the proportion of the tumour which is proliferating, hypoxic/quiescent and necrotic/nutrient-deficient). We use this model to show that the types of experiment conducted by Dorie et al. could be used to infer spheroid composition and parameters associated with tumour cell lines such as their sensitivity to hypoxia or average rate of proliferation, and note that these observations cannot be conducted within previous continuum models of microbead infiltration into tumour spheroids as they rely on resolving the trajectories of individual microbeads.) <|cite_end|> developed an agent-based mathematical model to simulate the passive distribution of microbeads into tumor spheroids while considering the spheroid's growth through externally-supplied oxygen. In their approach, they assumed a constant oxygen concentration at both sides of the spheroid's outer boundary.
Paper <|cite_start|> (Reference: Theoretical analysis of antibody targeting of tumor spheroids: importance of dosage for penetration, and affinity for retention: The interplay among antibody/antigen binding kinetics, antibody diffusion, and antigen metabolic turnover together determines the depth of penetration of antitumor antibodies into prevascular tumor spheroid cell clumps. A sharp boundary between an outer shell of bound high-affinity antibody and an inner antibody-free core has been previously observed and mathematically modeled and was termed the "binding site barrier." We show here that this process is well described by a simplified shrinking core model wherein binding equilibration is much more rapid than diffusion. This analysis provides the following experimentally testable predictions: (a) the binding site barrier is a moving boundary whose velocity is proportional to the time integral of antibody concentration at the spheroid surface (i.e. plasma antibody AUC); (b) the velocity of this moving boundary is independent of binding affinity, if the affinity is sufficiently high to strongly favor antibody/antigen complex formation at prevailing antibody concentrations; and (c) maximum tumor retention is achieved when the antibody/antigen dissociation rate approaches the rate of antigen metabolic turnover. The consistency of these predictions with published experimental results is demonstrated. The shrinking core model provides a simple analytic relationship predicting the effects of altered antibody pharmacokinetics, antibody molecular weight, antigen turnover rate, antigen expression level, and micrometastasis size on antibody penetration and retention. For example, a formula is provided for predicting the bolus dose necessary to accomplish tumor saturation as a function of antibody and tumor properties. Furthermore, this analysis indicates certain attributes necessary for an optimal tumor targeting agent.) <|cite_end|> proposed a model to investigate how factors including the association and dissociation rates, degradation rate, and plasma clearance rate influence the depth of antibody penetration in tumor spheroids. As a boundary condition, they considered the concentration of unbound antibodies inside the spheroid to be equal to that outside multiplied by \textcolor{black}{the porosity parameter or the fraction of tumor volume accessible to the antibody.}
The authors of <|cite_start|> (Reference: Spatio-temporal modeling of nanoparticle delivery to multicellular tumor spheroids: The inefficiency of nanoparticle penetration in tissues limits the therapeutic efficacy of such formulations for cancer applications. Recent work has indicated that modulation of tissue architecture with enzymes such as collagenase significantly increases macromolecule delivery. In this study we developed a mathematical model of nanoparticle penetration into multicellular spheroids that accounts for radially dependent changes in tumor architecture, as represented by the volume fraction of tissue accessible to nanoparticle diffusion. Parameters such as nanoparticle binding, internalization rate constants, and accessible volume fraction were determined experimentally. Unknown parameters of nanoparticle binding sites per cell in the spheroid and pore shape factor were determined by fitting to experimental data. The model was correlated with experimental studies of the penetration of 40 nm nanoparticles in SiHa multicellular spheroids with and without collagenase treatment and was able to accurately predict concentration profiles of nanoparticles within spheroids. The model was also used to investigate the effects of nanoparticle size. This model contributes toward the understanding of the role of tumor architecture on nanoparticle delivery efficiency. Biotechnol. Bioeng. 2008;101: 388–399. © 2008 Wiley Periodicals, Inc.) <|cite_end|> developed a mathematical model to examine the diffusion of nanoparticles into tumor spheroids that considers the structural nonuniformity of the spheroid in the radial direction. They applied a boundary condition that determines the concentration inside the spheroid by multiplying the concentration outside with the radially-dependent spheroid porosity. In <|cite_start|> (Reference: A method for estimating the oxygen consumption rate in multicellular tumour spheroids: Hypoxia occurs when oxygen levels within a tissue drop below normal physiological levels. In tumours, hypoxia is associated with poor prognosis, increased likelihood of metastasis and resistance to therapy. Imaging techniques, for example, positron emission tomography, are increasingly used in the monitoring of tumour hypoxia and have the potential to help in the planning of radiotherapy. For this application, improved understanding of the link between image contrast and quantitative underlying oxygen distribution would be very useful. Mathematical models of tissue hypoxia and image formation can help understand this. Hypoxia is caused by an imbalance between vascular supply and tissue demand. While much work has been dedicated to the quantitative description of tumour vascular networks, consideration of tumour oxygen consumption is largely neglected. Oxidative respiration in standard two-dimensional cell culture has been widely studied. However, two-dimensional culture fails to capture the complexities of growing three-dimensional tissue which could impact on the oxygen usage. In this study, we build on previous descriptions of oxygen consumption and diffusion in three-dimensional tumour spheroids and present a method for estimating rates of oxygen consumption from spheroids, validated using stained spheroid sections. Methods for estimating the local partial pressure of oxygen, the diffusion limit and the extents of the necrotic core, hypoxic region and proliferating rim are also derived. 
These are validated using experimental data from DLD1 spheroids at different stages of growth. A relatively constant experimentally derived diffusion limit of 232 ± 22 μm and an O2 consumption rate of 7.29 ± 1.4 × 10−7 m3 kg−1 s−1 for the spheroids studied was measured, in agreement with laboratory measurements.) <|cite_end|>, the researchers presented a mathematical model to predict oxygen diffusion and consumption in tumor spheroids. They employed an analytical solution based on the spherical reaction-diffusion equation with a constant concentration at the tumor boundary, which is not dependent on the surrounding concentration.
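The models surveyed above share a common mathematical core. Purely as an illustration (the exact reaction terms, parameters, and geometries differ per study), transport of a species with concentration $c(r,t)$ inside a spherically symmetric spheroid of radius $R$ is typically described by a reaction-diffusion equation of the form
\begin{equation}
\frac{\partial c}{\partial t} = \frac{D_{\mathrm{in}}}{r^{2}} \frac{\partial}{\partial r}\!\left(r^{2} \frac{\partial c}{\partial r}\right) - q(c),
\end{equation}
where $D_{\mathrm{in}}$ is the effective diffusion coefficient inside the spheroid and $q(c)$ models consumption, binding, or metabolism. The studies mainly differ in the boundary condition imposed at $r = R$: continuity of the diffusive flux, equality of the concentrations on both sides of the boundary, a porosity-weighted relation such as $c(R^{-}) = \varepsilon\, c(R^{+})$, or a fixed boundary concentration $c(R) = c_{0}$.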
Furthermore, in our recent study, we mathematically proved that the amplification is indeed present at the boundary and derived the corresponding amplification factor. The theorem was investigated and validated through experiments using liver spheroids and glucose as the target molecule. Interestingly, a rapid decrease in the concentration of molecules in the surrounding medium was observed when a spheroid was introduced into a culture medium with a higher glucose concentration.
In this work, our focus is to model communication between two spheroidal structures, inspired by the cross-talk phenomenon studied in <|cite_start|> (Reference: Functional coupling of human pancreatic islets and liver spheroids on-a-chip: Towards a novel human ex vivo type 2 diabetes model: ) <|cite_end|>. Due to the unique structure of spheroids, this research highlights their potential application in synthetic biology as \textit{spheroidal antennas} for transmitting and receiving information.
The ability of spheroids to release molecules in a controlled manner enables them to function as transmitters. The porous spheroidal structure also increases the received power of the diffusion signal.
In this study, we introduce an end-to-end diffusive MC system using spheroids as both transmitter and receiver. The contributions of this paper are summarized as follows:
\begin{itemize}
\item We propose a novel spheroid-to-spheroid (S2S) communication system in an unbounded environment.
\item We formulate the molecule release function from the border of a transmitting multicell spheroid as a boundary value problem and derive the corresponding Green's function for concentration (GFC).
\item As we did for the impulsive point source transmitter in <|cite_start|> (Reference: Proceedings of IEEE International Conference on Communications, ICC 2013, Budapest, Hungary, June 9-13, 2013: ) <|cite_end|>, the joint communication channel and spheroidal receiver's response are also modeled as a boundary value problem that accounts for the amplification at the boundary.
\item We validate our results with particle-based simulations (PBS), confirming the accuracy and reliability of our model.
\item We study the geometric effects of the transmitting and receiving spheroids on the molecule concentration inside the receiving spheroid.
\item We evaluate the system's bit error rate (BER) in the presence of inter-symbol interference (ISI) with different time slot durations and spheroid porosity levels.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{SM} describes the spheroid structure and diffusive MC system using the spheroid. In Section \ref{GFB}, the spheroid Green's function is provided. The details regarding the characterization of the diffusive MC channel are presented in Section \ref{S2S}. Results and discussions are outlined in Section \ref{results}. Finally, Section \ref{Conclusion} concludes the paper. <|paper_end|> | [
"<|reference_start|> A Physical End-to-End Model for Molecular Communication in Nanonetworks: Molecular communication is a promising paradigm for nanoscale networks. The end-to-end (including the channel) models developed for classical wireless communication networks need to undergo a profound revision so that they can be applied for nanonetworks. Consequently, there is a need to develop new end-to-end (including the channel) models which can give new insights into the design of these nanoscale networks. The objective of this paper is to introduce a new physical end-to-end (including the channel) model for molecular communication. The new model is investigated by means of three modules, i.e., the transmitter, the signal propagation and the receiver. Each module is related to a specific process involving particle exchanges, namely, particle emission, particle diffusion and particle reception. The particle emission process involves the increase or decrease of the particle concentration rate in the environment according to a modulating input signal. The particle diffusion provides the propagation of particles from the transmitter to the receiver by means of the physics laws underlying particle diffusion in the space. The particle reception process is identified by the sensing of the particle concentration value at the receiver location. Numerical results are provided for three modules, as well as for the overall end-to-end model, in terms of normalized gain and delay as functions of the input frequency and of the transmission range. <|reference_end|>",
"<|reference_start|> Diffusion-based molecular communication with limited molecule production rate: This paper studies the impact of a transmitter’s molecule generation process on the capacity of a concentration-based molecular communication (MC) system. Constraints caused by the molecule generation process affect the availability of the molecules at the transmitter. The transmitter has a storage of molecules, and should decide whether to release or save the currently produced molecules. As a result, the MC system has conceptual connections with energy harvesting systems. In this paper, we consider two scenarios on the propagation channel. The first scenario assumes a channel with no inter-symbol interference (ISI), i.e., a memoryless channel. We derive bounds on the capacity of the MC system in this scenario. The second scenario assumes an MC channel with ISI, in which the output of the channel depends on the history of released molecules in the previous time-slots. Based on the assumptions that either the transmitter or the receiver knows the channel statistics, we compute a lower bound on the channel capacity. <|reference_end|>",
"<|reference_start|> Functional coupling of human pancreatic islets and liver spheroids on-a-chip: Towards a novel human ex vivo type 2 diabetes model: <|reference_end|>",
"<|reference_start|> Multiscale modelling of drug transport and metabolism in liver spheroids: In early preclinical drug development, potential candidates are tested in the laboratory using isolated cells. These in vitro experiments traditionally involve cells cultured in a two-dimensional monolayer environment. However, cells cultured in three-dimensional spheroid systems have been shown to more closely resemble the functionality and morphology of cells in vivo. While the increasing usage of hepatic spheroid cultures allows for more relevant experimentation in a more realistic biological environment, the underlying physical processes of drug transport, uptake and metabolism contributing to the spatial distribution of drugs in these spheroids remain poorly understood. The development of a multiscale mathematical modelling framework describing the spatio-temporal dynamics of drugs in multicellular environments enables mechanistic insight into the behaviour of these systems. Here, our analysis of cell membrane permeation and porosity throughout the spheroid reveals the impact of these properties on drug penetration, with maximal disparity between zonal metabolism rates occurring for drugs of intermediate lipophilicity. Our research shows how mathematical models can be used to simulate the activity and transport of drugs in hepatic spheroids and in principle any organoid, with the ultimate aim of better informing experimentalists on how to regulate dosing and culture conditions to more effectively optimize drug delivery. <|reference_end|>"
] | [
4,
7,
20,
22
] | {"<|cite_1|>": "arxiv-67428", "<|cite_2|>": "ss-774142", "<|cite_3|>": "ss-681447", "<|cite_4|>": "arxiv-96072", "<|cite_5|>": "ss-1709660", "<|cite_6|>": "ss-1126223", "<|cite_7|>": "ss-1177788", "<|cite_8|>": "ss-1987332", "<|cite_9|>": "arxiv-215336", "<|cite_10|>": "ss-1869215", "<|cite_11|>": "ss-774142", "<|cite_12|>": "ss-2190151", "<|cite_13|>": "arxiv-287458", "<|multi_cite_14_1|>": "arxiv-264646", "<|multi_cite_14_2|>": "ss-1289443", "<|multi_cite_14_3|>": "arxiv-69196", "<|multi_cite_14_4|>": "arxiv-53547", "<|multi_cite_14_5|>": "arxiv-326299", "<|multi_cite_14_6|>": "ss-1448715", "<|cite_15|>": "ss-2258510", "<|cite_16|>": "ss-2258510", "<|cite_17|>": "ss-2258511", "<|cite_18|>": "ss-2258512", "<|cite_19|>": "ss-2258513", "<|cite_20|>": "ss-2258514", "<|cite_21|>": "ss-2258515", "<|cite_22|>": "ss-2258516", "<|cite_24|>": "ss-2258510", "<|cite_25|>": "ss-1392962"} |
1706.09268 | <|paper_start|> Title: Personalized advice for enhancing well-being using automated impulse response analysis --- AIRA
Abstract: Personalized advice for enhancing well-being using automated impulse response analysis --- AIRA: Attention to personalized mental health care is growing. Research data specific to the individual, such as time series sensor data or data from intensive longitudinal studies, are relevant from a research perspective, as analyses of these data can reveal the heterogeneity among participants and provide more precise and individualized results than group-based methods. However, using these data for self-management and for helping individuals improve their mental health has proven challenging. The present work describes a novel approach to automatically generate personalized advice for improving the well-being of individuals from time series data collected in intensive longitudinal studies: Automated Impulse Response Analysis (AIRA). AIRA analyzes vector autoregression models of well-being by generating impulse response functions. These impulse response functions are used in simulations to determine which variables in the model have the largest influence on the other variables, and thus on the well-being of the participant. The effects found can be used to support self-management. We demonstrate the practical usefulness of AIRA by analyzing longitudinal self-reported data on psychological variables. To evaluate its effectiveness and efficacy, we ran its algorithms on two data sets ($N=4$ and $N=5$) and discuss the results. Furthermore, we compare AIRA's output to the results of a previously published study and show that the results are comparable. By automating impulse response function analysis, AIRA fulfills the need for accurate individualized models of health outcomes at a low resource cost and with the potential for upscaling.
Introduction
\label{sec:introduction}
Perspectives on \emph{eHealth} (computer aided health care) and \emph{mHealth} (mobile health) have changed greatly in the last decade <|cite_start|> (Reference: eHealth: Extending, Enhancing, and Evolving Health Care: eHealth holds the promise of revolutionizing health care by improving its efficiency; extending and enhancing its reach; energizing and engaging its practitioners and their patients; and in the process, democratizing, decentralizing, and even partially demystifying the practice of medicine. In emerging and developing countries, the use of eHealth and smart health-care planning has the potential to expand access to necessary treatments and prevention services that can serve as underpinnings of rapid economic development. In developed countries, the application of eHealth promises to restructure the business model of health-care delivery, while at the same time improving and personalizing the quality of care received. This article reviews the past, present, and future of eHealth in an effort to illuminate the potential of its impact.) <|cite_end|> <|cite_start|> (Reference: Mapping mHealth Research: A Decade of Evolution: Background For the last decade, mHealth has constantly expanded as a part of eHealth. Mobile applications for health have the potential to target heterogeneous audiences and address specific needs in different situations, with diverse outcomes, and to complement highly developed health care technologies. The market is rapidly evolving, making countless new mobile technologies potentially available to the health care system; however, systematic research on the impact of these technologies on health outcomes remains scarce. Objective To provide a comprehensive view of the field of mHealth research to date and to understand whether and how the new generation of smartphones has triggered research, since their introduction 5 years ago. Specifically, we focused on studies aiming to evaluate the impact of mobile phones on health, and we sought to identify the main areas of health care delivery where mobile technologies can have an impact. Methods A systematic literature review was conducted on the impact of mobile phones and smartphones in health care. Abstracts and articles were categorized using typologies that were partly adapted from existing literature and partly created inductively from publications included in the review. Results The final sample consisted of 117 articles published between 2002 and 2012. The majority of them were published in the second half of our observation period, with a clear upsurge between 2007 and 2008, when the number of articles almost doubled. The articles were published in 77 different journals, mostly from the field of medicine or technology and medicine. Although the range of health conditions addressed was very wide, a clear focus on chronic conditions was noted. The research methodology of these studies was mostly clinical trials and pilot studies, but new designs were introduced in the second half of our observation period. The size of the samples drawn to test mobile health applications also increased over time. The majority of the studies tested basic mobile phone features (eg, text messaging), while only a few assessed the impact of smartphone apps. Regarding the investigated outcomes, we observed a shift from assessment of the technology itself to assessment of its impact. The outcome measures used in the studies were mostly clinical, including both self-reported and objective measures. 
Conclusions Research interest in mHealth is growing, together with an increasing complexity in research designs and aim specifications, as well as a diversification of the impact areas. However, new opportunities offered by new mobile technologies do not seem to have been explored thus far. Mapping the evolution of the field allows a better understanding of its strengths and weaknesses and can inform future developments.) <|cite_end|>. The modern individual possesses more mobile technology and \q{smart} devices than ever before. Some of these devices, such as smartphones and tablets, enable people to have Internet access during a large part of their daily life, allowing the use of new and more accurate methods to perform assessments in health care and health research <|cite_start|> (Reference: Using experience sampling methods/ecological momentary assessment (ESM/EMA) in clinical assessment and clinical research: introduction to the special section.: This article introduces the special section on experience sampling methods and ecological momentary assessment in clinical assessment. We review the conceptual basis for experience sampling methods (ESM; Csikszentmihalyi & Larson, 1987) and ecological momentary assessment (EMA; Stone & Shiffman, 1994). Next, we highlight several advantageous features of ESM/EMA as applied to psychological assessment and clinical research. We provide a brief overview of the articles in this special section, each of which focuses on 1 of the following major classes of psychological disorders: mood disorders and mood dysregulation (Ebner-Priemer & Trull, 2009), anxiety disorders (Alpers, 2009), substance use disorders (Shiffman, 2009), and psychosis (Oorschot, Kwapil, Delespaul, & Myin-Germeys, 2009). Finally, we discuss prospects, future challenges, and limitations of ESM/EMA.) <|cite_end|>. Research for which a large number of assessments need to be conducted (possibly multiple times per day) can now be performed digitally using mobile technology. Technology enables researchers to carry out studies on a larger scale than would have been possible using traditional methods (i.e., collecting data using pencil and paper, using conventional postal methods for sending the data, and processing this data manually). Moreover, the use of technology can provide means for interactive and automated analysis methods. Using an automated method for analyzing the data might even be inevitable in large-scale studies. As vast amounts of data are collected for ever larger groups of people, the data might grow too large for manual analysis. Additionally, manual analysis can result in inconsistent or opinionated outcomes, which can be reduced by automatizing the procedure.
\subsection{Personalizing mental health research}
\label{sub:personalizing_psychopathology_research}
In recent years, it has increasingly been acknowledged that mental health research requires a person-centered approach and should not focus only on group-based averages <|cite_start|> (Reference: Why Can't We Be More Idiographic in Our Research?: Most psychological scientists make inferences about the relations among variables of interest by comparing aggregated data from groups of individuals. Although this method is unarguably a useful one that will continue to yield scientific advances, important limitations exist regarding the efficiency and flexibility of such designs, as well as with the generality of obtained results. Idiographic research strategies, which focus on the intensive study of individual organisms over time, offer a proficient and flexible alternative to group comparison designs; however, they are rarely taught in graduate training programs and are seldom used by psychological scientists. We highlight some of the unique strengths of idiographic methods, such as single case experimental designs, and suggest that psychological science will progress most efficiently with an increased use of such methods in both laboratory and clinical settings.) <|cite_end|> <|cite_start|> (Reference: Why researchers should think "within-person": A paradigmatic rationale.: ) <|cite_end|> <|cite_start|> (Reference: On the necessity to use person-specific data analysis approaches in psychology: It is explained, both by considering earlier theoretical discussions as well as by providing analytic proof, why it is important to base analyses of developmental processes on intra-individual variation (replicated time series analysis) instead of on inter-individual variation. It is often found that replicated time series analysis yields different (person-specific) dynamic model structures across replications. A new powerful approach is presented that enables the identification of a valid common (nomothetic) dynamic model if individual replications yield person-specific dynamic model structures.) <|cite_end|>. Since group-based analysis applies to the average person, several researchers have argued that conclusions from group-based analysis might not hold for \emph{any} individual <|cite_start|> (Reference: DO PEOPLE REALLY ADAPT TO MARRIAGE?: ) <|cite_end|> <|cite_start|> (Reference: The New Person-Specific Paradigm in Psychology: Most research methodology in the behavioral sciences employs interindividual analyses, which provide information about the state of affairs of the population. However, as shown by classical mathematical-statistical theorems (the ergodic theorems), such analyses do not provide information for, and cannot be applied at, the level of the individual, except on rare occasions when the processes of interest meet certain stringent conditions. When psychological processes violate these conditions, the interindividual analyses that are now standardly applied have to be replaced by analysis of intraindividual variation in order to obtain valid results. Two illustrations involving analysis of intraindividual variation of personality and emotional processes are given.) <|cite_end|>. Instead of using a few data points of many people as the starting point for research, the focus needs to shift towards using many data points of individuals. This allows for studying changes within the individual (\emph{within-person variability}). 
Several studies have shown that mental health and ill-health are dynamic phenomena that vary highly between individuals and over time <|cite_start|> (Reference: Revealing Causal Heterogeneity Using Time Series Analysis of Ambulatory Assessments: Application to the Association Between Depression and Physical Activity After Myocardial Infarction: Objective Studies in psychosomatic medicine are characterized by analyses that typically compare groups. This nomothetic approach leads to conclusions that apply to the average group member but not necessarily to individual patients. Idiographic studies start at the individual patient and are suitable to study associations that differ between time points or between individuals. We illustrate the advantages of the idiographic approach in analyzing ambulatory assessments, taking the association between depression and physical activity after myocardial infarction as an example. Methods Five middle-aged men who had myocardial infarction with mild to moderate symptoms of depression were included in this study. Four of these participants monitored their physical activity and depressive symptoms during a period of 2 to 3 months using a daily self-registration form. The time series of each individual participant were investigated using vector autoregressive modeling, which enables the analysis of temporal dynamics between physical activity and depression. Results We found causal heterogeneity in the association between depression and physical activity. Participants differed in the predominant direction of effect, which was either from physical activity to depression (n = 1, 85 observations, unstandardized effect size = −0.183, p = .03) or from depression to physical activity (n = 2, 65 and 59 observations, unstandardized effect sizes = −0.038 and −0.381, p < .001 and p = .04). Also, the persistency of effects differed among individuals. Conclusions Vector autoregressive models are suitable in revealing causal heterogeneity and can be easily used to analyze ambulatory assessments. We suggest that these models might bridge the gap between science and clinical practice by translating epidemiological results to individual patients. Abbreviations PEP = Psycho-Educational Prevention Module BDI = Beck Depression Inventory PCI = percutaneous coronary intervention CABG = coronary artery bypass graft LVEF = left ventricular ejection fraction BMI = body mass index VAR = vector autoregressive modeling) <|cite_end|> <|cite_start|> (Reference: Temporal relationship between dysfunctional beliefs, self-efficacy and panic apprehension in the treatment of panic disorder with agoraphobia.: ) <|cite_end|> <|cite_start|> (Reference: HowNutsAreTheDutch (HoeGekIsNL): A crowdsourcing study of mental symptoms and strengths: HowNutsAreTheDutch (Dutch: HoeGekIsNL) is a national crowdsourcing study designed to investigate multiple continuous mental health dimensions in a sample from the general population (n = 12,503). Its main objective is to create an empirically based representation of mental strengths and vulnerabilities, accounting for (i) dimensionality and heterogeneity, (ii) interactivity between symptoms and strengths, and (iii) intra-individual variability. To do so, HowNutsAreTheDutch (HND) makes use of an internet platform that allows participants to (a) compare themselves to other participants via cross-sectional questionnaires and (b) to monitor themselves three times a day for 30 days with an intensive longitudinal diary study via their smartphone. 
These data enable for personalized feedback to participants, a study of profiles of mental strengths and weaknesses, and zooming into the fine-grained level of dynamic relationships between variables over time. Measuring both psychiatric symptomatology and mental strengths and resources enables for an investigation of their interactions, which may underlie the wide variety of observed mental states in the population. The present paper describes the applied methods and technology, and presents the sample characteristics. Copyright © 2015 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: Temporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback: Objective Recent developments in research and mobile health enable a quantitative idiographic approach in health research. The present study investigates the potential of an electronic diary crowdsourcing study in the Netherlands for (1) large-scale automated self-assessment for individual-based health promotion and (2) enabling research at both the between-persons and within-persons level. To illustrate the latter, we examined between-persons and within-persons associations between somatic symptoms and quality of life. Methods A website provided the general Dutch population access to a 30-day (3 times a day) diary study assessing 43 items related to health and well-being, which gave participants personalized feedback. Associations between somatic symptoms and quality of life were examined with a linear mixed model. Results A total of 629 participants completed 28,430 assessments, with a mean (SD) of 45 (32) assessments per participant. Most participants (n = 517 [82%]) were women and 531 (84%) had high education. Almost 40% of the participants (n = 247) completed enough assessments (t = 68) to generate personalized feedback including temporal dynamics between well-being, health behavior, and emotions. Substantial between-person variability was found in the within-person association between somatic symptoms and quality of life. Conclusions We successfully built an application for automated diary assessments and personalized feedback. The application was used by a sample of mainly highly educated women, which suggests that the potential of our intensive diary assessment method for large-scale health promotion is limited. However, a rich data set was collected that allows for group-level and idiographic analyses that can shed light on etiological processes and may contribute to the development of empirical-based health promotion solutions.) <|cite_end|>. A technique to assess this within-person variability of mental health is the \emph{Ecological Momentary Assessment} method (\textsc{ema}) <|cite_start|> (Reference: Ecological Momentary Assessment: A New Tool for Behavioral Medicine Research: ) <|cite_end|>, in which a longitudinal data set is created by asking the individual to repeatedly complete the same assessment, for a certain period of time (e.g., in the morning, afternoon, and evening, for thirty days in a row).
\subsection{Analyzing diary study data}
\label{sub:analyzing_time_series_data}
When questionnaires are filled out in sequence the data is a time series. A popular technique for analyzing multivariate, equally spaced time series data is \emph{vector autoregression} (\textsc{var}) <|cite_start|> (Reference: Macroeconomics and Reality: Existing strategies for econometric analysis related to macroeconomics are subject to a number of serious objections, some recently formulated, some old. These objections are summarized in this paper, and it is argued that taken together they make it unlikely that macroeconomic models are in fact over identified, as the existing statistical theory usually assumes. The implications of this conclusion are explored, and an example of econometric work in a non-standard style, taking account of the objections to the standard style, is presented. THE STUDY OF THE BUSINESS cycle, fluctuations in aggregate measures of economic activity and prices over periods from one to ten years or so, constitutes or motivates a large part of what we call macroeconomics. Most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated. It would seem to follow almost tautologically that statistical models involving large numbers of macroeconomic variables ought to be the arena within which macroeconomic theories confront reality and thereby each other. Instead, though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models. In this lecture I intend to discuss some aspects of this situation, attempting both to offer some explanations and to suggest some means for improvement. I will argue that the style in which their builders construct claims for a connection between these models and reality-the style in which "identification" is achieved for these models-is inappropriate, to the point at which claims for identification in these models cannot be taken seriously. This is a venerable assertion; and there are some good old reasons for believing it;2 but there are also some reasons which have been more recently put forth. After developing the conclusion that the identification claimed for existing large-scale models is incredible, I will discuss what ought to be done in consequence. The line of argument is: large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification; the restrictions imposed in the usual style of identification are neither essential to constructing a model which can perform these functions nor innocuous; an alternative style of identification is available and practical. Finally we will look at some empirical work based on an alternative style of macroeconometrics. A six-variable dynamic system is estimated without using 1 Research for this paper was supported by NSF Grant Soc-76-02482. Lars Hansen executed the computations. The paper has benefited from comments by many people, especially Thomas J. Sargent) <|cite_end|>. 
\textsc{Var} can be used to fit a multivariate regression model: a model in which the outcome of one variable (e.g., \q{concentration}) is regressed on the outcomes of several other variables (e.g., \q{self-esteem} and \q{agitation}). The \textsc{var} model itself is a set of such multivariate regression equations for a system of two or more variables, where each variable in the system is regressed on its own time-lagged values and the time-lagged values of all other variables in the system <|cite_start|> (Reference: Multiple time series models: List of Figures List of Tables Series Editor?s Introduction Preface 1. Introduction to Multiple Time Series Models 1.1 Simultaneous Equation Approach 1.2 ARIMA Approach 1.3 Error Correction or LSE Approach 1.4 Vector Autoregression Approach 1.5 Comparison and Summary 2. Basic Vector Autoregression Models 2.1 Dynamic Structural Equation Models 2.2 Reduced Form Vector Autoregressions 2.3 Relationship of a Dynamic Structural Equation Model to a Vector Autoregression Model 2.4 Working With This Model 2.5 Specification and Analysis of VAR Models 2.6 Other Specification Issues 2.7 Unit Roots and Error Correction in VARs 2.8 Criticisms of VAR 3. Examples of VAR Analyses 3.1 Public Mood and Macropartisanship 3.2 Effective Corporate Tax Rates 3.3 Conclusion Appendix: Software for Multiple Time Series Models Notes References Index About the Authors) <|cite_end|>. That is, a variable $x$ at time $t$ is predicted by the same variable $x$ at time $t-1, t-2, \ldots, t-p$ and by other variables at time $t-1, t-2, \ldots, t-p$. The number of past measurements used to look back in time, $p$, is called the number of \emph{lags} in time series parlance.
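In matrix form, and using standard notation rather than the notation of any particular reference, a \textsc{var} model with $p$ lags for a $k$-dimensional vector of variables $\mathbf{y}_t$ can be written as
\begin{equation}
\mathbf{y}_t = \mathbf{c} + A_1 \mathbf{y}_{t-1} + A_2 \mathbf{y}_{t-2} + \cdots + A_p \mathbf{y}_{t-p} + \boldsymbol{\varepsilon}_t,
\end{equation}
where $\mathbf{c}$ is a vector of intercepts, each $A_i$ is a $k \times k$ matrix of lag-$i$ coefficients, and $\boldsymbol{\varepsilon}_t$ is a vector of error terms that may be contemporaneously correlated.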
\textsc{Var} models allow for determining \emph{Granger causality} <|cite_start|> (Reference: {Investigating causal relations by econometric models and cross-spectral methods: There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recordhag information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalization of this result with the partial cross spectrum is suggested.The object of this paper is to throw light on the relationships between certain classes of econometric models involving feedback and the functions arising in spectral analysis, particularly the cross spectrum and the partial cross spectrum. Causality and feedback are here defined in an explicit and testable fashion. It is shown that in the two-variable case the feedback mechanism can be broken down into two causal relations and that the cross spectrum can be considered as the sum of two cross spectra, each closely connected with one of the causations. The next three sections of the paper briefly introduce those aspects of spectral methods, model building, and causality which are required later. Section IV presents the results for the two-variable case and Section V generalizes these results for three variables.) <|cite_end|>. A variable $x$ \emph{Granger causes} another variable $y$, if and only if the variance of $y$ can be better explained by lagged values of $y$ and lagged values of another variable ($x$), than lagged values of $y$ alone <|cite_start|> (Reference: {Investigating causal relations by econometric models and cross-spectral methods: There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recordhag information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalization of this result with the partial cross spectrum is suggested.The object of this paper is to throw light on the relationships between certain classes of econometric models involving feedback and the functions arising in spectral analysis, particularly the cross spectrum and the partial cross spectrum. Causality and feedback are here defined in an explicit and testable fashion. It is shown that in the two-variable case the feedback mechanism can be broken down into two causal relations and that the cross spectrum can be considered as the sum of two cross spectra, each closely connected with one of the causations. 
The next three sections of the paper briefly introduce those aspects of spectral methods, model building, and causality which are required later. Section IV presents the results for the two-variable case and Section V generalizes these results for three variables.) <|cite_end|>. Granger causal relations can be depicted by means of a weighted directed graph. \figref{fig:images_dynamic} gives an example of such a graph, in which the nodes represent the variables (which can be self-reported psychological variables, such as measured in the present study, physiological variables as read from sensors, or other variables), the connections (or edges) represent the significant directed relations between the variables and the thickness of an edge represents the strength of the relation. In this image, the green nodes represent variables that participants usually experience as a positive phenomenon and the red nodes represent variables that participants usually experience as a negative phenomenon. In \figref{fig:images_dynamic}, for example, an increase in the variable \q{agitation} at time $t-1$ Granger causes an increase in \q{rumination} and a decrease in \q{self-esteem}, \q{concentration}, \q{cheerfulness}, and \q{eating candy} at time $t$. The figure only shows relations present at lag 1.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\columnwidth]{images/network_v2.png}
\caption{An example of a Granger causality graph, showing the relations between variables over time.}
\label{fig:images_dynamic}
\end{figure}
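The Granger-causality criterion can be sketched directly in code. The following Python fragment is a minimal illustration using ordinary least squares with a single lag; the variable names and the simplified setup are ours and are not part of the \textsc{aira} implementation.
\begin{verbatim}
import numpy as np

def residual_variance(X, y):
    # Ordinary least squares fit; return the variance of the residuals.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta).var()

def granger_improvement(x, y, lag=1):
    """Relative drop in residual variance of y when the lagged x is added."""
    y_t = y[lag:]
    ones = np.ones(len(y_t))
    restricted = np.column_stack([ones, y[:-lag]])              # own lag only
    unrestricted = np.column_stack([ones, y[:-lag], x[:-lag]])  # add lagged x
    v_r = residual_variance(restricted, y_t)
    v_u = residual_variance(unrestricted, y_t)
    return (v_r - v_u) / v_r
\end{verbatim}
A larger relative reduction indicates that past values of $x$ carry information about $y$ beyond what the past of $y$ alone provides; a formal Granger test would additionally compute an F-statistic and its p-value.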
Graphs like the one depicted in \figref{fig:images_dynamic} encompass relevant information regarding the interactions in a \textsc{var} model that could be of interest to the participant. However, these graphs lack several important features to serve as a means to provide advice on how to improve the participant's well-being. Firstly, participants may have a hard time understanding these graphs <|cite_start|> (Reference: Temporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback: Objective Recent developments in research and mobile health enable a quantitative idiographic approach in health research. The present study investigates the potential of an electronic diary crowdsourcing study in the Netherlands for (1) large-scale automated self-assessment for individual-based health promotion and (2) enabling research at both the between-persons and within-persons level. To illustrate the latter, we examined between-persons and within-persons associations between somatic symptoms and quality of life. Methods A website provided the general Dutch population access to a 30-day (3 times a day) diary study assessing 43 items related to health and well-being, which gave participants personalized feedback. Associations between somatic symptoms and quality of life were examined with a linear mixed model. Results A total of 629 participants completed 28,430 assessments, with a mean (SD) of 45 (32) assessments per participant. Most participants (n = 517 [82%]) were women and 531 (84%) had high education. Almost 40% of the participants (n = 247) completed enough assessments (t = 68) to generate personalized feedback including temporal dynamics between well-being, health behavior, and emotions. Substantial between-person variability was found in the within-person association between somatic symptoms and quality of life. Conclusions We successfully built an application for automated diary assessments and personalized feedback. The application was used by a sample of mainly highly educated women, which suggests that the potential of our intensive diary assessment method for large-scale health promotion is limited. However, a rich data set was collected that allows for group-level and idiographic analyses that can shed light on etiological processes and may contribute to the development of empirical-based health promotion solutions.) <|cite_end|>. This can be attributed to the conceptual complexity of the different edge and node types in the graph. Secondly, these graphs give a general overview of the coefficients in a \textsc{var} model by providing an edge-focused representation. Although such a representation gives information about the individual relations between nodes, it remains complicated to interpret the model as a whole, especially with respect to the temporal interplay between the nodes in the model. The \textsc{var} coefficients are meaningless to interpret individually, as it is the \textsc{var} model as a whole that describes the complete dynamic behavior of the variables in the system <|cite_start|> (Reference: Multiple time series models: List of Figures List of Tables Series Editor?s Introduction Preface 1. Introduction to Multiple Time Series Models 1.1 Simultaneous Equation Approach 1.2 ARIMA Approach 1.3 Error Correction or LSE Approach 1.4 Vector Autoregression Approach 1.5 Comparison and Summary 2. 
Basic Vector Autoregression Models 2.1 Dynamic Structural Equation Models 2.2 Reduced Form Vector Autoregressions 2.3 Relationship of a Dynamic Structural Equation Model to a Vector Autoregression Model 2.4 Working With This Model 2.5 Specification and Analysis of VAR Models 2.6 Other Specification Issues 2.7 Unit Roots and Error Correction in VARs 2.8 Criticisms of VAR 3. Examples of VAR Analyses 3.1 Public Mood and Macropartisanship 3.2 Effective Corporate Tax Rates 3.3 Conclusion Appendix: Software for Multiple Time Series Models Notes References Index About the Authors) <|cite_end|>.
\subsection{Automated Impulse Response Analysis}
\label{sub:advanced_impulse_response_analysis}
In the present work, we describe \emph{Automated Impulse Response Analysis} (\textsc{aira}), an approach to automatically generate advice for improving a participant's well-being using \textsc{var} models derived from \textsc{ema} data. \textsc{Aira} creates advice by simulating the interactions between variables in a \textsc{var} model (i.e., showing what would happen to $y$ when variable $x$ increases). The technique \textsc{aira} uses is called \emph{Impulse Response Function} (\textsc{irf}) analysis. \textsc{Irf} analysis allows us to \emph{shock} (that is, give an instantaneous exogenous impulse to) certain variables to see how this shock propagates through the various (time-lagged) relations in the \textsc{var} model. In other words, \textsc{irf} shows how variables respond to an impulse applied to other variables <|cite_start|> (Reference: Multiple time series models: List of Figures List of Tables Series Editor?s Introduction Preface 1. Introduction to Multiple Time Series Models 1.1 Simultaneous Equation Approach 1.2 ARIMA Approach 1.3 Error Correction or LSE Approach 1.4 Vector Autoregression Approach 1.5 Comparison and Summary 2. Basic Vector Autoregression Models 2.1 Dynamic Structural Equation Models 2.2 Reduced Form Vector Autoregressions 2.3 Relationship of a Dynamic Structural Equation Model to a Vector Autoregression Model 2.4 Working With This Model 2.5 Specification and Analysis of VAR Models 2.6 Other Specification Issues 2.7 Unit Roots and Error Correction in VARs 2.8 Criticisms of VAR 3. Examples of VAR Analyses 3.1 Public Mood and Macropartisanship 3.2 Effective Corporate Tax Rates 3.3 Conclusion Appendix: Software for Multiple Time Series Models Notes References Index About the Authors) <|cite_end|>. \textsc{Aira} generates the impulse response functions for each of the equations in a \textsc{var} model, and analyzes these \textsc{irf}s to automatically generate personalized advice. \textsc{Aira} uses and partly extends some of our previous work; the automatic creation of \textsc{var} models <|cite_start|> (Reference: Automating Vector Autoregression on Electronic Patient Diary Data: Finding the best vector autoregression model for any dataset, medical or otherwise, is a process that, to this day, is frequently performed manually in an iterative manner requiring a statistical expertize and time. Very few software solutions for automating this process exist, and they still require statistical expertize to operate. We propose a new application called Autovar, for the automation of finding vector autoregression models for time series data. The approach closely resembles the way in which experts work manually. Our proposal offers improvements over the manual approach by leveraging computing power, e.g., by considering multiple alternatives instead of choosing just one. In this paper, we describe the design and implementation of Autovar, we compare its performance against experts working manually, and we compare its features to those of the most used commercial solution available today. The main contribution of Autovar is to show that vector autoregression on a large scale is feasible. We show that an exhaustive approach for model selection can be relatively safe to use. This study forms an important step toward making adaptive, personalized treatment available and affordable for all branches of healthcare.) <|cite_end|>. 
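To illustrate the shock-propagation idea, the following minimal Python sketch (our own simplified example for a lag-1 model; it is not \textsc{aira}'s implementation and it ignores orthogonalization of the error covariance) applies a unit impulse to one variable and iterates the estimated coefficient matrix forward to obtain the responses of all variables over time.
\begin{verbatim}
import numpy as np

def impulse_response(A1, shocked_index, horizon=10):
    """Responses of all variables to a unit shock in one variable (VAR(1))."""
    k = A1.shape[0]
    shock = np.zeros(k)
    shock[shocked_index] = 1.0                # exogenous unit impulse at t = 0
    responses = [shock]
    for _ in range(horizon):
        responses.append(A1 @ responses[-1])  # propagate through lagged relations
    return np.array(responses)                # shape: (horizon + 1, k)
\end{verbatim}
For a model with $p > 1$ lags, the same recursion can be applied to the companion-form coefficient matrix.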
Because \textsc{aira} analyzes the \textsc{var} model as a whole, it provides a more appropriate and more precise analysis of the model than a mere manual inspection. A second novelty of the present work is an implementation of \textsc{var} and \textsc{irf} analysis in the JavaScript language. As far as we know, this is the first openly available cross-platform, web-based implementation of its kind. The JavaScript implementation can be used to calculate \textsc{var} models or \textsc{irf}s in the browser, or on a server running, for example, \emph{NodeJS\footnote{Website: http://nodejs.org}}, which can aid upscaling.
\textsc{Aira} generates several types of advice, answering the following questions: \rom{1} \emph{Which of the variables has the largest effect on my well-being?}, \rom{2} \emph{How long is $Y$ affected by an increase in $X$?}, and \rom{3} \emph{What can I do to change a certain $Y$ variable?} Firstly, \textsc{aira} shows how well each of the modeled variables can be used to improve all other variables in the network, by summing the effects each variable has on the other variables. For this type of advice, we consider an improvement of the complete network to be an improvement of the participant's well-being in general. Secondly, \textsc{aira} can provide insight into the duration of an effect, showing how persistent a perceived relation is. Thirdly, \textsc{aira} allows the participant to select a variable he or she would like to improve, and by how much, after which \textsc{aira} tries to find a suitable solution to achieve this improvement. \textsc{Aira} iterates over all other variables and estimates, for each of them, how large a change is needed to achieve the desired effect.
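A rough sketch of how the first type of advice could be computed from the impulse responses is given below; summing absolute responses over horizons and over the remaining variables follows the verbal description above, but the function is an illustrative simplification rather than the authors' JavaScript implementation.
\begin{verbatim}
# Illustrative ranking of variables by their total effect on the rest of
# the network (advice type 1); a simplification, not AIRA's actual code.
import numpy as np

def rank_by_total_effect(irfs, names):
    """irfs[h, i, j]: response of variable i at horizon h to a shock in j.
    Returns (name, score) pairs, sorted so that the variable whose shock
    induces the largest summed absolute response in the *other* variables
    comes first."""
    horizons, n, _ = irfs.shape
    scores = []
    for j in range(n):
        total = sum(abs(irfs[h, i, j])
                    for h in range(horizons)
                    for i in range(n) if i != j)
        scores.append((names[j], total))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# e.g. rank_by_total_effect(irf.irfs, list(ema_data.columns)) with the
# objects from the previous snippet.
\end{verbatim}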
This paper is organized as follows. \secref{sec:related_work} gives an overview of related work. \secref{sec:variable_selection} illustrates the concept of \textsc{aira} presenting its mathematical foundation. \secref{sec:implementation} describes \textsc{aira} by presenting pseudocode of the algorithms. \secref{sec:experimental_results} describes the experimental results acquired when evaluating \textsc{aira}. We also evaluate the implementation of \textsc{aira} by comparing its analysis with a manually performed analysis. \secref{sec:discussion} discusses the results. \secref{sec:conclusion_and_future_work} concludes the work and provides direction for future work. <|paper_end|> | [
"<|reference_start|> Why researchers should think \"within-person\": A paradigmatic rationale.: <|reference_end|>",
"<|reference_start|> The New Person-Specific Paradigm in Psychology: Most research methodology in the behavioral sciences employs interindividual analyses, which provide information about the state of affairs of the population. However, as shown by classical mathematical-statistical theorems (the ergodic theorems), such analyses do not provide information for, and cannot be applied at, the level of the individual, except on rare occasions when the processes of interest meet certain stringent conditions. When psychological processes violate these conditions, the interindividual analyses that are now standardly applied have to be replaced by analysis of intraindividual variation in order to obtain valid results. Two illustrations involving analysis of intraindividual variation of personality and emotional processes are given. <|reference_end|>",
"<|reference_start|> Ecological Momentary Assessment: A New Tool for Behavioral Medicine Research: <|reference_end|>",
"<|reference_start|> Macroeconomics and Reality: Existing strategies for econometric analysis related to macroeconomics are subject to a number of serious objections, some recently formulated, some old. These objections are summarized in this paper, and it is argued that taken together they make it unlikely that macroeconomic models are in fact over identified, as the existing statistical theory usually assumes. The implications of this conclusion are explored, and an example of econometric work in a non-standard style, taking account of the objections to the standard style, is presented. THE STUDY OF THE BUSINESS cycle, fluctuations in aggregate measures of economic activity and prices over periods from one to ten years or so, constitutes or motivates a large part of what we call macroeconomics. Most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated. It would seem to follow almost tautologically that statistical models involving large numbers of macroeconomic variables ought to be the arena within which macroeconomic theories confront reality and thereby each other. Instead, though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models. In this lecture I intend to discuss some aspects of this situation, attempting both to offer some explanations and to suggest some means for improvement. I will argue that the style in which their builders construct claims for a connection between these models and reality-the style in which \"identification\" is achieved for these models-is inappropriate, to the point at which claims for identification in these models cannot be taken seriously. This is a venerable assertion; and there are some good old reasons for believing it;2 but there are also some reasons which have been more recently put forth. After developing the conclusion that the identification claimed for existing large-scale models is incredible, I will discuss what ought to be done in consequence. The line of argument is: large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification; the restrictions imposed in the usual style of identification are neither essential to constructing a model which can perform these functions nor innocuous; an alternative style of identification is available and practical. Finally we will look at some empirical work based on an alternative style of macroeconometrics. A six-variable dynamic system is estimated without using 1 Research for this paper was supported by NSF Grant Soc-76-02482. Lars Hansen executed the computations. The paper has benefited from comments by many people, especially Thomas J. Sargent <|reference_end|>"
] | [
4,
7,
12,
13
] | {"<|multi_cite_1_1|>": "ss-807128", "<|multi_cite_1_2|>": "ss-807129", "<|cite_3|>": "ss-2503578", "<|multi_cite_4_1|>": "ss-807130", "<|multi_cite_4_2|>": "ss-807131", "<|multi_cite_4_3|>": "ss-807132", "<|multi_cite_5_1|>": "ss-807133", "<|multi_cite_5_2|>": "ss-807134", "<|multi_cite_6_1|>": "ss-807135", "<|multi_cite_6_2|>": "ss-807136", "<|multi_cite_6_3|>": "ss-807137", "<|multi_cite_6_4|>": "ss-807138", "<|cite_7|>": "ss-807139", "<|cite_8|>": "ss-1194973", "<|cite_9|>": "ss-807140", "<|cite_10|>": "ss-838302", "<|cite_11|>": "ss-838302", "<|cite_12|>": "ss-807138", "<|cite_13|>": "ss-807140", "<|cite_14|>": "ss-807140", "<|cite_15|>": "ss-807141"} |
2006.13070 | <|paper_start|> Title: Normalizing Flows Across Dimensions
Abstract: Normalizing Flows Across Dimensions: Real-world data with underlying structure, such as pictures of faces, are hypothesized to lie on a low-dimensional manifold. This manifold hypothesis has motivated state-of-the-art generative algorithms that learn low-dimensional data representations. Unfortunately, a popular generative model, normalizing flows, cannot take advantage of this. Normalizing flows are based on successive variable transformations that are, by design, incapable of learning lower-dimensional representations. In this paper we introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions. NIF explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations. We further employ an additive noise model to account for deviations from the manifold and identify a stochastic inverse of the generative process. Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
Introduction
Normalizing flows <|cite_start|> (Reference: Variational Inference with Normalizing Flows: The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.) <|cite_end|> <|cite_start|> (Reference: Piecewise Normalizing Flows: Normalizing flows are an established approach for modelling complex probability densities through invertible transformations from a base distribution. However, the accuracy with which the target distribution can be captured by the normalizing flow is strongly influenced by the topology of the base distribution. A mismatch between the topology of the target and the base can result in a poor performance, as is typically the case for multi-modal problems. A number of different works have attempted to modify the topology of the base distribution to better match the target, either through the use of Gaussian Mixture Models (Izmailov et al., 2020; Ardizzone et al., 2020; Hagemann&Neumayer, 2021) or learned accept/reject sampling (Stimper et al., 2022). We introduce piecewise normalizing flows which divide the target distribution into clusters, with topologies that better match the standard normal base distribution, and train a series of flows to model complex multi-modal targets. We demonstrate the performance of the piecewise flows using some standard benchmarks and compare the accuracy of the flows to the approach taken in Stimper et al. (2022) for modelling multi-modal distributions. We find that our approach consistently outperforms the approach in Stimper et al. (2022) with a higher emulation accuracy on the standard benchmarks.) <|cite_end|> are a popular tool in probabilistic modeling, however they lack the ability to learn low-dimensional representations of the data and decouple noise from the representations. This could be a contributing factor to why normalizing flows lag behind other methods at generating high quality images <|cite_start|> (Reference: Glow: Generative Flow with Invertible 1x1 Convolutions: Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. 
Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow) <|cite_end|> <|cite_start|> (Reference: Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design: Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. Our implementation is available at https://github.com/aravindsrinivas/flowpp) <|cite_end|> <|cite_start|> (Reference: Generating Diverse High-Fidelity Images with VQ-VAE-2: We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where the encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.) <|cite_end|> <|cite_start|> (Reference: Analyzing and improving a wireless network: This report is about how a site survey is carried out using software and how a site survey can collect the information needed to improve the existing wireless networks. The report provides both general information about wireless networks and practical examples with authentic measurements in the corporate environment.Problems can arise in a wireless network if the planning is not properly done. A measurement have been made in second floor, where company X is located.The results of the survey is presented in images that show the strength of Company X wireless networks, on the second floor. Images also shows noise, overlapping channels and how the signal strength related to these.) 
<|cite_end|> <|cite_start|> (Reference: Generative Modeling by Estimating Gradients of the Data Distribution: We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.) <|cite_end|>.
The manifold hypothesis <|cite_start|> (Reference: Testing the Manifold Hypothesis: The hypothesis that high dimensional data tend to lie in the vicinity of a low dimensional manifold is the basis of manifold learning. The goal of this paper is to develop an algorithm (with accompanying complexity guarantees) for fitting a manifold to an unknown probability distribution supported in a separable Hilbert space, only using i.i.d samples from that distribution. More precisely, our setting is the following. Suppose that data are drawn independently at random from a probability distribution $P$ supported on the unit ball of a separable Hilbert space $H$. Let $G(d, V, \tau)$ be the set of submanifolds of the unit ball of $H$ whose volume is at most $V$ and reach (which is the supremum of all $r$ such that any point at a distance less than $r$ has a unique nearest point on the manifold) is at least $\tau$. Let $L(M, P)$ denote mean-squared distance of a random point from the probability distribution $P$ to $M$.
We obtain an algorithm that tests the manifold hypothesis in the following sense.
The algorithm takes i.i.d random samples from $P$ as input, and determines which of the following two is true (at least one must be):
(a) There exists $M \in G(d, CV, \frac{\tau}{C})$ such that $L(M, P) \leq C \epsilon.$
(b) There exists no $M \in G(d, V/C, C\tau)$ such that $L(M, P) \leq \frac{\epsilon}{C}.$
The answer is correct with probability at least $1-\delta$.) <|cite_end|> conjectures that real-world images, such as faces, lie on a low-dimensional manifold in a high-dimensional space. Consequently, one can expect that normalizing flows may not be able to properly represent data that satisfies the manifold hypothesis.
The simplest method of obtaining a low-dimensional representation is to learn a map from a lower-dimensional vector to the data. The image of such a transformation will be a manifold in the data space <|cite_start|> (Reference: Multivariate Calculus: ) <|cite_end|>. If the transformation is sufficiently expressive and the dimensionality of its domain matches that of the conjectured manifold, then the transformation may be able to learn the data manifold. However, if the transformation is bijective and the dimensionality of its domain is too large, it can at best learn a superset of the data manifold, and as a result it maps some points to outputs that are not data. Normalizing flows use bijective functions that preserve dimension, so they are fundamentally incapable of perfectly modeling data that satisfies the manifold hypothesis.
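To make the dimensionality argument explicit (these are standard facts about injective maps, not results specific to this paper), consider a differentiable injective map
\[
g : \mathbb{R}^d \to \mathbb{R}^D, \qquad d < D,
\]
whose image $\mathcal{M} = g(\mathbb{R}^d) \subset \mathbb{R}^D$ is a manifold of dimension at most $d$. If $g$ has a full-rank Jacobian $J_g$, a latent density $p_Z$ induces a density on $\mathcal{M}$ via
\[
p_X\big(g(z)\big) = p_Z(z)\,\det\!\big(J_g(z)^\top J_g(z)\big)^{-1/2},
\]
whereas a bijective flow is forced to take $d = D$ and therefore cannot have a lower-dimensional image.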
Normalizing flows employ invertible functions to transform random variables <|cite_start|> (Reference: Variational Inference with Normalizing Flows: The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.) <|cite_end|>. It is the invertibility requirement that forces its input and output to have the same dimension. While this construction does not allow for low-dimensional representations, it affords exact log-likelihood computation. Log-likelihood-based inference is predicated on the ability to compute log-likelihood <|cite_start|> (Reference: Statistical inference: What is Statistics? Opinions vary. In fact, there is a continuous spectrum of attitudes toward statistics ranging from pure theoreticians, proving asymptotic efficiency and searching for most powerful tests, to wild practitioners, blindly reporting p-values and claiming statistical significance for scientifically insignificant results. In these notes statistics is viewed as a branch of mathematical engineering, that studies ways of extracting reliable information from limited data for learning, prediction, and decision making in the presence of uncertainty. The main goals of these notes are: (1) provide a logical introduction to statistical inference, (2) develop statistical thinking and intuitive feel for the subject, and (3) introduce the most fundamental ideas, concepts, and methods of statistics, explain how and why they work, and when they don’t. These lecture notes are based on the courses the author taught at the University of Southern California in 2012 and 2013, and at the California Institute of Technology in 2016 and 2017.) <|cite_end|>, but this is rarely known exactly in deep machine learning models. For this reason, we would prefer to use low-dimensional representations to improve normalizing flows rather than seek a different method.
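For reference, the exact log-likelihood afforded by invertibility is the standard change-of-variables formula (textbook material, stated here only to make the dimension constraint visible): for an invertible $f : \mathbb{R}^D \to \mathbb{R}^D$ mapping data $x$ to a base variable $z = f(x)$ with density $p_Z$,
\[
\log p_X(x) = \log p_Z\big(f(x)\big) + \log \left|\det \frac{\partial f(x)}{\partial x}\right|,
\]
and the Jacobian determinant is only defined when the input and output dimensions agree.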
In this paper we introduce a generalization of normalizing flows which we call \textit{noisy injective flows}. Noisy injective flows use injective functions to map across dimensions and a noise model to account for deviations from its learned manifold. We show that this construction is a natural extension of normalizing flows that retains a form of invertibility while also decoupling its representation of data from extraneous noise. We also provide an instance of noisy injective flows that can be incorporated into existing normalizing flow models to improve sample clarity without degrading log-likelihood values. Our contributions are summarized as follows:
\begin{itemize}
\item We show that noisy injective flows are a generalization of normalizing flows that can learn a low-dimensional representation of data with a principled approach to account for deviations from the learned manifold.
\item We introduce a stochastic inverse of the generative process for inference and training.
\item We show that noisy injective flows have a simple mechanism to control how far samples deviate from the learned manifold. We showcase the flexibility of this mechanism when applied to image generation. A particular benefit of NIF is that we can vary the noise-model in a post-hoc manner to obtain crisper images and achieve higher metric based performance -- in terms of Fréchet Inception Distance <|cite_start|> (Reference: GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium: Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fr\'echet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.) <|cite_end|> and bits per dimension -- than normalizing flows.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{nif_samples_figure_1.pdf}
\caption{Generated faces from our method with a latent state size of 128.}
\vspace{-0.25in}
\label{fig:nif fig 1 sample}
\end{figure}
Related Work
\label{section:related work}
The bulk of normalizing flows <|cite_start|> (Reference: Variational Inference with Normalizing Flows: The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.) <|cite_end|> research focuses on developing more powerful invertible layers <|cite_start|> (Reference: Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design: Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. Our implementation is available at https://github.com/aravindsrinivas/flowpp) <|cite_end|>. We, on the other hand, focus on improving the capabilities of normalizing flows to work across dimensions. <|cite_start|> (Reference: Piecewise Normalizing Flows: Normalizing flows are an established approach for modelling complex probability densities through invertible transformations from a base distribution. However, the accuracy with which the target distribution can be captured by the normalizing flow is strongly influenced by the topology of the base distribution. A mismatch between the topology of the target and the base can result in a poor performance, as is typically the case for multi-modal problems. A number of different works have attempted to modify the topology of the base distribution to better match the target, either through the use of Gaussian Mixture Models (Izmailov et al., 2020; Ardizzone et al., 2020; Hagemann&Neumayer, 2021) or learned accept/reject sampling (Stimper et al., 2022). 
We introduce piecewise normalizing flows which divide the target distribution into clusters, with topologies that better match the standard normal base distribution, and train a series of flows to model complex multi-modal targets. We demonstrate the performance of the piecewise flows using some standard benchmarks and compare the accuracy of the flows to the approach taken in Stimper et al. (2022) for modelling multi-modal distributions. We find that our approach consistently outperforms the approach in Stimper et al. (2022) with a higher emulation accuracy on the standard benchmarks.) <|cite_end|> were the first to apply normalizing flows across dimensions. Their problem was constrained to the case where data is known to lie exactly on a manifold whose form is known analytically, but they did not investigate how to learn the manifold, nor how to treat data that is not on the manifold. The recent work of <|cite_start|> (Reference: Flows for simultaneous manifold learning and density estimation: We introduce manifold-learning flows (M-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent datasets with a manifold structure more faithfully and provide handles on dimensionality reduction, denoising, and out-of-distribution detection. We argue why such models should not be trained by maximum likelihood alone and present a new training algorithm that separates manifold and density updates. In a range of experiments we demonstrate how M-flows learn the data manifold and allow for better inference than standard flows in the ambient data space.) <|cite_end|> learns this manifold using a deterministic treatment of data that lies off the manifold and a term to penalize its distance from data, but does not provide a unified objective to perform maximum likelihood learning. <|cite_start|> (Reference: Adversarially Regularized Autoencoders: Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.) <|cite_end|> introduced a similar idea based on injective flows, using a novel lower bound on the injective change of variable formula for maximum likelihood training; however, the authors note that their method does not work with data that does not lie exactly on the learned manifold.
Our work has similar features to variational autoencoders <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|> with Gaussian decoders. The generative process we present can be seen as a special case of a variational autoencoder, but our use of injective functions, and our definition of a stochastic inverse makes our method resemble normalizing flows more closely. <|cite_start|> (Reference: Diagnosing and Enhancing VAE Models: Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples. In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true. We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture. A shorter version of this work will appear in the ICLR 2019 conference proceedings (Dai and Wipf, 2019). The code for our model is available at this https URL TwoStageVAE.) <|cite_end|> consider the converse problem of ours -- how to use a method designed to model density around a manifold (VAEs with Gaussian decoders) for maximum likelihood learning, when data is exactly on a manifold. We consider how to take an algorithm designed to learn density on a manifold (injective flows) for maximum likelihood learning when data lies around a manifold. The algorithm they describe in their paper uses a 2-stage VAE that first learns the manifold and then learns an aggregate posterior that can be used for sampling whereas our model requires no such scheme. We do not compare against VAEs because we focus specifically on improving normalizing flows by incorporating low-dimensional representations. <|paper_end|> | [
"<|reference_start|> Piecewise Normalizing Flows: Normalizing flows are an established approach for modelling complex probability densities through invertible transformations from a base distribution. However, the accuracy with which the target distribution can be captured by the normalizing flow is strongly influenced by the topology of the base distribution. A mismatch between the topology of the target and the base can result in a poor performance, as is typically the case for multi-modal problems. A number of different works have attempted to modify the topology of the base distribution to better match the target, either through the use of Gaussian Mixture Models (Izmailov et al., 2020; Ardizzone et al., 2020; Hagemann&Neumayer, 2021) or learned accept/reject sampling (Stimper et al., 2022). We introduce piecewise normalizing flows which divide the target distribution into clusters, with topologies that better match the standard normal base distribution, and train a series of flows to model complex multi-modal targets. We demonstrate the performance of the piecewise flows using some standard benchmarks and compare the accuracy of the flows to the approach taken in Stimper et al. (2022) for modelling multi-modal distributions. We find that our approach consistently outperforms the approach in Stimper et al. (2022) with a higher emulation accuracy on the standard benchmarks. <|reference_end|>",
"<|reference_start|> Analyzing and improving a wireless network: This report is about how a site survey is carried out using software and how a site survey can collect the information needed to improve the existing wireless networks. The report provides both general information about wireless networks and practical examples with authentic measurements in the corporate environment.Problems can arise in a wireless network if the planning is not properly done. A measurement have been made in second floor, where company X is located.The results of the survey is presented in images that show the strength of Company X wireless networks, on the second floor. Images also shows noise, overlapping channels and how the signal strength related to these. <|reference_end|>",
"<|reference_start|> Statistical inference: What is Statistics? Opinions vary. In fact, there is a continuous spectrum of attitudes toward statistics ranging from pure theoreticians, proving asymptotic efficiency and searching for most powerful tests, to wild practitioners, blindly reporting p-values and claiming statistical significance for scientifically insignificant results. In these notes statistics is viewed as a branch of mathematical engineering, that studies ways of extracting reliable information from limited data for learning, prediction, and decision making in the presence of uncertainty. The main goals of these notes are: (1) provide a logical introduction to statistical inference, (2) develop statistical thinking and intuitive feel for the subject, and (3) introduce the most fundamental ideas, concepts, and methods of statistics, explain how and why they work, and when they don’t. These lecture notes are based on the courses the author taught at the University of Southern California in 2012 and 2013, and at the California Institute of Technology in 2016 and 2017. <|reference_end|>",
"<|reference_start|> Diagnosing and Enhancing VAE Models: Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples. In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true. We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture. A shorter version of this work will appear in the ICLR 2019 conference proceedings (Dai and Wipf, 2019). The code for our model is available at this https URL TwoStageVAE. <|reference_end|>"
] | [
1,
5,
10,
18
] | {"<|multi_cite_1_1|>": "arxiv-78108", "<|multi_cite_1_2|>": "ss-989498", "<|multi_cite_10_1|>": "arxiv-165197", "<|multi_cite_10_2|>": "arxiv-189868", "<|multi_cite_10_3|>": "arxiv-207475", "<|multi_cite_10_4|>": "ss-781136", "<|multi_cite_10_5|>": "arxiv-214190", "<|cite_2|>": "ss-2205891", "<|cite_3|>": "ss-2205892", "<|cite_4|>": "arxiv-78108", "<|cite_5|>": "ss-1256303", "<|cite_6|>": "arxiv-127709", "<|cite_7|>": "arxiv-78108", "<|cite_8|>": "arxiv-189868", "<|cite_11|>": "ss-989498", "<|cite_12|>": "arxiv-256510", "<|cite_13|>": "ss-2205893", "<|cite_9|>": "arxiv-54350", "<|cite_14|>": "ss-944090"} |
2208.09568 | <|paper_start|> Title: Probabilities of Causation with Nonbinary Treatment and Effect
Abstract: Probabilities of Causation with Nonbinary Treatment and Effect: This paper deals with the problem of estimating the probabilities of causation when treatment and effect are not binary. Tian and Pearl derived sharp bounds for the probability of necessity and sufficiency (PNS), the probability of sufficiency (PS), and the probability of necessity (PN) using experimental and observational data. In this paper, we extend the theoretical bounds for all types of probabilities of causation to multivalued treatments and effects. We further discuss examples where our bounds guide practical decisions and use simulation studies to evaluate how informative the bounds are for various combinations of data.
Introduction
In many areas of industry, marketing, and health science, the
probabilities of causation are widely used to solve decision-making problems. For example, Li and Pearl <|cite_start|> (Reference: Unit selection based on counterfactual logic: The unit selection problem aims to identify a set of individuals who are most likely to exhibit a desired mode of behavior, which is defined in counterfactual terms. A typical example is that of selecting individuals who would respond one way if encouraged and a different way if not encouraged. Unlike previous works on this problem, which rely on ad-hoc heuristics, we approach this problem formally, using counterfactual logic, to properly capture the nature of the desired behavior. This formalism enables us to derive an informative selection criterion which integrates experimental and observational data. We demonstrate the superiority of this criterion over A/B-test-based approaches.) <|cite_end|> proposed the “benefit function”, which is the payoff/cost associated with selecting an individual with given characteristics to identify a set of individuals who are most likely to exhibit a desired mode of behavior. In Li and Pearl's paper, the benefit function is a linear combination of the probabilities of causation with binary treatment and effect. For another example, Mueller and Pearl <|cite_start|> (Reference: Personalized Decision Making -- A Conceptual Introduction: Personalized decision making targets the behavior of a specific individual, while population-based decision making concerns a sub-population resembling that individual. This paper clarifies the distinction between the two and explains why the former leads to more informed decisions. We further show that by combining experimental and observational studies we can obtain valuable information about individual behavior and, consequently, improve decisions over those obtained from experimental studies alone.) <|cite_end|> demonstrated that the probabilities of causation should be considered in personalized decision-making.
Consider the following motivating scenario: an elderly patient with cancer is faced with the choice of treatment to pursue. The options include surgery, chemotherapy, and radiation. The outcomes include ineffective, cured, and death. Given that the elderly patient has a high risk of death from cancer surgery, the patient wants to know the probability that he would be cured if he chose radiation, would die if he chose surgery, and that nothing would change if he chose chemotherapy. Let $X$ denote the treatment, where $x_1$ denotes surgery, $x_2$ denotes chemotherapy, and $x_3$ denotes radiation. Let $Y$ denote the outcome, where $y_1$ denotes ineffective, $y_2$ denotes cured, and $y_3$ denotes death. The probability the patient seeks is the probability of causation, $P({y_3}_{x_1},{y_1}_{x_2},{y_2}_{x_3})$.
Pearl <|cite_start|> (Reference: Probabilities Of Causation: Three Counterfactual Interpretations And Their Identification: ) <|cite_end|> first defined three binary probabilities of causation (i.e., PNS, PN, and PS) using SCM <|cite_start|> (Reference: An Axiomatic Characterization of Causal Counterfactuals: ) <|cite_end|> <|cite_start|> (Reference: Axiomatizing Causal Reasoning: Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl. In addition, the complexity of the decision procedures is examined for all the languages and classes of models considered.) <|cite_end|> <|cite_start|> (Reference: {Causality: In philosophy intuition is used in reasoning as a test-bed for the conclusions of philosophical arguments. Logic, rhetoric and intuition are the main conceptual tools in philosophical reasoning. Intuition often acts as a sort of empirical verification of the acceptability of a particular thesis. Rather like a sort of empirical test or an experimental control, to use an analogy with what happens in natural science. The basis for this method is that intuition is generalisable, or in other words, broadly speaking, it can be shared at a universal level. Moreover, intuition must have foundational validity, a primary capacity for justification that is greater than any other alternative information. It should be greater than the reference to data from the cultural and religious tradition, for example, or the recourse to the theses of classical authors. Likewise it should be able to withstand the hypotheses and empirical confirmations of scientific and technical knowledge. Experimental philosophy appears to question intuition’s alleged foundational and universal nature. Intuition is a psychological phenomenon linked to what is conventionally known, according to some authors (Stanovich 1999; see Chap. 9 of Viale 2012), but not to others (Gigerenzer 2007), as System 1 of mind. Contrary to System 2, which is rational and explicit, this system is implicit and highly contextdependent. It is permeable to the influences of emotional variables derived from the cultural and environmental context. Seen in this way, it would seem difficult to affirm the thesis of the universality of human intuition. The underlying hypothesis derived from the findings of cognitive science argues the contrary: namely that intuition is local and contingent, changing in relation not only to cultural context but also to individual psychological variables, like personality traits or emotional and affective contingencies. Experimental philosophy has explored the universality) <|cite_end|>. Tian and Pearl <|cite_start|> (Reference: Probabilities of Causation: Bounds and Identification: This paper deals with the problem of estimating the probability that one event was a cause of another in a given scenario. 
Using structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to optimally bound these quantities from data obtained in experimental and observational studies, making minimal assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by weakening the data-generation assumptions and deriving theoretically sharp bounds on the probabilities of causation. These results delineate precisely how empirical data can be used both in settling questions of attribution and in solving attribution-related problems of decision making.) <|cite_end|> then used observational and experimental data to bound those three probabilities of causation. Li and Pearl <|cite_start|> (Reference: Unit selection based on counterfactual logic: The unit selection problem aims to identify a set of individuals who are most likely to exhibit a desired mode of behavior, which is defined in counterfactual terms. A typical example is that of selecting individuals who would respond one way if encouraged and a different way if not encouraged. Unlike previous works on this problem, which rely on ad-hoc heuristics, we approach this problem formally, using counterfactual logic, to properly capture the nature of the desired behavior. This formalism enables us to derive an informative selection criterion which integrates experimental and observational data. We demonstrate the superiority of this criterion over A/B-test-based approaches.) <|cite_end|> <|cite_start|> (Reference: Unit Selection with Causal Diagram: The unit selection problem aims to identify a set of individuals who are most likely to exhibit a desired mode of behavior, for example, selecting individuals who would respond one way if encouraged and a different way if not encouraged. Using a combination of experimental and observational data, Li and Pearl derived tight bounds on the "benefit function" - the payoff/cost associated with selecting an individual with given characteristics. This paper shows that these bounds can be narrowed significantly (enough to change decisions) when structural information is available in the form of a causal model. We address the problem of estimating the benefit function using observational and experimental data when specific graphical criteria are assumed to hold.) <|cite_end|> provided formal proof of those bounds. Mueller, Li, and Pearl <|cite_start|> (Reference: Causes of Effects: Learning individual responses from population data: The problem of individualization is recognized as crucial in almost every field. Identifying causes of effects in specific events is likewise essential for accurate decision making. However, such estimates invoke counterfactual relationships, and are therefore indeterminable from population data. For example, the probability of benefiting from a treatment concerns an individual having a favorable outcome if treated and an unfavorable outcome if untreated. Experiments conditioning on fine-grained features are fundamentally inadequate because we can't test both possibilities for an individual. Tian and Pearl provided bounds on this and other probabilities of causation using a combination of experimental and observational data. Even though those bounds were proven tight, narrower bounds, sometimes significantly so, can be achieved when structural information is available in the form of a causal model. 
This has the power to solve central problems, such as explainable AI, legal responsibility, and personalized medicine, all of which demand counterfactual logic. We analyze and expand on existing research by applying bounds to the probability of necessity and sufficiency (PNS) along with graphical criteria and practical applications.) <|cite_end|> recently proposed using covariate information and the causal structure to narrow the bounds of the probability of necessity and sufficiency. Dawid et al. <|cite_start|> (Reference: The Probability of Causation: Many legal cases require decisions about causality, responsibility or blame, and these may be based on statistical data. However, causal inferences from such data are beset by subtle conceptual and practical difficulties, and in general it is, at best, possible to identify the "probability of causation" as lying between certain empirically informed limits. These limits can be refined and improved if we can obtain additional information, from statistical or scientific data, relating to the internal workings of the causal processes. In this paper we review and extend recent work in this area, where additional information may be available on covariate and/or mediating variables.) <|cite_end|> also proposed using covariate information to narrow the bounds of the probability of necessity.
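For completeness, the binary definitions and the experimental-data part of the bounds discussed above are (following the cited literature, with $y_x$ the potential outcome of $Y$ under treatment $x$, and $x', y'$ the complementary values):
\[
\mathrm{PNS} = P(y_x, y'_{x'}), \qquad
\mathrm{PN} = P(y'_{x'} \mid x, y), \qquad
\mathrm{PS} = P(y_x \mid x', y'),
\]
\[
\max\{0,\; P(y_x) - P(y_{x'})\} \;\le\; \mathrm{PNS} \;\le\; \min\{P(y_x),\; P(y'_{x'})\};
\]
the full Tian--Pearl bounds tighten these further with observational terms such as $P(x,y)$.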
All the above-mentioned studies are restricted to binary treatment and effect, limiting the application of probabilities of causation. Zhang, Tian, and Bareinboim <|cite_start|> (Reference: Partial Counterfactual Identification from Observational and Experimental Data: This paper investigates the problem of bounding counterfactual queries from an arbitrary collection of observational and experimental distributions and qualitative knowledge about the underlying data-generating model represented in the form of a causal diagram. We show that all counterfactual distributions in an arbitrary structural causal model (SCM) could be generated by a canonical family of SCMs with the same causal diagram where unobserved (exogenous) variables are discrete with a finite domain. Utilizing the canonical SCMs, we translate the problem of bounding counterfactuals into that of polynomial programming whose solution provides optimal bounds for the counterfactual query. Solving such polynomial programs is in general computationally expensive. We therefore develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data. Our algorithms are validated extensively on synthetic and real-world datasets.) <|cite_end|>, as well as Li and Pearl <|cite_start|> (Reference: Bounds on Causal Effects and Application to High Dimensional Data: This paper addresses the problem of estimating causal effects when adjustment variables in the back-door or front-door criterion are partially observed. For such scenarios, we derive bounds on the causal effects by solving two non-linear optimization problems, and demonstrate that the bounds are sufficient. Using this optimization method, we propose a framework for dimensionality reduction that allows one to trade bias for estimation power, and demonstrate its performance using simulation studies.) <|cite_end|>, proposed nonlinear programming-based solutions to compute the bounds of nonbinary probabilities of causation numerically. However, a theoretical foundation for nonbinary probabilities of causation is still needed, not only because numerical methods are limited by computational power, but also because a theoretical foundation enables further development and analysis. In this paper, we introduce theoretical bounds for any probability of causation defined using an SCM, without restricting treatment and effect to be binary.
"<|reference_start|> Personalized Decision Making -- A Conceptual Introduction: Personalized decision making targets the behavior of a specific individual, while population-based decision making concerns a sub-population resembling that individual. This paper clarifies the distinction between the two and explains why the former leads to more informed decisions. We further show that by combining experimental and observational studies we can obtain valuable information about individual behavior and, consequently, improve decisions over those obtained from experimental studies alone. <|reference_end|>",
"<|reference_start|> Probabilities Of Causation: Three Counterfactual Interpretations And Their Identification: <|reference_end|>",
"<|reference_start|> An Axiomatic Characterization of Causal Counterfactuals: <|reference_end|>",
"<|reference_start|> Bounds on Causal Effects and Application to High Dimensional Data: This paper addresses the problem of estimating causal effects when adjustment variables in the back-door or front-door criterion are partially observed. For such scenarios, we derive bounds on the causal effects by solving two non-linear optimization problems, and demonstrate that the bounds are sufficient. Using this optimization method, we propose a framework for dimensionality reduction that allows one to trade bias for estimation power, and demonstrate its performance using simulation studies. <|reference_end|>"
] | [
1,
2,
3,
12
] | {"<|cite_1|>": "ss-1522990", "<|cite_2|>": "arxiv-441212", "<|cite_3|>": "ss-904564", "<|multi_cite_4_1|>": "ss-1399664", "<|multi_cite_4_2|>": "arxiv-64527", "<|multi_cite_4_3|>": "ss-1094665", "<|cite_5|>": "arxiv-40459", "<|multi_cite_6_1|>": "ss-1522990", "<|multi_cite_6_2|>": "arxiv-367308", "<|cite_7|>": "arxiv-337437", "<|cite_8|>": "ss-1541954", "<|cite_9|>": "arxiv-373328", "<|cite_10|>": "arxiv-350345"} |
2202.07904 | <|paper_start|> Title: Blockchain Security when Messages are Lost
Abstract: Blockchain Security when Messages are Lost: Security analyses for consensus protocols in blockchain research have primarily focused on the synchronous model, where point-to-point communication delays are upper bounded by a known finite constant. These models are unrealistic in noisy settings, where messages may be lost (i.e. incur infinite delay). In this work, we study the impact of message losses on the security of the proof-of-work longest-chain protocol. We introduce a new communication model to capture the impact of message loss called the $0-\infty$ model, and derive a region of tolerable adversarial power under which the consensus protocol is secure. The guarantees are derived as a simple bound for the probability that a transaction violates desired security properties. Specifically, we show that this violation probability decays almost exponentially in the security parameter. Our approach involves constructing combinatorial objects from blocktrees, and identifying random variables associated with them that are amenable to analysis. This approach improves existing bounds and extends the known regime for tolerable adversarial threshold in settings where messages may be lost.
Introduction
A blockchain is the data structure used by peers (miners) in a peer-to-peer network to maintain a common ledger in a decentralized manner. The consistency of this ledger is ensured through consensus protocols such as the longest-chain protocol. Following this protocol, an honest miner groups transactions into a block and appends its block to the longest chain in its view, before broadcasting the new blockchain to all other peers. Further, the system may contain adversarial users that deviate from the protocol arbitrarily. Even though adversarial users may attempt to disrupt the system and peer-to-peer communication incurs message delays, it is desirable that the parties following the protocol agree on a consistent ledger.
Blockchain security has been studied under various consensus protocols (see <|cite_start|> (Reference: SoK: Consensus in the Age of Blockchains: The core technical component of blockchains is consensus: how to reach agreement among a distributed network of nodes. A plethora of blockchain consensus protocols have been proposed---ranging from new designs, to novel modifications and extensions of consensus protocols from the classical distributed systems literature. The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematization of knowledge of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: (i) protocols based on proof-of-work; (ii) proof-of-X protocols that replace proof-of-work with more energy-efficient alternatives; and (iii) hybrid protocols that are compositions or variations of classical consensus protocols. This survey is guided by a systematization framework we develop, to highlight the various building blocks of blockchain consensus design, along with a discussion on their security and performance properties. We identify research gaps and insights for the community to consider in future research endeavours.) <|cite_end|> <|cite_start|> (Reference: SoK: A Consensus Taxonomy in the Blockchain Era: ) <|cite_end|> for a survey). Of these, the longest-chain protocol is of great interest, due to its heavy use in modern blockchain implementations. The longest-chain protocol has been modeled under various assumptions: for example, discrete time is used in <|cite_start|> (Reference: Tight consistency bounds for bitcoin: We establish the optimal security threshold for the Bitcoin protocol in terms of adversarial hashing power, honest hashing power, and network delays. Specifically, we prove that the protocol is secure if [ra < 1/Δ0 + 1/rh,,] where rh is the expected number of honest proof-of-work successes in unit time, ra is the expected number of adversarial successes, and no message is delayed by more than Δ0 time units. In this regime, the protocol guarantees consistency and liveness with exponentially decaying failure probabilities. Outside this region, the simple private chain attack prevents consensus. Our analysis immediately applies to any Nakamoto-style proof-of-work protocol; in the full version of this paper we also present the adaptations needed to apply it in the proof-of-stake setting, establishing a similar threshold there.) <|cite_end|> <|cite_start|> (Reference: The Combinatorics of the Longest-Chain Rule: Linear Consistency for Proof-of-Stake Blockchains: ,) <|cite_end|>, and continuous time dynamics is used in <|cite_start|> (Reference: Close latency-security trade-off for the Nakamoto consensus: Bitcoin is a peer-to-peer electronic cash system invented by Nakamoto in 2008. While it has attracted much research interest, its exact latency and security properties remain open. Existing analyses provide security and latency (or confirmation time) guarantees that are too loose for practical use. In fact the best known upper bounds are several orders of magnitude larger than a lower bound due to a well-known private-mining attack. This paper describes a continuous-time model for blockchains and develops a rigorous analysis that yields close upper and lower bounds for the latency-security trade-off.
For example, when the adversary controls 10% of the total mining power and the block propagation delays are within 10 seconds, a Bitcoin block is secured with less than 10-3 error probability if it is confirmed after four hours, or with less than 10-9 error probability if confirmed after ten hours. These confirmation times are about two hours away from their corresponding lower bounds. To establish such close bounds, the blockchain security question is reduced to a race between the Poisson adversarial mining process and a renewal process formed by a certain species of honest blocks. The moment generation functions of relevant renewal times are derived in closed form. The general formulas from the analysis are then applied to study the latency-security trade-off of several well-known proof-of-work longest-chain cryptocurrencies. Guidance is also provided on how to set parameters for different purposes.) <|cite_end|> <|cite_start|> (Reference: Analysis of Nakamoto Consensus.: The famed Bitcoin white paper presented an unconventional (at the time) Byzantine fault tolerant consensus algorithm that is now known as the Nakamoto consensus [6]. Nakamoto consensus centers around the proofof-work (PoW) mechanism and the “longest-chain-win” rule. It is extremely simple and can be described very succinctly: at any time, an honest node adopts the longest PoW chain to its knowledge and attempts to mine a new block that extends this longest chain; a block is committed when buried sufficiently deep in the chain. Such a simple algorithm deserves a simple analysis, which is what this paper aims to provide.) <|cite_end|>. Further, the protocol has also been studied for a variety of leader election mechanisms in the consensus protocol. For instance, <|cite_start|> (Reference: Analysis of the Blockchain Protocol in Asynchronous Networks: ) <|cite_end|> <|cite_start|> (Reference: Analysis of Nakamoto Consensus.: The famed Bitcoin white paper presented an unconventional (at the time) Byzantine fault tolerant consensus algorithm that is now known as the Nakamoto consensus [6]. Nakamoto consensus centers around the proofof-work (PoW) mechanism and the “longest-chain-win” rule. It is extremely simple and can be described very succinctly: at any time, an honest node adopts the longest PoW chain to its knowledge and attempts to mine a new block that extends this longest chain; a block is committed when buried sufficiently deep in the chain. Such a simple algorithm deserves a simple analysis, which is what this paper aims to provide.) <|cite_end|> <|cite_start|> (Reference: How Does Nakamoto Set His Clock? Full Analysis of Nakamoto Consensus in Bounded Delay Networks: . Nakamoto consensus, arguably the most exciting development in distributed computing in the last few years, is in a sense a recasting of the traditional state-machine-replication problem in an unauthenticated setting, where furthermore parties come and go without warning. The protocol relies on a cryptographic primitive known as proof of work (PoW) which is used to throttle message passing. Importantly, the PoW difficulty level is appropriately adjusted throughout the course of the protocol execution relying on the blockchain’s timekeeping ability. While the original formulation was only accompanied by rudimentary analysis, significant and steady progress has been made in abstracting the protocol’s properties and providing a formal analysis under various restrictions and protocol simplifications. 
Still, a full analysis of the protocol that includes its target recalculation and, notably, the timestamp adjustment mechanism —specifically, the protocol allows incoming block timestamps in the near future, as determined by a protocol parameter, and rejects blocks that have a timestamp in the past of the median time of a specific number of blocks on-chain (namely, 11)— which equip it to operate in its intended setting of bounded communication delays, imperfect clocks and dynamic participation, has remained open. The gap is that Nakamoto’s protocol fundamentally depends on the blockchain itself to be a consistent timekeeper that should advance roughly on par with real time. In order to tackle this question we introduce a new analytical tool that we call hot-hand executions , which capture the regular occurrence of high concentration of honestly generated blocks, and correspondingly put forth and prove a new blockchain property called concentrated chain quality , which may be of independent interest. Utilizing these tools and techniques we demonstrate that Nakamoto’s protocol achieves, under suitable conditions, safety, liveness as well as (consistent) timekeeping.) <|cite_end|> assume the proof-of-work mechanism, whereas <|cite_start|> (Reference: The Sleepy Model of Consensus: ) <|cite_end|> <|cite_start|> (Reference: Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol: ) <|cite_end|> <|cite_start|> (Reference: iChing: A Scalable Proof-of-Stake Blockchain in the Open Setting (or, How to Mimic Nakamoto's Design via Proof-of-Stake): ) <|cite_end|> assume a proof-of-stake mechanism. All these works establish security of the longest-chain protocol for the synchronous communication model, where communication delays are upper bounded by a known finite constant. A common theme among these results is that in the synchronous delay model, the longest-chain protocol is `secure' under sufficient honest representation, with high probability.
In this work, we analyze the impact of message losses on the security of the longest-chain protocol under proof-of-work leader election, by introducing and studying an appropriate communication network model. We motivate this by reviewing existing communication models in the literature and the known security guarantees associated with them.
\subsection{Related Work}
The underlying communication network can delay the successful delivery of peer-to-peer message broadcasts. Popular blockchains such as Bitcoin use the Internet as their communication network. Since this communication is subject to delay, it is natural to model the delays incurred by each block, and study the impact of delay on the security of the longest-chain protocol.
Let \(0 \leq i < j\). Let \(b_i\) represent the \(i\)-th mined honest block. Let \( \delay{i}{j}\) denote the time taken for block \(b_i\) to reach the miner of block \(b_j\), and let \(\beta\) represent the fraction of adversarial computational power in the system. Finally, let \(\lambda\) be the rate at which blocks are mined in the system. Various descriptions of \(\delay{i}{j}\) lead to different communication network models:
\paragraph{Instantaneous Model} The original white-paper by Satoshi Nakamoto <|cite_start|> (Reference: Bitcoin: A Peer-to-Peer electronic cash system: Original author: Satoshi Nakamoto. Translation exclusively sponsored by Bitcoinblogger.com. Author email: [email protected] www.bitcoin.org [Abstract]: This paper proposes an electronic cash system implemented entirely through peer-to-peer technology, which allows online payments to be initiated by one party and paid directly to the other without going through any financial institution. Digital signatures provide part of the solution, but the system loses its value if a trusted third party is still required to prevent double-spending. We propose a solution that allows the cash system to operate in a peer-to-peer environment and prevents double-spending. The network timestamps all transactions by hashing them into a continuously extending chain of hash-based proof-of-work, forming a transaction record that cannot be altered unless all of the proof-of-work is redone. The longest chain serves not only as proof of the observed sequence of events, but is also regarded as coming from the pool with the greatest CPU computing power. As long as the majority of CPU computing power is not cooperating to attack the network, honest nodes will generate the longest chain and outpace any attacker. The system itself requires very little infrastructure. Messages are broadcast across the network on a best-effort basis, and nodes can leave and rejoin the network at any time, accepting the longest proof-of-work chain as proof of the transactions that occurred while they were offline.) <|cite_end|> assumes an ideal communication channel, i.e. \( \delay{i}{j} = 0 \). In this model, the longest-chain protocol is provably secure when the honest computational power in the system exceeds the adversarial computational power, i.e. when \( \beta < 1 - \beta\), or equivalently, when \( \beta < 1/2\).
\paragraph{Synchronous Model} The model assumes a deterministic delay for each block that is upper bounded by a known constant \(\Delta\), i.e., \( \delay{i}{j} \leq \Delta < \infty \). This delay effectively reduces the growth rate of the chain held by an honest user. Even so, it has been proved <|cite_start|> (Reference: Tight consistency bounds for bitcoin: We establish the optimal security threshold for the Bitcoin protocol in terms of adversarial hashing power, honest hashing power, and network delays. Specifically, we prove that the protocol is secure if [ra < 1/Δ0 + 1/rh,,] where rh is the expected number of honest proof-of-work successes in unit time, ra is the expected number of adversarial successes, and no message is delayed by more than Δ0 time units. In this regime, the protocol guarantees consistency and liveness with exponentially decaying failure probabilities. Outside this region, the simple private chain attack prevents consensus. Our analysis immediately applies to any Nakamoto-style proof-of-work protocol; in the full version of this paper we also present the adaptations needed to apply it in the proof-of-stake setting, establishing a similar threshold there.) <|cite_end|> that the synchronous model is secure with high probability if and only if
\alns{
\beta < \frac{1}{1+\pbr{1-\beta}\lambda \Delta} \pbr{1 - \beta},
}
where \(\lambda\) is the total mining rate in the system, so that \((1-\beta)\lambda\) is the mining rate of the honest users.
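For intuition, this threshold can be evaluated numerically. The sketch below (our illustration, not part of the cited analysis) bisects for the largest \(\beta\) satisfying the inequality; the values of \(\lambda\) and \(\Delta\) are assumed placeholders.
\begin{verbatim}
# Illustrative check: largest adversarial fraction beta satisfying
#   beta < (1 - beta) / (1 + (1 - beta) * lam * Delta),
# where lam is the total mining rate and Delta the delay bound (assumed values).
lam, Delta = 1.0 / 600.0, 10.0   # e.g. one block per 600 s, 10 s delay bound

def secure(beta):
    return beta < (1.0 - beta) / (1.0 + (1.0 - beta) * lam * Delta)

lo, hi = 0.0, 0.5
for _ in range(60):                      # bisection on the critical beta
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if secure(mid) else (lo, mid)
print(f"tolerable adversarial fraction ~ {lo:.4f}")   # just below 1/2 when lam*Delta is small
\end{verbatim}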
\paragraph{Partially Synchronous Model}
The partially synchronous model assumes the existence of some unknown and adversarially chosen `Global Stabilization Time (\(\mathsf{GST}\))' such that the delays are unbounded before \(\mathsf{GST}\), but bounded after it <|cite_start|> (Reference: {Consensus in the Presence of Partial Synchrony: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound Δ on the time required for a message to be sent from one processor to another and a known fixed upper bound &PHgr; on the relative speeds of different processors. In an asynchronous system no fixed upper bounds Δ and &PHgr; exist. In one version of partial synchrony, fixed bounds Δ and &PHgr; exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds Δ and &PHgr;. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T, and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time.) <|cite_end|>. Therefore, at any time \(t\), the delay satisfies \(\delay{i}{j} \leq \Delta + \max\pbr{0, \mathsf{GST}-t}\). If certain conditions are met, the partially synchronous model is known to be secure with high probability after the Global Stabilization Time <|cite_start|> (Reference: Ebb-and-Flow Protocols: A Resolution of the Availability-Finality Dilemma: The CAP theorem says that no blockchain can be live under dynamic participation and safe under temporary network partitions. To resolve this availability-finality dilemma, we formulate a new class of flexible consensus protocols, ebb-and-flow protocols, which support a full dynamically available ledger in conjunction with a finalized prefix ledger. The finalized ledger falls behind the full ledger when the network partitions but catches up when the network heals. Gasper, the current candidate protocol for Ethereum 2.0's beacon chain, combines the finality gadget Casper FFG with the LMD GHOST fork choice rule and aims to achieve this property. However, we discovered an attack in the standard synchronous network model, highlighting a general difficulty with existing finality-gadget-based designs. We present a construction of provably secure ebb-and-flow protocols with optimal resilience. Nodes run an off-the-shelf dynamically available protocol, take snapshots of the growing available ledger, and input them into a separate off-the-shelf BFT protocol to finalize a prefix. We explore connections with flexible BFT and improve upon the state-of-the-art for that problem.) <|cite_end|>.
\paragraph{Sleepy Model}
The sleepy model considers the setting where miners may either be online or offline, and their participation status may change during the execution of the protocol <|cite_start|> (Reference: The Sleepy Model of Consensus: ) <|cite_end|>. Let \(h_i\) denote the miner of block \(b_i\). The incurred delay is thus
\alns{
\delay{i}{j} = \cas{ 0 & \text{\(h_j\) is awake when \(b_i\) is mined} \\ \infty & \text{\(h_j\) is asleep when \(b_i\) is mined}}.
}
Pass and Shi <|cite_start|> (Reference: The Sleepy Model of Consensus: ) <|cite_end|> showed that consensus can be achieved in the sleepy model with high probability, if a majority of the awake miners at any point in time are honest.
\paragraph{Random Delay Model}
The random delay model assumes that the point-to-point delays are independent and identically distributed, i.e. \(\delay{i}{j} \sim \mathsf{X}\), where \( \mathsf{X}\) is some known distribution. The longest-chain protocol is shown to be secure with high probability in the random delay model, if the delay distribution satisfies certain conditions and the adversarial representation in the system is below a certain threshold <|cite_start|> (Reference: The Longest-Chain Protocol Under Random Delays: In the field of distributed consensus and blockchains, the synchronous communication model assumes that all messages between honest parties are delayed at most by a known constant $\Delta$. Recent literature establishes that the longest-chain blockchain protocol is secure under the synchronous model. However, for a fixed mining rate, the security guarantees degrade with $\Delta$. We analyze the performance of the longest-chain protocol under the assumption that the communication delays are random, independent, and identically distributed. This communication model allows for distributions with unbounded support and is a strict generalization of the synchronous model. We provide safety and liveness guarantees with simple, explicit bounds on the failure probabilities. These bounds hold for infinite-horizon executions and decay exponentially with the security parameter. In particular, we show that the longest-chain protocol has good security guarantees when delays are sporadically large and possibly unbounded, which is reflective of real-world network conditions.) <|cite_end|>.
Except for the random delay model, none of the above models account for the possibility that point-to-point communication may incur infinite delay, i.e. messages may be lost at random. For instance, the sleepy model allows infinite delay for users that are offline, but does not account for noise in the communication process. In contrast, we introduce and analyze a new communication model to study the impact of lost messages on blockchain security.
\subsection{Contributions}
\paragraph{\(0 \dash \infty\) Model} We introduce the \(0\dash\infty\) model, where the delays are independent and identically distributed over the set \(\{0,\infty\}\). Specifically, for any \(i\), \(j \geq 0\) such that \(i < j\):
\alns{
\delay{i}{j} = \cas{ 0 & \text{with probability } 1-d \\ \infty & \text{with probability } d}.
}
This simple model postulates that a message sent point-to-point is either immediately received or permanently lost. This delay is independent for each user, and for each block. The modeling choice aligns with our objective of studying the effect of message losses.
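To build intuition for how such losses slow the growth of the chains held by honest users, the following toy Monte Carlo sketch (our illustration, not the analysis developed in this paper) pits a private-mining adversary against honest miners whose blocks are delivered under the \(0\dash\infty\) model; the number of miners, the loss probability \(d\), the adversarial fraction \(\beta\), and the horizon are all assumed placeholder values.
\begin{verbatim}
import random

random.seed(0)
n, d, beta, num_blocks = 100, 0.7, 0.2, 10000   # assumed parameters
height = [0] * n      # local longest-chain length of each honest miner
adversary = 0         # length of the adversary's private chain

for _ in range(num_blocks):
    if random.random() < beta:
        adversary += 1                   # adversary extends its private chain
    else:
        m = random.randrange(n)          # a random honest miner finds a block
        new_h = height[m] + 1            # extending its own local longest chain
        height[m] = new_h
        for j in range(n):               # lossy broadcast: each delivery succeeds w.p. 1-d
            if j != m and random.random() > d:
                height[j] = max(height[j], new_h)

print("honest longest chain:", max(height), " adversarial chain:", adversary)
# With these values (1-beta)*(1-d) = 0.24 > beta = 0.2, so the honest chain should
# stay ahead on average, consistent with the sufficient condition discussed below.
\end{verbatim}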
We remark that the \(0 \dash \infty\) model is a special case of the i.i.d. random delay model introduced in <|cite_start|> (Reference: The Longest-Chain Protocol Under Random Delays: In the field of distributed consensus and blockchains, the synchronous communication model assumes that all messages between honest parties are delayed at most by a known constant $\Delta$. Recent literature establishes that the longest-chain blockchain protocol is secure under the synchronous model. However, for a fixed mining rate, the security guarantees degrade with $\Delta$. We analyze the performance of the longest-chain protocol under the assumption that the communication delays are random, independent, and identically distributed. This communication model allows for distributions with unbounded support and is a strict generalization of the synchronous model. We provide safety and liveness guarantees with simple, explicit bounds on the failure probabilities. These bounds hold for infinite-horizon executions and decay exponentially with the security parameter. In particular, we show that the longest-chain protocol has good security guarantees when delays are sporadically large and possibly unbounded, which is reflective of real-world network conditions.) <|cite_end|>, which identifies a region of tolerable adversarial power as a function of the delay distribution. Specifically, if \(d\) is the probability of message loss and \(\beta\) is the fraction of computational power in the system that is adversarial, it is shown that the \(0\dash\infty\) model is secure with high probability when \( \beta < \frac{1-2d}{2\pbr{1-d}}\). However, this characterization is not tight for the \(0 \dash \infty\) model, and the analysis in <|cite_start|> (Reference: The Longest-Chain Protocol Under Random Delays: In the field of distributed consensus and blockchains, the synchronous communication model assumes that all messages between honest parties are delayed at most by a known constant $\Delta$. Recent literature establishes that the longest-chain blockchain protocol is secure under the synchronous model. However, for a fixed mining rate, the security guarantees degrade with $\Delta$. We analyze the performance of the longest-chain protocol under the assumption that the communication delays are random, independent, and identically distributed. This communication model allows for distributions with unbounded support and is a strict generalization of the synchronous model. We provide safety and liveness guarantees with simple, explicit bounds on the failure probabilities. These bounds hold for infinite-horizon executions and decay exponentially with the security parameter. In particular, we show that the longest-chain protocol has good security guarantees when delays are sporadically large and possibly unbounded, which is reflective of real-world network conditions.) <|cite_end|> breaks down in the high-noise regime. For example, security of the model cannot be established when \(d > 1/2\), i.e. more than half the messages are lost on average.
It is reasonable to wonder whether any adversarial computational power can be tolerated in the high-noise regime, for instance, when almost all messages are lost. Our work answers this question in the affirmative by expanding the known security threshold for the \(0 \dash \infty\) model. In particular, our sufficient condition for security is \( \frac{\beta}{1-\beta} < 1-d\). Figure~\ref{fig: Region} shows this improvement.
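The two regions in Figure~\ref{fig: Region} can also be compared numerically. The snippet below (our illustration) evaluates the earlier bound \(\beta < \frac{1-2d}{2(1-d)}\) and the rearranged form of our condition, \(\beta < \frac{1-d}{2-d}\), at a few loss probabilities; the rearrangement is elementary algebra and the chosen values of \(d\) are arbitrary.
\begin{verbatim}
# Numeric comparison of the two sufficient conditions as functions of d:
#   prior bound:  beta < (1 - 2d) / (2 * (1 - d))   (vacuous once d >= 1/2)
#   this work:    beta / (1 - beta) < 1 - d, i.e. beta < (1 - d) / (2 - d)
for d in [0.0, 0.25, 0.5, 0.75, 0.9]:
    prior = max(0.0, (1 - 2 * d) / (2 * (1 - d)))
    ours = (1 - d) / (2 - d)
    print(f"d = {d:.2f}:  prior threshold = {prior:.3f},  our threshold = {ours:.3f}")
\end{verbatim}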
\begin{figure}[t]
\centering
\includegraphics[width = 0.6 \textwidth]{Figures/Region-Comparison.pdf}
\caption{Characterizing the region of tolerable adversarial power}
\label{fig: Region}
\end{figure}
Our method of analysis is significantly different from that in <|cite_start|> (Reference: The Longest-Chain Protocol Under Random Delays: In the field of distributed consensus and blockchains, the synchronous communication model assumes that all messages between honest parties are delayed at most by a known constant $\Delta$. Recent literature establishes that the longest-chain blockchain protocol is secure under the synchronous model. However, for a fixed mining rate, the security guarantees degrade with $\Delta$. We analyze the performance of the longest-chain protocol under the assumption that the communication delays are random, independent, and identically distributed. This communication model allows for distributions with unbounded support and is a strict generalization of the synchronous model. We provide safety and liveness guarantees with simple, explicit bounds on the failure probabilities. These bounds hold for infinite-horizon executions and decay exponentially with the security parameter. In particular, we show that the longest-chain protocol has good security guarantees when delays are sporadically large and possibly unbounded, which is reflective of real-world network conditions.) <|cite_end|>: we introduce a transmission-graph that captures the history of communication delays between blocks, and identify special paths in the graph that are linked to random variables which are amenable to analysis. Specifically, we identify special objects such as forward-special and backward-special blocks, and associate with them random variables such as forward-unheard and backward-unheard. Our technique also presents a new approach to infer the inclusion of special blocks in the chain held by an honest user through the concept of user-unheard-criterion. The method of analysis is inspired by prior work in which security of the synchronous model is established by considering races between honest and adversarial chains. However, our approach does not rely on message delays being finite, and we hope that the tools we introduce are of utility in the study of consensus mechanisms in more general settings, as well as of independent mathematical interest.
Our contributions are summarized as:
\begin{itemize}
\item We introduce the \(0\dash\infty\) model as a playground for studying the impact of message losses. This model provides a starting point for more complex models of lossy communication.
\item We introduce combinatorial objects of independent interest such as the transmission-graph. We also identify random variables (forward-unheard, backward-unheard) associated with this graph that are amenable to analysis, and introduce the user-unheard-criterion. These concepts may be utilized in security analysis of blockchain protocols in more general settings.
\item We prove that the longest-chain protocol is secure in the \(0\dash\infty\) model if certain conditions are met. These conditions are fairly general, and considerably extend the known threshold of tolerable adversarial power. In this regime, we show that the probability of security violation decays almost exponentially in the security parameter.
\end{itemize} <|paper_end|> | [
"<|reference_start|> Tight consistency bounds for bitcoin: We establish the optimal security threshold for the Bitcoin protocol in terms of adversarial hashing power, honest hashing power, and network delays. Specifically, we prove that the protocol is secure if [ra < 1/Δ0 + 1/rh,,] where rh is the expected number of honest proof-of-work successes in unit time, ra is the expected number of adversarial successes, and no message is delayed by more than Δ0 time units. In this regime, the protocol guarantees consistency and liveness with exponentially decaying failure probabilities. Outside this region, the simple private chain attack prevents consensus. Our analysis immediately applies to any Nakamoto-style proof-of-work protocol; in the full version of this paper we also present the adaptations needed to apply it in the proof-of-stake setting, establishing a similar threshold there. <|reference_end|>",
"<|reference_start|> How Does Nakamoto Set His Clock? Full Analysis of Nakamoto Consensus in Bounded Delay Networks: . Nakamoto consensus, arguably the most exciting development in distributed computing in the last few years, is in a sense a recasting of the traditional state-machine-replication problem in an unauthenticated setting, where furthermore parties come and go without warning. The protocol relies on a cryptographic primitive known as proof of work (PoW) which is used to throttle message passing. Importantly, the PoW difficulty level is appropriately adjusted throughout the course of the protocol execution relying on the blockchain’s timekeeping ability. While the original formulation was only accompanied by rudimentary analysis, significant and steady progress has been made in abstracting the protocol’s properties and providing a formal analysis under various restrictions and protocol simplifications. Still, a full analysis of the protocol that includes its target recalculation and, notably, the timestamp adjustment mechanism —specifically, the protocol allows incoming block timestamps in the near future, as determined by a protocol parameter, and rejects blocks that have a timestamp in the past of the median time of a specific number of blocks on-chain (namely, 11)— which equip it to operate in its intended setting of bounded communication delays, imperfect clocks and dynamic participation, has remained open. The gap is that Nakamoto’s protocol fundamentally depends on the blockchain itself to be a consistent timekeeper that should advance roughly on par with real time. In order to tackle this question we introduce a new analytical tool that we call hot-hand executions , which capture the regular occurrence of high concentration of honestly generated blocks, and correspondingly put forth and prove a new blockchain property called concentrated chain quality , which may be of independent interest. Utilizing these tools and techniques we demonstrate that Nakamoto’s protocol achieves, under suitable conditions, safety, liveness as well as (consistent) timekeeping. <|reference_end|>",
"<|reference_start|> Ebb-and-Flow Protocols: A Resolution of the Availability-Finality Dilemma: The CAP theorem says that no blockchain can be live under dynamic participation and safe under temporary network partitions. To resolve this availability-finality dilemma, we formulate a new class of flexible consensus protocols, ebb-and-flow protocols, which support a full dynamically available ledger in conjunction with a finalized prefix ledger. The finalized ledger falls behind the full ledger when the network partitions but catches up when the network heals. Gasper, the current candidate protocol for Ethereum 2.0's beacon chain, combines the finality gadget Casper FFG with the LMD GHOST fork choice rule and aims to achieve this property. However, we discovered an attack in the standard synchronous network model, highlighting a general difficulty with existing finality-gadget-based designs. We present a construction of provably secure ebb-and-flow protocols with optimal resilience. Nodes run an off-the-shelf dynamically available protocol, take snapshots of the growing available ledger, and input them into a separate off-the-shelf BFT protocol to finalize a prefix. We explore connections with flexible BFT and improve upon the state-of-the-art for that problem. <|reference_end|>",
"<|reference_start|> The Longest-Chain Protocol Under Random Delays: In the field of distributed consensus and blockchains, the synchronous communication model assumes that all messages between honest parties are delayed at most by a known constant $\\Delta$. Recent literature establishes that the longest-chain blockchain protocol is secure under the synchronous model. However, for a fixed mining rate, the security guarantees degrade with $\\Delta$. We analyze the performance of the longest-chain protocol under the assumption that the communication delays are random, independent, and identically distributed. This communication model allows for distributions with unbounded support and is a strict generalization of the synchronous model. We provide safety and liveness guarantees with simple, explicit bounds on the failure probabilities. These bounds hold for infinite-horizon executions and decay exponentially with the security parameter. In particular, we show that the longest-chain protocol has good security guarantees when delays are sporadically large and possibly unbounded, which is reflective of real-world network conditions. <|reference_end|>"
] | [
2,
8,
15,
20
] | {"<|multi_cite_1_1|>": "ss-1058301", "<|multi_cite_1_2|>": "ss-679567", "<|multi_cite_2_1|>": "ss-924635", "<|multi_cite_2_2|>": "ss-717035", "<|multi_cite_3_1|>": "ss-1193241", "<|multi_cite_3_2|>": "ss-1669575", "<|multi_cite_4_1|>": "ss-1973161", "<|multi_cite_4_2|>": "ss-1669575", "<|multi_cite_4_3|>": "ss-1682172", "<|multi_cite_5_1|>": "ss-1167424", "<|multi_cite_5_2|>": "ss-978660", "<|multi_cite_5_3|>": "ss-1770328", "<|cite_6|>": "ss-846312", "<|multi_cite_7_2|>": "ss-924635", "<|cite_8|>": "ss-1167418", "<|cite_9|>": "arxiv-289202", "<|cite_10|>": "ss-1167424", "<|cite_11|>": "ss-1167424", "<|cite_12|>": "arxiv-318402", "<|cite_13|>": "arxiv-318402", "<|cite_14|>": "arxiv-318402", "<|cite_15|>": "arxiv-318402"} |
1809.06256 | <|paper_start|> Title: Sensor Transfer: Learning Optimal Sensor Effect Image Augmentation for Sim-to-Real Domain Adaptation
Abstract: Sensor Transfer: Learning Optimal Sensor Effect Image Augmentation for Sim-to-Real Domain Adaptation: Performance on benchmark datasets has drastically improved with advances in deep learning. Still, cross-dataset generalization performance remains relatively low due to the domain shift that can occur between two different datasets. This domain shift is especially exaggerated between synthetic and real datasets. Significant research has been done to reduce this gap, specifically via modeling variation in the spatial layout of a scene, such as occlusions, and scene environmental factors, such as time of day and weather effects. However, few works have addressed modeling the variation in the sensor domain as a means of reducing the synthetic to real domain gap. The camera or sensor used to capture a dataset introduces artifacts into the image data that are unique to the sensor model, suggesting that sensor effects may also contribute to domain shift. To address this, we propose a learned augmentation network composed of physically-based augmentation functions. Our proposed augmentation pipeline transfers specific effects of the sensor model -- chromatic aberration, blur, exposure, noise, and color temperature -- from a real dataset to a synthetic dataset. We provide experiments that demonstrate that augmenting synthetic training datasets with the proposed learned augmentation framework reduces the domain gap between synthetic and real domains for object detection in urban driving scenes.
Introduction
Synthetic datasets are designed to contain numerous spatial and environmental features that are found in the real domain: images captured during different times of day, in various weather conditions, and in structured urban environments.
However, in spite of these shared features and high levels of photorealism, images from synthetic datasets are noticeably stylistically distinct from real images. Figure~\ref{fig:st_intro_fig} shows a side-by-side comparison of two widely-used real benchmark vehicle datasets, KITTI <|cite_start|> (Reference: Are We Ready for Autonomous Driving? The {KITTI} Vision Benchmark Suite: Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.) <|cite_end|> <|cite_start|> (Reference: A New Performance Measure and Evaluation Benchmark for Road Detection Algorithms: Detecting the road area and ego-lane ahead of a vehicle is central to modern driver assistance systems. While lane-detection on well-marked roads is already available in modern vehicles, finding the boundaries of unmarked or weakly marked roads and lanes as they appear in inner-city and rural environments remains an unsolved problem due to the high variability in scene layout and illumination conditions, amongst others. While recent years have witnessed great interest in this subject, to date no commonly agreed upon benchmark exists, rendering a fair comparison amongst methods difficult. In this paper, we introduce a novel open-access dataset and benchmark for road area and ego-lane detection. Our dataset comprises 600 annotated training and test images of high variability from the KITTI autonomous driving project, capturing a broad spectrum of urban road scenes. For evaluation, we propose to use the 2D Bird's Eye View (BEV) space as vehicle control usually happens in this 2D world, requiring detection results to be represented in this very same space. Furthermore, we propose a novel, behavior-based metric which judges the utility of the extracted ego-lane area for driver assistance applications by fitting a driving corridor to the road detection results in the BEV. We believe this to be important for a meaningful evaluation as pixel-level performance is of limited value for vehicle control. State-of-the-art road detection algorithms are used to demonstrate results using classical pixel-level metrics in perspective and BEV space as well as the novel behavior-based performance measure. All data and annotations are made publicly available on the KITTI online evaluation website in order to serve as a common benchmark for road terrain detection algorithms.)
<|cite_end|>, Cityscapes <|cite_start|> (Reference: The Cityscapes Dataset for Semantic Urban Scene Understanding: Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.) <|cite_end|>, and a state-of-the-art synthetic dataset, GTA \textit{Sim10k} <|cite_start|> (Reference: Playing for Data: Ground Truth from Computer Games: Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set.) <|cite_end|> <|cite_start|> (Reference: Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?: Deep learning has rapidly transformed the state of the art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human annotated training data. This time consuming process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. 
We demonstrate that a state of the art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. The source code and data to train and validate the networks described in this paper are made available for researchers.) <|cite_end|>. These differences can be quantified; a performance drop is observed when deep neural networks (DNNs) are trained on one of the synthetic or real domains and tested on the other <|cite_start|> (Reference: Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?: Deep learning has rapidly transformed the state of the art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human annotated training data. This time consuming process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. We demonstrate that a state of the art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. The source code and data to train and validate the networks described in this paper are made available for researchers.) <|cite_end|>. This suggests that real and synthetic datasets differ in their global pixel statistics.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/ST_intro_fig_final.png}
\end{center}
\caption{A comparison of images sampled from the real domain, KITTI Benchmark dataset (shown in the left hand column), images taken from the Cityscapes dataset (shown in the center column), and images from GTA \textit{Sim10k} dataset (shown in the right hand column). Note that each dataset has a distinct visual style, specifically differing color cast, brightness, and blur.}
\label{fig:st_intro_fig}
\end{figure}
Domain adaptation methods attempt to minimize such dissimilarities between synthetic and real datasets that result from an uneven representation of visual information in one domain compared to the other.
Recent domain adaptation research has focused on learning salient visual features from real data -- specifically scene lighting, scene background, weather, and occlusions -- using generative adversarial frameworks in an effort to better model the representation of these visual elements in synthetic training sets <|cite_start|> (Reference: Image De-raining Using a Conditional Generative Adversarial Network: Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.) <|cite_end|> <|cite_start|> (Reference: Adversarially Tuned Scene Generation: Generalization performance of trained computer vision systems that use computer graphics (CG) generated data is not yet effective due to the concept of 'domain-shift' between virtual and real data. Although simulated data augmented with a few real world samples has been shown to mitigate domain shift and improve transferability of trained models, guiding or bootstrapping the virtual data generation with the distributions learnt from target real world domain is desired, especially in the fields where annotating even few real images is laborious (such as semantic labeling, and intrinsic images etc.). In order to address this problem in an unsupervised manner, our work combines recent advances in CG (which aims to generate stochastic scene layouts coupled with large collections of 3D object models) and generative adversarial training (which aims train generative models by measuring discrepancy between generated and real data in terms of their separability in the space of a deep discriminatively-trained classifier). Our method uses iterative estimation of the posterior density of prior distributions for a generative graphical model. This is done within a rejection sampling framework. 
Initially, we assume uniform distributions as priors on the parameters of a scene described by a generative graphical model. As iterations proceed the prior distributions get updated to distributions that are closer to the (unknown) distributions of target data. We demonstrate the utility of adversarially tuned scene generation on two real-world benchmark datasets (CityScapes and CamVid) for traffic scene semantic labeling with a deep convolutional net (DeepLab). We realized performance improvements by 2.28 and 3.14 points (using the IoU metric) between the DeepLab models trained on simulated sets prepared from the scene generation models before and after tuning to CityScapes and CamVid respectively.) <|cite_end|> <|cite_start|> (Reference: Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding: This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset comprising $3808$ real foggy images, with pixel-level semantic annotations for $16$ images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly by leveraging unlabeled real foggy data. The datasets and code are publicly available.) <|cite_end|>.
However, little work has focused on modelling realistic, physically-based augmentations of synthetic data.
Carlson et al. <|cite_start|> (Reference: Modeling camera effects to improve deep vision for real and synthetic data: Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physically-based augmentation pipeline to vary sensor effects -- specifically, chromatic aberration, blur, exposure, noise, and color cast -- across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.) <|cite_end|> demonstrate that randomizing across the sensor domain significantly improves performance over standard augmentation techniques.
The information loss that results from the interaction between the camera model and lighting in the environment is not generally modelled in rendering engines, despite the fact that it can greatly influence the pixel-level artifacts, distortions, and dynamic range, and thus the global visual style induced in each image <|cite_start|> (Reference: Modeling the space of camera response functions: Many vision applications require precise measurement of scene radiance. The function relating scene radiance to image intensity of an imaging system is called the camera response. We analyze the properties that all camera responses share. This allows us to find the constraints that any response function must satisfy. These constraints determine the theoretical space of all possible camera responses. We have collected a diverse database of real-world camera response functions (DoRF). Using this database, we show that real-world responses occupy a small part of the theoretical space of all possible responses. We combine the constraints from our theoretical space with the data from DoRF to create a low-parameter empirical model of response (EMoR). This response model allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart. We also show that the model can be used to accurately estimate the camera response from images of an arbitrary scene taken using different exposures. The DoRF database and the EMoR model can be downloaded at http://www.cs.columbia.edu/CAVE.) <|cite_end|> <|cite_start|> (Reference: {Learning to estimate and remove non-uniform image blur: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multi-label energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa's method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti~et al.) <|cite_end|> <|cite_start|> (Reference: Practical Poissonian-Gaussian Noise Modeling and Fitting for Single-image Raw-data: We present a simple and usable noise model for the raw-data of digital imaging sensors. This signal-dependent noise model, which gives the pointwise standard-deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and Gaussian part, for the remaining stationary disturbances in the output data. We further explicitly take into account the clipping of the data (over- and under-exposure), faithfully reproducing the nonlinear response of the sensor. We propose an algorithm for the fully automatic estimation of the model parameters given a single noisy image. 
Experiments with synthetic images and with real raw-data from various sensors prove the practical applicability of the method and the accuracy of the proposed model.) <|cite_end|> <|cite_start|> (Reference: On sensor bias in experimental methods for comparing interest-point, saliency, and recognition algorithms: Most current algorithm evaluation protocols use large image databases, but give little consideration to imaging characteristics used to create the data sets. This paper evaluates the effects of camera shutter speed and voltage gain under simultaneous changes in illumination and demonstrates significant differences in the sensitivities of popular vision algorithms under variable illumination, shutter speed, and gain. These results show that offline data sets used to evaluate vision algorithms typically suffer from a significant sensor specific bias which can make many of the experimental methodologies used to evaluate vision algorithms unable to provide results that generalize in less controlled environments. We show that for typical indoor scenes, the different saturation levels of the color filters are easily reached, leading to the occurrence of localized saturation which is not exclusively based on the scene radiance but on the spectral density of individual colors present in the scene. Even under constant illumination, foreshortening effects due to surface orientation can affect feature detection and saliency. Finally, we demonstrate that active and purposive control of the shutter speed and gain can lead to significantly more reliable feature detection under varying illumination and nonconstant viewpoints.) <|cite_end|> <|cite_start|> (Reference: SUN RGB-D: A rgb-d scene understanding benchmark suite: Although RGB-D sensors have enabled major break-throughs for several vision tasks, such as 3D reconstruction, we have not attained the same level of success in high-level scene understanding. Perhaps one of the main reasons is the lack of a large-scale benchmark with 3D annotations and 3D evaluation metrics. In this paper, we introduce an RGB-D benchmark suite for the goal of advancing the state-of-the-arts in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,335 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 64,595 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category for each image. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias.) <|cite_end|> <|cite_start|> (Reference: Understanding How Image Quality Affects Deep Neural Networks: Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. 
We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Visual Representation Learning by Context Prediction: This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.) <|cite_end|>.
In this study, we build upon <|cite_start|> (Reference: Modeling camera effects to improve deep vision for real and synthetic data: Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physically-based augmentation pipeline to vary sensor effects -- specifically, chromatic aberration, blur, exposure, noise, and color cast -- across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.) <|cite_end|> to work towards closing the gap between real and synthetic data domains.
We propose a novel learning framework that performs \textit{sensor transfer} on synthetic data. That is, the network learns to transfer the real sensor effect domain -- blur, exposure, noise, color cast, and chromatic aberration -- to synthetic images via a generative augmentation network.
We demonstrate that augmenting relatively small labeled datasets using \textit{sensor transfer} generates more robust and generalizable training datasets that improve the performance of DNNs for object detection and semantic segmentation tasks in urban driving scenes for both real and synthetic visual domains.
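To make the sensor effect domain concrete, the sketch below applies fixed, hand-tuned versions of the five effects to a single RGB image. It is only an illustrative approximation: the function name, parameter values, and the choice of numpy/scipy operations are placeholders, not part of this work, whereas the proposed framework learns these effects from real data via a generative augmentation network.
\begin{verbatim}
# Illustrative, hand-tuned sensor-effect augmentation (not the learned model).
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_sensor_effects(img, blur_sigma=1.0, exposure=1.2,
                         noise_std=0.02, color_cast=(1.05, 1.0, 0.95),
                         ca_shift=1):
    """img: float32 RGB image in [0, 1] with shape (H, W, 3)."""
    out = img.astype(np.float32).copy()
    # Chromatic aberration: shift red and blue channels in opposite directions.
    out[..., 0] = np.roll(out[..., 0], ca_shift, axis=1)
    out[..., 2] = np.roll(out[..., 2], -ca_shift, axis=1)
    # Optical blur, applied per channel.
    for c in range(3):
        out[..., c] = gaussian_filter(out[..., c], sigma=blur_sigma)
    # Exposure change and per-channel color cast.
    out = out * exposure * np.asarray(color_cast, dtype=np.float32)
    # Additive sensor noise.
    out = out + np.random.normal(0.0, noise_std, size=out.shape)
    return np.clip(out, 0.0, 1.0)
\end{verbatim}
In the proposed framework, by contrast, the effect parameters are not fixed by hand but are learned from the real sensor domain.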
This paper is organized as follows: Section~\ref{sec:background} presents related background work; Section~\ref{sec:Methods} details the proposed \textit{sensor transfer} learning framework; Section~\ref{sec:Experiments} describes the experiments and discusses their results; and Section~\ref{sec:concl} concludes the paper. Code will be made publicly available. <|paper_end|> | [
"<|reference_start|> A New Performance Measure and Evaluation Benchmark for Road Detection Algorithms: Detecting the road area and ego-lane ahead of a vehicle is central to modern driver assistance systems. While lane-detection on well-marked roads is already available in modern vehicles, finding the boundaries of unmarked or weakly marked roads and lanes as they appear in inner-city and rural environments remains an unsolved problem due to the high variability in scene layout and illumination conditions, amongst others. While recent years have witnessed great interest in this subject, to date no commonly agreed upon benchmark exists, rendering a fair comparison amongst methods difficult. In this paper, we introduce a novel open-access dataset and benchmark for road area and ego-lane detection. Our dataset comprises 600 annotated training and test images of high variability from the KITTI autonomous driving project, capturing a broad spectrum of urban road scenes. For evaluation, we propose to use the 2D Bird's Eye View (BEV) space as vehicle control usually happens in this 2D world, requiring detection results to be represented in this very same space. Furthermore, we propose a novel, behavior-based metric which judges the utility of the extracted ego-lane area for driver assistance applications by fitting a driving corridor to the road detection results in the BEV. We believe this to be important for a meaningful evaluation as pixel-level performance is of limited value for vehicle control. State-of-the-art road detection algorithms are used to demonstrate results using classical pixel-level metrics in perspective and BEV space as well as the novel behavior-based performance measure. All data and annotations are made publicly available on the KITTI online evaluation website in order to serve as a common benchmark for road terrain detection algorithms. <|reference_end|>",
"<|reference_start|> Modeling the space of camera response functions: Many vision applications require precise measurement of scene radiance. The function relating scene radiance to image intensity of an imaging system is called the camera response. We analyze the properties that all camera responses share. This allows us to find the constraints that any response function must satisfy. These constraints determine the theoretical space of all possible camera responses. We have collected a diverse database of real-world camera response functions (DoRF). Using this database, we show that real-world responses occupy a small part of the theoretical space of all possible responses. We combine the constraints from our theoretical space with the data from DoRF to create a low-parameter empirical model of response (EMoR). This response model allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart. We also show that the model can be used to accurately estimate the camera response from images of an arbitrary scene taken using different exposures. The DoRF database and the EMoR model can be downloaded at http://www.cs.columbia.edu/CAVE. <|reference_end|>",
"<|reference_start|> {Learning to estimate and remove non-uniform image blur: This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multi-label energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa's method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti~et al. <|reference_end|>",
"<|reference_start|> SUN RGB-D: A rgb-d scene understanding benchmark suite: Although RGB-D sensors have enabled major break-throughs for several vision tasks, such as 3D reconstruction, we have not attained the same level of success in high-level scene understanding. Perhaps one of the main reasons is the lack of a large-scale benchmark with 3D annotations and 3D evaluation metrics. In this paper, we introduce an RGB-D benchmark suite for the goal of advancing the state-of-the-arts in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,335 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 64,595 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category for each image. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias. <|reference_end|>"
] | [
1,
10,
11,
14
] | {"<|multi_cite_1_1|>": "ss-705871", "<|multi_cite_1_2|>": "ss-1258701", "<|cite_2|>": "arxiv-95397", "<|multi_cite_3_1|>": "arxiv-103531", "<|multi_cite_3_2|>": "arxiv-107376", "<|cite_4|>": "arxiv-107376", "<|multi_cite_5_1|>": "arxiv-114814", "<|multi_cite_5_2|>": "arxiv-113609", "<|multi_cite_5_3|>": "arxiv-168220", "<|cite_6|>": "ss-913745", "<|multi_cite_7_1|>": "ss-2311376", "<|multi_cite_7_2|>": "ss-2192045", "<|multi_cite_7_3|>": "ss-767909", "<|multi_cite_7_4|>": "ss-913744", "<|multi_cite_7_5|>": "ss-848459", "<|multi_cite_7_6|>": "arxiv-95921", "<|multi_cite_7_7|>": "arxiv-78001", "<|cite_8|>": "ss-913745"} |
2201.02698 | <|paper_start|> Title: Development of Automatic Tree Counting Software from UAV Based Aerial Images With Machine Learning
Abstract: Development of Automatic Tree Counting Software from UAV Based Aerial Images With Machine Learning: Unmanned aerial vehicles (UAV) are used successfully in many application areas such as military, security, monitoring, emergency aid, tourism, agriculture, and forestry. This study aims to automatically count trees in designated areas on the Siirt University campus from high-resolution images obtained by UAV. Images obtained at a height of 30 meters with 20% overlap were stitched offline at the ground station using Adobe Photoshop's photo merge tool. The resulting image was denoised and smoothed by applying 3x3 median and mean filters, respectively. After generating the orthophoto map of the aerial images captured by the UAV in certain regions, the bounding boxes of different objects on these maps were labeled in the HSV (Hue Saturation Value), RGB (Red Green Blue), and Gray modalities. Training, validation, and test datasets were generated and then evaluated for tree-detection classification success rates using various machine learning algorithms. In the last step, ground truth was established by obtaining the actual tree numbers, and the prediction performance was calculated by comparing the reference ground truth data with the predictions of the proposed model. A significant level of success was achieved for tree counting, with an average accuracy of 87% obtained using the MLP classifier in the predetermined regions.
Introduction
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{figures/figure_1.jpg}
\caption{Forest sizes on the basis of countries.}
\label{fig:world_forest}
\end{figure}
Since forest ecosystems retain more biodiversity than other ecosystems, they are a very important component for the continuity of life. Forests cover about 31 percent of the world's land area. As seen in Fig. \ref{fig:world_forest}, more than half of the world's forests are found in five countries: Russia, Brazil, Canada, the USA, and China <|cite_start|> (Reference: The State of the World’s Forests 2020: ) <|cite_end|>. Similarly, according to the data in Fig. \ref{fig:Turkey_forest}, approximately 27.6\% of the land in Turkey is classified as forest area. Forests are extremely important both for the national economy and for a clean and sustainable ecosystem. Creating a tree inventory of these forests, which constitute a national asset, is therefore essential for sustainability: it is of vital importance for developing afforestation policies and for preserving the existing forest cover. A tree inventory study makes it possible to extract information such as the number, characteristics, location, and health status of existing tree species. As reported by Yilmaz et al., information such as the boundaries of forest areas, the types and numbers of trees, and their heights and locations enables many applications, including city planning, 3D city modeling, forestry, and agricultural activities. Today, counting trees is a costly and error-prone process based on human observation and labor. Counting trees one by one is dangerous in some areas and carries the risk of mistakes. Alternatively, statistical methods can be used to generalize from the tree density measured in a small area to a larger geographical region; although convenient, such calculations must be performed carefully, since they may also increase the error rate. To overcome these problems, tree counting can be performed using image processing and machine learning. The basic requirement is software that calculates the number of trees by processing the images obtained from a UAV. With developing technology, the use of unmanned aerial vehicles (UAV) has become widespread. UAVs are used in many fields (military, emergency aid, agriculture, monitoring, security, etc.) within the framework of the licenses and authorizations obtained from the general directorate of civil aviation. This study was carried out using a DJI Advanced 4 Pro UAV and is based on the principle of scanning the region from the air at a certain height and analyzing the captured images with the necessary software at the ground station.
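As a minimal sketch of the preprocessing summarized above (OpenCV is assumed as the image-processing library, and the file names are placeholders rather than the actual data of this study), the stitched orthophoto is denoised with a 3x3 median filter, smoothed with a 3x3 mean filter, and converted into the HSV and Gray modalities used for labeling:
\begin{verbatim}
# Sketch of the denoising/smoothing step applied to the stitched orthophoto.
import cv2

mosaic = cv2.imread("stitched_orthophoto.jpg")      # output of the photo-merge step
denoised = cv2.medianBlur(mosaic, 3)                # 3x3 median filter (impulse noise)
smoothed = cv2.blur(denoised, (3, 3))               # 3x3 mean filter (smoothing)
hsv = cv2.cvtColor(smoothed, cv2.COLOR_BGR2HSV)     # HSV modality
gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)   # Gray modality
cv2.imwrite("orthophoto_preprocessed.jpg", smoothed)
\end{verbatim}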
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{figures/figure_2.jpg}
\caption{Turkey's land usage.}
\label{fig:Turkey_forest}
\end{figure}
There are differences between UAV photogrammetry and conventional photogrammetry. UAV photogrammetry is designed for efficiency and for using all available data, in the context of local solutions and local optimization, rather than for strictly accurate results. In traditional photogrammetry, on the other hand, operations are carried out within a framework of global consistency and model validity, with the aim of being correct, consistent, and compatible. For UAV photogrammetry to replace classical photogrammetry, developments are required in two areas. The first is the mathematical/statistical model used in UAV photogrammetry and its adaptation to the practices of traditional photogrammetry. The second is that the sensor, camera, and lens structure and the lens distortion information that form the basis of classical photogrammetry, together with the possibilities provided by physical conditions for mathematical model design, can also be used for UAV sensors <|cite_start|> (Reference: Evaluation of uav photogrammetric accuracy for mapping and earthworks computations: This study quantifies the accuracies achieved and tests the validity of an in-house developed Unmanned Aerial Vehicle (UAV) system employed in a stockpile volumetric survey. UAV photogrammetric results are compared with conventional GNSS survey results. To test the repeatability of the UAV system, multiple flights were flown over the same stockpile using different GNSS ground control, at different times and weather conditions. Positional accuracies of UAV photogrammetric results were found to be very similar to those from GNSS RTK survey, at the scale of photography flown. UAV stockpile volume results agreed with those from GNSS within 3 755 m3 (0.7%) on a 530 255 m3 pile. Stockpile volume comparisons between subsequent UAV surface models agreed within 877 m3 (0.2%) on the same pile. Geometric analysis of independent UAV photogrammetric models over the same area indicated that they could be considered the same at a 95% confidence level. We conclude that the UAV photogrammetric approach is, at the very lea...) <|cite_end|> <|cite_start|> (Reference: How accurate are UAV surveying methods ? White paper : How accurate are UAV surveying methods ?: ) <|cite_end|>. Although using aircraft is one of the best ways to obtain orthophoto maps, some technical problems are encountered in this process. It is desirable that images taken from airplanes or similar aircraft be as close to vertical with respect to the earth as possible. In practice, however, a perfectly vertical orientation is not achievable because of the tilt and rotation of the aircraft. This tilt is ideally less than 1°, and it rarely exceeds 3°. Aerial photographs therefore contain such undesirable tilts. Another problem with aerial photography is the so-called 'altitude shift' (relief displacement). This displacement arises from the height differences of artificial or natural objects in the field: it points outward from the image midpoint and increases with distance from the midpoint. Apart from these errors, aerial photographs can also be affected by problems caused by the film and the photographic plane (film shrinkage, focal length error, etc.), distortions in the camera lenses, distortions caused by atmospheric refraction, and, because of the shape of the earth,
problems such as distortions caused by its curvature may also occur. As a result, objects in the remotely sensed image may not be where they should be <|cite_start|> (Reference: Photogrammetric exploitation of ikonos imagery for mapping applications: The launch of IKONOS by Space Imaging opens a new era of high-resolution satellite imagery collection and mapping. The IKONOS satellite simultaneously acquires 1 m panchromatic and 4 m multi-spectral images in four bands that are suitable for high accuracy mapping applications. Space Imaging uses the rational function model (RFM), also known as rational polynomial camera model, instead of the physical IKONOS sensor model to communicate the imaging geometry. As revealed by recent studies from several researchers, the RFM retains the full capability of performing photogrammetric processing in absence of the physical sensor model. This paper presents some RFM-based processing methods and mapping applications developed for 3D feature extraction, orthorectification and RPC model refinement using IKONOS imagery. Comprehensive tests are performed to test the accuracy of 3D reconstruction and orthorectification and to validate the feasibility of the model refinement techniques.) <|cite_end|>. Moranduzzo and Melgani <|cite_start|> (Reference: Automatic car counting method for unmanned aerial vehicle images: This paper presents a solution to solve the car detection and counting problem in images acquired by means of unmanned aerial vehicles (UAVs). UAV images are characterized by a very high spatial resolution (order of few centimeters), and consequently by an extremely high level of details which calls for appropriate automatic analysis methods. The proposed method starts with a screening step of asphalted zones in order to restrict the areas where to detect cars and thus to reduce false alarms. Then, it performs a feature extraction process based on scalar invariant feature transform thanks to which a set of keypoints is identified in the considered image and opportunely described. Successively, it discriminates between keypoints assigned to cars and all the others, by means of a support vector machine classifier. The last step of our method is focused on the grouping of the keypoints belonging to the same car in order to get a “one keypoint-one car” relationship. Finally, the number of cars present in the scene is given by the number of final keypoints identified. The experimental results obtained on a real UAV scene characterized by a spatial resolution of 2 cm show that the proposed method exhibits a promising car counting accuracy.) <|cite_end|> presented a SVM (Support Vector Machine) based solution to solve the problem of automobile detection and counting in images obtained by unmanned aerial vehicles (UAVs). Bazi et al. <|cite_start|> (Reference: An automatic approach for palm tree counting in uav images: In this paper, we develop an automatic method for counting palm trees in UAV images. First we extract a set of keypoints using the Scale Invariant Feature Transform (SIFT). Then, we analyze these keypoints with an Extreme Learning Machine (ELM) classifier a priori trained on a set of palm and no-palm keypoints. As output, the ELM classifier will mark each detected palm tree by several keypoints. Then, in order to capture the shape of each tree, we propose to merge these keypoints with an active contour method based on level-sets (LS). 
Finally, we further analyze the texture of the regions obtained by LS with local binary patterns (LBPs) to distinguish palm trees from other vegetations. Experimental results obtained on a UAV image acquired over a palm farm are reported and discussed.) <|cite_end|> developed an automated method for counting palm trees in UAV images. First, a number of key points are extracted using the Scale Invariant Feature Transform (SIFT). These key points are then analyzed with an Extreme Learning Machine (ELM) classifier pre-trained on a set of palm and non-palm key points. As output, the ELM classifier marks each detected palm tree with a few key points. These key points are then combined with an active contour method based on level sets to capture the shape of each tree. Finally, the texture of the regions obtained by the level sets is analyzed with local binary patterns to distinguish palm trees from other plants. Mohan et al. <|cite_start|> (Reference: Individual tree detection from unmanned aerial vehicle (uav) derived canopy height model in an open canopy mixed conifer forest: Advances in Unmanned Aerial Vehicle (UAV) technology and data processing capabilities have made it feasible to obtain high-resolution imagery and three dimensional (3D) data which can be used for forest monitoring and assessing tree attributes. This study evaluates the applicability of low consumer grade cameras attached to UAVs and structure-from-motion (SfM) algorithm for automatic individual tree detection (ITD) using a local-maxima based algorithm on UAV-derived Canopy Height Models (CHMs). This study was conducted in a private forest at Cache Creek located east of Jackson city, Wyoming. Based on the UAV-imagery, we allocated 30 field plots of 20 m × 20 m. For each plot, the number of trees was counted manually using the UAV-derived orthomosaic for reference. A total of 367 reference trees were counted as part of this study and the algorithm detected 312 trees resulting in an accuracy higher than 85% (F-score of 0.86). Overall, the algorithm missed 55 trees (omission errors), and falsely detected 46 trees (commission errors) resulting in a total count of 358 trees. We further determined the impact of fixed tree window sizes (FWS) and fixed smoothing window sizes (SWS) on the ITD accuracy, and detected an inverse relationship between tree density and FWS. From our results, it can be concluded that ITD can be performed with an acceptable accuracy (F > 0.80) from UAV-derived CHMs in an open canopy forest, and has the potential to supplement future research directed towards estimation of above ground biomass and stem volume from UAV-imagery.) <|cite_end|> evaluated the applicability of consumer-grade cameras attached to a UAV and of the structure-from-motion algorithm for automatic individual tree detection, using a local maxima-based algorithm on the Canopy Height Models obtained from the UAV. The number of trees was manually counted using the UAV-derived orthomosaic as a reference for each plot. A total of 367 reference trees were counted as part of the study and the algorithm detected 312 trees, resulting in greater than 85\% accuracy (F-score 0.86). Overall, the algorithm missed 55 trees and incorrectly detected 46 trees, resulting in a total tree count of 358. Shafri et al. <|cite_start|> (Reference: Semi-automatic detection and counting of oil palm trees from high spatial resolution airborne imagery: Plantation inventory and management require a range of fine-scale remote-sensing data.
Remote-sensing images with high spatial and spectral resolution are an efficient source of such information. This article presents an approach to the extraction and counting of oil palm trees from high spatial resolution airborne imagery data. Counting oil palm trees is a crucial problem in specific agricultural areas, especially in Malaysia. The proposed scheme comprises six major parts: (1) discrimination of oil palms from non-oil palms using spectral analysis, (2) texture analysis, (3) edge enhancement, (4) segmentation process, (5) morphological analysis and (6) blob analysis. The average accuracy obtained was 95%, which indicates that high spatial resolution airborne imagery data with an appropriate assessment technique have the potential to provide us with vital information for oil palm plantation management. Information on the number of oil palm trees is crucial to the ability of plantation management to assess the value of the plantation and to monitor its production.) <|cite_end|>, utilizing high spatial resolution (1 m) airborne hyperspectral data, proposed a scheme that combines spectral and texture analysis, edge enhancement, segmentation, morphological analysis, and blob analysis to perform automatic tree counting.
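A toy sketch of this kind of threshold/morphology/blob pipeline is shown below; it is not the implementation of Shafri et al., and the vegetation threshold and minimum crown area are illustrative values that would have to be tuned to the sensor and the ground sampling distance.
\begin{verbatim}
# Simplified vegetation-threshold + morphology + blob-counting sketch.
import cv2
import numpy as np

img = cv2.imread("plantation_tile.jpg").astype(np.int16)
b, g, r = img[..., 0], img[..., 1], img[..., 2]
# Crude vegetation mask: green channel dominates red and blue.
veg = ((g > r + 10) & (g > b + 10)).astype(np.uint8) * 255
# Opening removes speckle; closing fills small gaps inside crowns.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(veg, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# Blob analysis: count connected components above a minimum crown area.
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
min_area = 200  # pixels
tree_count = sum(1 for i in range(1, num)
                 if stats[i, cv2.CC_STAT_AREA] >= min_area)
print("estimated tree count:", tree_count)
\end{verbatim}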
Santoso et al. <|cite_start|> (Reference: A simple method for detection and counting of oil palm trees using high-resolution multispectral satellite imagery: ABSTRACT In the past, oil palm density has been determined by manually counting trees every year in oil palm plantations. The measurement of density provides important data related to palm productivity, fertilizer needed, weed control costs in a circle around each tree, labourers needed, and needs for other activities. Manual counting requires many workers and has potential problems related to accuracy. Remote sensing provides a potential approach for counting oil palm trees. The main objective of this study is to build a robust and user-friendly method that will allow oil palm managers to count oil palm trees using a remote sensing technique. The oil palm trees analysed in this study have different ages and densities. QuickBird imagery was applied with the six pansharpening methods and was compared with panchromatic QuickBird imagery. The black and white imagery from a false colour composite of pansharpening imagery was processed in three ways: (1) oil palm tree detection, (2) delineation of the oil palm area using the red band, and (3) counting oil palm trees and accuracy assessment. For oil palm detection, we used several filters that contained a Sobel edge detector; texture analysis co-occurrence; and dilate, erode, high-pass, and opening filters. The results of this study improved upon the accuracy of several previous research studies that had an accuracy of about 90–95%. The results in this study show (1) modified intensity-hue-saturation (IHS) resolution merge is suitable for 16-year-old oil palm trees and have rather high density with 100% accuracy; (2) colour normalized (Brovey) is suitable for 21-year-old oil palm trees and have low density with 99.5% accuracy; (3) subtractive resolution merge is suitable for 15- and 18-year-old oil palm trees and have a rather high density with 99.8% accuracy; (4) PC spectral sharpening with 99.3% accuracy is suitable for 10-year-old oil palm trees and have low density; and (5) for all study object conditions, colour normalized (Brovey) and wavelet resolution merge are two pansharpening methods that are suitable for oil palm tree extraction and counting with 98.9% and 98.4% accuracy, respectively.) <|cite_end|> developed a simple and user-friendly approach for the detection and counting of palm trees. As a result of the study, the researchers obtained an accuracy rate between 90\% and 95\%. Srestasathiern and Rakwatin <|cite_start|> (Reference: Oil palm tree detection with high resolution multi-spectral satellite imagery: Oil palm tree is an important cash crop in Thailand. To maximize the productivity from planting, oil palm plantation managers need to know the number of oil palm trees in the plantation area. In order to obtain this information, an approach for palm tree detection using high resolution satellite images is proposed. This approach makes it possible to count the number of oil palm trees in a plantation. The process begins with the selection of the vegetation index having the highest discriminating power between oil palm trees and background. The index having highest discriminating power is then used as the primary feature for palm tree detection. We hypothesize that oil palm trees are located at the local peak within the oil palm area. To enhance the separability between oil palm tree crowns and background, the rank transformation is applied to the index image. 
The local peak on the enhanced index image is then detected by using the non-maximal suppression algorithm. Since both rank transformation and non-maximal suppression are window based, semi-variogram analysis is used to determine the appropriate window size. The performance of the proposed method was tested on high resolution satellite images. In general, our approach uses produced very accurate results, e.g., about 90 percent detection rate when compared with manual labeling.) <|cite_end|> proposed a local peak detection hypothesis for the detection of trees. In this approach, each peak is assumed to represent the highest point of a tree, and the most discriminative vegetation index is utilized. Using this approach, an accuracy rate of approximately 90\% was achieved. Shao et al. <|cite_start|> (Reference: A model for predicting aeolian sand drift and dust entrainment on scales from paddock to region: This paper describes a Wind Erosion Assessment Model (WEAM) for the estimation of sand drift and dust entrainment in agricultural areas. Both the sand drift and dust entrainment parts of the model are physically based, utilising a combination of established and recent theoretical and experimental results. Key components of the model include the Owen equation for the saltation flux; the observed and theoretically predicted proportionality between saltation flux and dust entrainment by saltation bombardment; theoretical and experimental results on the amelioration of wind erosion by nonerodible roughness; and new experimental results on the suppression of erosion by surface moisture. The size distribution of the particles on the soil surface (in their natural state) is used as a primary parameter. The model is restricted to a description of the mobilisation of sand and dust in erosion source areas, and specifically excludes treatment of 2 groups of related processes: dust transport away from source areas and its ultimate deposition; and evolution of surface properties, by the wind erosion process itself, by other weathering processes, or by management intervention. The results of the model are compared with data from a portable wind erosion tunnel, and with direct wind erosion measurements at paddock scale. By offering a synthesis of available physical knowledge of sand drift and dust entrainment, the model also indicates key areas of uncertainty.) <|cite_end|> carried out analyses in the context of total biomass. According to the authors, it is of great importance, for biological and commercial reasons, to detect the trees located within certain areas. The number of trees in a particular location is an important indicator of the productive capacity of the related species in that area. Site productive capacity refers to the total biomass (the amount of living organisms in a given growing area) produced by a stand at a given site over all stages of its development, when the stand fully utilizes the resources necessary to grow wood. Avdan et al. conducted various studies in the field of archaeology using data obtained from unmanned aerial vehicles; in their study, they produced orthophoto maps by varying parameters such as flight height and image overlap ratio. Muharrem and Murat created an orthophoto, a digital elevation model, and a digital terrain model by processing the digital images they obtained with a UAV in the Pix4D software.
While performing object-based classification, the images were separated into meaningful clusters by segmentation, and segments within the same threshold value were classified according to the determined index values. Gurbuz et al. performed tree detection from very high resolution RGB images obtained from a UAV. For this, a digital surface model (DSM) was first obtained from the images by using orthophoto generation and automatic matching techniques. From the created orthophoto, four test areas were selected according to tree density, and segmentation and classification of the trees were carried out with an object-based method in these test areas. In the next step, the peak points of the trees in the classified images were determined automatically. Finally, reference data were created by obtaining the actual positions (geometric center points) of the trees manually, and accuracy analyses were performed by comparing the peak points (local maxima) of the trees obtained by the automatic method with the reference data. According to the results obtained, success rates of 96\%, 82\%, 96\%, and 47\% were achieved in the first, second, third, and fourth test areas, respectively. <|paper_end|> | [
"<|reference_start|> Evaluation of uav photogrammetric accuracy for mapping and earthworks computations: This study quantifies the accuracies achieved and tests the validity of an in-house developed Unmanned Aerial Vehicle (UAV) system employed in a stockpile volumetric survey. UAV photogrammetric results are compared with conventional GNSS survey results. To test the repeatability of the UAV system, multiple flights were flown over the same stockpile using different GNSS ground control, at different times and weather conditions. Positional accuracies of UAV photogrammetric results were found to be very similar to those from GNSS RTK survey, at the scale of photography flown. UAV stockpile volume results agreed with those from GNSS within 3 755 m3 (0.7%) on a 530 255 m3 pile. Stockpile volume comparisons between subsequent UAV surface models agreed within 877 m3 (0.2%) on the same pile. Geometric analysis of independent UAV photogrammetric models over the same area indicated that they could be considered the same at a 95% confidence level. We conclude that the UAV photogrammetric approach is, at the very lea... <|reference_end|>",
"<|reference_start|> Photogrammetric exploitation of ikonos imagery for mapping applications: The launch of IKONOS by Space Imaging opens a new era of high-resolution satellite imagery collection and mapping. The IKONOS satellite simultaneously acquires 1 m panchromatic and 4 m multi-spectral images in four bands that are suitable for high accuracy mapping applications. Space Imaging uses the rational function model (RFM), also known as rational polynomial camera model, instead of the physical IKONOS sensor model to communicate the imaging geometry. As revealed by recent studies from several researchers, the RFM retains the full capability of performing photogrammetric processing in absence of the physical sensor model. This paper presents some RFM-based processing methods and mapping applications developed for 3D feature extraction, orthorectification and RPC model refinement using IKONOS imagery. Comprehensive tests are performed to test the accuracy of 3D reconstruction and orthorectification and to validate the feasibility of the model refinement techniques. <|reference_end|>",
"<|reference_start|> Automatic car counting method for unmanned aerial vehicle images: This paper presents a solution to solve the car detection and counting problem in images acquired by means of unmanned aerial vehicles (UAVs). UAV images are characterized by a very high spatial resolution (order of few centimeters), and consequently by an extremely high level of details which calls for appropriate automatic analysis methods. The proposed method starts with a screening step of asphalted zones in order to restrict the areas where to detect cars and thus to reduce false alarms. Then, it performs a feature extraction process based on scalar invariant feature transform thanks to which a set of keypoints is identified in the considered image and opportunely described. Successively, it discriminates between keypoints assigned to cars and all the others, by means of a support vector machine classifier. The last step of our method is focused on the grouping of the keypoints belonging to the same car in order to get a “one keypoint-one car” relationship. Finally, the number of cars present in the scene is given by the number of final keypoints identified. The experimental results obtained on a real UAV scene characterized by a spatial resolution of 2 cm show that the proposed method exhibits a promising car counting accuracy. <|reference_end|>",
"<|reference_start|> Semi-automatic detection and counting of oil palm trees from high spatial resolution airborne imagery: Plantation inventory and management require a range of fine-scale remote-sensing data. Remote-sensing images with high spatial and spectral resolution are an efficient source of such information. This article presents an approach to the extraction and counting of oil palm trees from high spatial resolution airborne imagery data. Counting oil palm trees is a crucial problem in specific agricultural areas, especially in Malaysia. The proposed scheme comprises six major parts: (1) discrimination of oil palms from non-oil palms using spectral analysis, (2) texture analysis, (3) edge enhancement, (4) segmentation process, (5) morphological analysis and (6) blob analysis. The average accuracy obtained was 95%, which indicates that high spatial resolution airborne imagery data with an appropriate assessment technique have the potential to provide us with vital information for oil palm plantation management. Information on the number of oil palm trees is crucial to the ability of plantation management to assess the value of the plantation and to monitor its production. <|reference_end|>"
] | [
1,
3,
4,
7
] | {"<|cite_1|>": "ss-1751709", "<|multi_cite_5_2|>": "ss-2416006", "<|multi_cite_5_3|>": "ss-2416007", "<|multi_cite_6_2|>": "ss-2416008", "<|cite_7|>": "ss-766317", "<|cite_8|>": "ss-2416009", "<|cite_9|>": "ss-2416010", "<|cite_10|>": "ss-2416011", "<|cite_11|>": "ss-2416012", "<|cite_12|>": "ss-2416013", "<|cite_13|>": "ss-2416014"} |
2307.12098 | <|paper_start|> Title: Even shorter proofs without new variables
Abstract: Even shorter proofs without new variables: Proof formats for SAT solvers have diversified over the last decade, enabling new features such as extended resolution-like capabilities, very general extension-free rules, inclusion of proof hints, and pseudo-boolean reasoning. Interference-based methods have been proven effective, and some theoretical work has been undertaken to better explain their limits and semantics. In this work, we combine the subsumption redundancy notion from (Buss, Thapen 2019) and the overwrite logic framework from (Rebola-Pardo, Suda 2018). Natural generalizations then become apparent, enabling even shorter proofs of the pigeonhole principle (compared to those from (Heule, Kiesl, Biere 2017)) and smaller unsatisfiable core generation.
Introduction
\label{sec:intro}
The impressive recent improvements in SAT solving have come coupled with the need to ascertain their results.
While satisfiability results are straightforward to check, unsatisfiability results require massive
proofs, sometimes petabytes in size <|cite_start|> (Reference: Solving and Verifying the boolean Pythagorean Triples problem via Cube-and-Conquer: The boolean Pythagorean Triples problem has been a longstanding open problem in Ramsey Theory: Can the set N = $\{1, 2, ...\}$ of natural numbers be divided into two parts, such that no part contains a triple $(a,b,c)$ with $a^2 + b^2 = c^2$ ? A prize for the solution was offered by Ronald Graham over two decades ago. We solve this problem, proving in fact the impossibility, by using the Cube-and-Conquer paradigm, a hybrid SAT method for hard problems, employing both look-ahead and CDCL solvers. An important role is played by dedicated look-ahead heuristics, which indeed allowed to solve the problem on a cluster with 800 cores in about 2 days. Due to the general interest in this mathematical problem, our result requires a formal proof. Exploiting recent progress in unsatisfiability proofs of SAT solvers, we produced and verified a proof in the DRAT format, which is almost 200 terabytes in size. From this we extracted and made available a compressed certificate of 68 gigabytes, that allows anyone to reconstruct the DRAT proof for checking.) <|cite_end|> <|cite_start|> (Reference: Schur Number Five: We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number $n$ such that there exists a five-coloring of the positive numbers up to $n$ without a monochromatic solution of the equation $a + b = c$? We obtained the solution, $n = 160$, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques on the resulting formula. We constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems.) <|cite_end|>.
The search for proof systems that enable both easy proof generation and smaller proofs has yielded many
achievements <|cite_start|> (Reference: Verification of proofs of unsatisfiability for CNF formulas: As SAT-algorithms become more and more complex, there is little chance of writing a SAT-solver that is free of bugs. So it is of great importance to be able to verify, the information returned by a SAT-solver. If the CNF formula to be tested is satisfiable, solution verification is trivial and can be easily done by the user. However, in the case of unsatisfiability, the user has to rely on the reputation of the SAT-solver. We describe an efficient procedure for checking the correctness of unsatisfiability proofs. As a by-product, the proposed procedure finds an unsatisfiable core of the initial CNF formula. The efficiency of the proposed procedure was tested on a representative set of large "real-life" CNF formulas from the formal verification domain.) <|cite_end|> <|cite_start|> (Reference: Short Proofs Without New Variables: ) <|cite_end|> <|cite_start|> (Reference: Complete and Efficient DRAT Proof Checking: DRAT proofs have become the standard for verifying unsatisfiability proofs emitted by modern SAT solvers. However, recent work showed that the specification of the format differs from its implementation in existing tools due to optimizations necessary for efficiency. Although such differences do not compromise soundness of DRAT checkers, the sets of correct proofs according to the specification and to the implementation are incomparable. We discuss how it is possible to design DRAT checkers faithful to the specification by carefully modifying the standard optimization techniques. We implemented such modifications in a configurable DRAT checker. Our experimental results show negligible overhead due to these modifications, suggesting that efficient verification of the DRAT specification is possible. Furthermore, we show that the differences between specification and implementation of DRAT often arise in practice.) <|cite_end|> <|cite_start|> (Reference: Frying the egg, roasting the chicken: unit deletions in DRAT proofs: The clausal proof format DRAT is the standard de facto to certify SAT solvers' unsatisfiability results. DRAT proofs act as logs of clause inferences and clause deletions in the solver. The non-monotonic nature of the proof system makes deletions relevant. State-of-the-art proof checkers ignore deletions of unit clauses, differing from the standard in meaningful ways that require adaptions when proofs are generated or used for purposes other than checking. On the other hand, dealing with unit deletions in the proof checker breaks many of the usual invariants used for efficiency reasons. Furthermore, many SAT solvers introduce spurious unit deletions in proofs. These deletions are never intended to be applied in the checker but are nevertheless introduced, making many proofs generated by state-of-the-art solvers incorrect. We present the first competitive DRAT checker that honors unit deletions, as well as fixes for the spurious deletion issue in proof generation. Our experimental results confirm that unit deletions can be applied with similar average performance to state-of-the-art checkers. We also confirm that a large fraction of the proofs generated during the last SAT solving competition do not respect the DRAT standard. This result was confirmed with proof incorrectness certificates that were independently validated. We find that our proof incorrectness certificates can be of help when debugging SAT solvers and DRAT checkers.) 
<|cite_end|> <|cite_start|> (Reference: Certifying Parity Reasoning Efficiently Using Pseudo-Boolean Proofs: The dramatic improvements in combinatorial optimization algorithms over the last decades have had a major impact in artificial intelligence, operations research, and beyond, but the output of current state-of-the-art solvers is often hard to verify and is sometimes wrong. For Boolean satisfiability (SAT) solvers proof logging has been introduced as a way to certify correctness, but the methods used seem hard to generalize to stronger paradigms. What is more, even for enhanced SAT techniques such as parity (XOR) reasoning, cardinality detection, and symmetry handling, it has remained beyond reach to design practically efficient proofs in the standard DRAT format. In this work, we show how to instead use pseudo-Boolean inequalities with extension variables to concisely justify XOR reasoning. Our experimental evaluation of a SAT solver integration shows a dramatic decrease in proof logging and verification time compared to existing DRAT methods. Since our method is a strict generalization of DRAT, and readily lends itself to expressing also 0-1 programming and even constraint programming problems, we hope this work points the way towards a unified approach for efficient machine-verifiable proofs for a rich class of combinatorial optimization paradigms.) <|cite_end|> <|cite_start|> (Reference: A Flexible Proof Format for SAT Solver-Elaborator Communication: We introduce FRAT, a new proof format for unsatisfiable SAT problems, and its associated toolchain. Compared to DRAT, the FRAT format allows solvers to include more information in proofs to reduce the computational cost of subsequent elaboration to LRAT. The format is easy to parse forward and backward, and it is extensible to future proof methods. The provision of optional proof steps allows SAT solver developers to balance implementation effort against elaboration time, with little to no overhead on solver time. We benchmark our FRAT toolchain against a comparable DRAT toolchain and confirm >84% median reduction in elaboration time and >94% median decrease in peak memory usage.) <|cite_end|>.
Modern proof systems rely on redundancy properties that exhibit a phenomenon
known as \emph{interference} <|cite_start|> (Reference: Inprocessing Rules: ) <|cite_end|> <|cite_start|> (Reference: The Potential of Interference-Based Proof Systems: We want to encourage researchers to investigate the potential of proof systems that modify a given set of formulas (e.g., a set of clauses in propositional logic) in a way that preserves satisfiability but not necessarily logical equivalence. We call such modifications interferences, because they can change the models of a given set of formulas. Interferences differ from classical inferences, which do not affect the models of a set of formulas, because they only allow the derivation of formulas (conclusions) that are implied by the original formulas (premises). Moreover, while inferences reason about the presence of formulas (the premises), interferences can be seen as reasoning about their absence. Most traditional proof systems such as Frege systems, sequent calculi, or resolution-based systems use conventional inference rules. Popular examples of these rules are the modus ponens (left) and the propositional resolution rule (right):) <|cite_end|> <|cite_start|> (Reference: A Theory of Satisfiability-Preserving Proofs in SAT Solving: We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.) <|cite_end|>.
Whereas traditional proof systems derive clauses that are implied by the premises,
interference-based proof systems merely require introduced clauses to be consistent with them.
Interference proofs preserve the existence of a model throughout the proof, rather than models themselves.
A somewhat counterintuitive semantics thus arises: introducing a clause
in an interference-based proof system does not only depend on the presence of some clauses,
but also on the absence of some other clauses <|cite_start|> (Reference: Towards a Semantics of Unsatisfiability Proofs with Inprocessing: Delete Resolution Asymmetric Tautology (DRAT) proofs have become a de facto standard to certify unsatisfiability results from SAT solvers with inprocessing. However, DRAT shows behaviors notably different from other proof systems: DRAT inferences are nonmonotonic, and clauses that are not consequences of the premises can be derived. In this paper, we clarify some discrepancies on the notions of reverse unit propagation (RUP) clauses and asymmetric tautologies (AT), and furthermore develop the concept of resolution consequences. This allows us to present an intuitive explanation of RAT in terms of permissive definitions. We prove that a formula derived using RATs can be stratified into clause sets depending on which definitions they require, which give a strong invariant along RAT proofs. We furthermore study its interaction with clause deletion, characterizing DRAT derivability as satisfiability-preservation.) <|cite_end|> <|cite_start|> (Reference: A Theory of Satisfiability-Preserving Proofs in SAT Solving: We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.) <|cite_end|>.
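A small illustrative example (constructed here for exposition, not taken from the cited works) makes this concrete. For $F = \{x \lor y\}$, the clause $\lnot x$ is not implied by $F$, yet introducing it preserves satisfiability, since $\{x \lor y,\, \lnot x\}$ is satisfied by setting $y$ to true; an interference rule may therefore add it. For $F' = \{x \lor y,\, x\}$, however, adding $\lnot x$ yields an unsatisfiable formula, so the same introduction is no longer admissible. Whether $\lnot x$ may be introduced thus depends not only on the clauses present in the formula, but also on the absence of the unit clause $x$.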
The most general interference-based proof system in the literature is known as DSR.
While its predecessor DPR had success in generating short proofs of the pigeonhole formula
without introducing new variables <|cite_start|> (Reference: Short Proofs Without New Variables: ) <|cite_end|>, DSR did not seem to succeed in improving this result,
despite being intuitively well-suited for it.
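For concreteness, we recall the standard CNF encoding of the pigeonhole formula (the encoding below is the usual one and is assumed here only for illustration). With variables $x_{i,j}$ stating that pigeon $i$ sits in hole $j$,
\[
  \mathrm{PHP}^{n}_{n+1} \;=\; \bigwedge_{i=1}^{n+1} \bigl(x_{i,1} \lor \dots \lor x_{i,n}\bigr) \;\wedge\; \bigwedge_{1 \le i < k \le n+1}\;\bigwedge_{j=1}^{n} \bigl(\lnot x_{i,j} \lor \lnot x_{k,j}\bigr),
\]
which is unsatisfiable but admits only exponential-size resolution refutations.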
In this work, we analyze the semantics of DSR proofs extending previous work on DPR proofs <|cite_start|> (Reference: A Theory of Satisfiability-Preserving Proofs in SAT Solving: We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.) <|cite_end|>.
We find similar results to that article; in particular, satisfiability-preserving DSR proofs
can be reinterpreted as more traditional, DAG-shaped, model-preserving proofs over
an extension of propositional logic with a \emph{mutation} operator.
Crucially, these DAG-shaped proofs remove the whole-formula dependence interference is characterized by,
enabling an easier analysis of the necessary conditions for
satisfiability-preservation.
This analysis hints at a generalization we call
\emph{weak substitution redundancy} (WSR \textipa{[\textprimstress w\textsci z\textschwa\textrhoticity]}),
which allows proofs that are shorter, more understandable, easier to generate, and faster to check.
We demonstrate this by giving an even shorter proof of the pigeonhole formula.
We also provide a couple of examples where smaller unsatisfiable cores can be generated during proof checking,
and fewer lemmas are required during proof generation.
\paragraph*{Interference-based proofs}
Much of proof generation and checking is still done in the same way as a couple of decades ago, by
logging the sequence of \emph{learnt clauses} in CDCL solvers, sometimes together with antecedents, and checking those
clauses for simple entailment criteria such as \emph{reverse unit propagation}~(RUP) <|cite_start|> (Reference: Verification of proofs of unsatisfiability for CNF formulas: As SAT-algorithms become more and more complex, there is little chance of writing a SAT-solver that is free of bugs. So it is of great importance to be able to verify, the information returned by a SAT-solver. If the CNF formula to be tested is satisfiable, solution verification is trivial and can be easily done by the user. However, in the case of unsatisfiability, the user has to rely on the reputation of the SAT-solver. We describe an efficient procedure for checking the correctness of unsatisfiability proofs. As a by-product, the proposed procedure finds an unsatisfiable core of the initial CNF formula. The efficiency of the proposed procedure was tested on a representative set of large "real-life" CNF formulas from the formal verification domain.) <|cite_end|> <|cite_start|> (Reference: Validating SAT solvers using an independent resolution-based checker: practical implementations and other applications: As the use of SAT solvers as core engines in EDA applications grows, it becomes increasingly important to validate their correctness. In this paper, we describe the implementation of an independent resolution-based checking procedure that can check the validity of unsatisfiable claims produced by the SAT solver zchaff. We examine the practical implementation issues of such a checker and describe two implementations with different pros and cons. Experimental results show low overhead for the checking process. Our checker can work with many other modern SAT solvers with minor modifications, and it can provide information for debugging when checking fails. Finally we describe additional results that can be obtained by the validation process and briefly discuss their applications.) <|cite_end|>.
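As a rough illustration of the RUP criterion (a sketch following the usual definition, not code from any of the cited checkers): a clause $C$ is a RUP with respect to a formula $F$ if unit propagation on $F$ together with the negation of every literal of $C$ produces a conflict.
\begin{verbatim}
# Naive RUP check; clauses are lists of non-zero ints (DIMACS-style literals).
def is_rup(formula, clause):
    assignment = {-lit for lit in clause}   # assume the negation of the clause
    changed = True
    while changed:
        changed = False
        for cl in formula:
            if any(lit in assignment for lit in cl):
                continue                     # clause already satisfied
            unassigned = [lit for lit in cl if -lit not in assignment]
            if not unassigned:
                return True                  # conflict reached: C is a RUP
            if len(unassigned) == 1:         # unit clause: propagate
                assignment.add(unassigned[0])
                changed = True
    return False                             # no conflict: C is not a RUP

# Example: from {x1 v x2, -x1 v x2, x1 v -x2} the clause (x2) is a RUP.
print(is_rup([[1, 2], [-1, 2], [1, -2]], [2]))   # True
\end{verbatim}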
Other parts of the proof are generated using more advanced deduction techniques;
even their infrequent use can dramatically decrease the size of
generated proofs <|cite_start|> (Reference: Verifying Refutations with Extended Resolution: ) <|cite_end|> <|cite_start|> (Reference: Extended Resolution Simulates DRAT: ) <|cite_end|> <|cite_start|> (Reference: What a Difference a Variable Makes: ) <|cite_end|>,
overcoming not only technical limitations in proof generation,
but also theoretical bounds <|cite_start|> (Reference: The Intractability of Resolution: ) <|cite_end|> <|cite_start|> (Reference: Many hard examples for resolution: For every choice of positive integers <italic>c</italic> and <italic>k</italic> such that <italic>k</italic> ≥ 3 and <italic>c</italic>2<supscrpt>-<italic>k</italic></supscrpt> ≥ 0.7, there is a positive number ε such that, with probability tending to 1 as <italic>n</italic> tends to ∞, a randomly chosen family of <italic>cn</italic> clauses of size <italic>k</italic> over <italic>n</italic> variables is unsatisfiable, but every resolution proof of its unsatisfiability must generate at least (1 + ε)<supscrpt><italic>n</italic></supscrpt> clauses.) <|cite_end|> <|cite_start|> (Reference: The Symmetry Rule in Propositional Logic: ) <|cite_end|>.
Clause deletion information is also recorded in the proof, which is needed to reduce memory
footprint in checking <|cite_start|> (Reference: Bridging the Gap between Easy Generation and Efficient Verification of Unsatisfiability Proofs: Several proof formats have been used to verify refutations produced by satisfiability (SAT) solvers. Existing formats are either costly to check or hard to implement. This paper presents a practical approach that facilitates checking of unsatisfiability results in a time similar to proof discovery by embedding clause deletion information into clausal proofs. By exploiting this information, the proof‐checking time is reduced by an order of magnitude on medium‐to‐hard benchmarks as compared to checking proofs using similar clausal formats. Proofs in a new format can be produced by making only minor changes to existing conflict‐driven clause‐learning solvers and their preprocessors, and the runtime overhead is negligible. This approach can easily be integrated into Glucose 2.1, the SAT 2012 challenge winner, and SatELite, a popular SAT‐problem preprocessor. Copyright © 2014 John Wiley & Sons, Ltd.) <|cite_end|>.
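For concreteness, the RUP criterion mentioned above admits a very small reference implementation; the following Python sketch is illustrative only (it is not the implementation of any of the cited checkers) and treats a formula as a list of clauses and a clause as a list of non-zero integer literals, as in the DIMACS convention.
\begin{verbatim}
def unit_propagate(clauses, assignment):
    # Apply the unit rule until fixpoint; return None on conflict.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            candidates = [l for l in clause if -l not in assignment]
            if not candidates:
                return None                      # clause falsified: conflict
            if len(candidates) == 1 and candidates[0] not in assignment:
                assignment.add(candidates[0])    # unit clause: literal is forced
                changed = True
    return assignment

def is_rup(formula, clause):
    # C has RUP with respect to F iff unit propagation refutes F extended
    # with the unit clauses obtained by negating every literal of C.
    return unit_propagate(formula, {-l for l in clause}) is None
\end{verbatim}
A clause accepted by this test is a logical consequence of the formula; the interference-based rules discussed below go beyond such entailment checks.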
Much research has been invested in finding ever more powerful proof rules <|cite_start|> (Reference: Inprocessing Rules: ) <|cite_end|> <|cite_start|> (Reference: Short Proofs Without New Variables: ) <|cite_end|>
that allow one to succinctly express inprocessing techniques such as
Gaussian elimination <|cite_start|> (Reference: Extending SAT Solver with Parity Reasoning: Aalto University, P.O. Box 11000, FI-00076 Aalto www.aalto.fi Author Tero Laitinen Name of the doctoral dissertation Extending SAT Solver with Parity Reasoning Publisher School of Science Unit Department of Information and Computer Science Series Aalto University publication series DOCTORAL DISSERTATIONS 177/2014 Field of research Theoretical Computer Science Manuscript submitted 10 September 2014 Date of the defence 21 November 2014 Permission to publish granted (date) 29 October 2014 Language English Monograph Article dissertation (summary + original articles) Abstract Propositional conflict-driven clause-learning (CDCL) satisfiability (SAT) solvers have been successfully applied in a number of industrial domains. In some application areas such as circuit verification, bounded model checking, logical cryptanalysis, and approximate model counting, some requirements can be succinctly captured with parity (xor) constraints. However, satisfiability solvers that typically operate in conjunctive normal form (CNF) may perform poorly with straightforward translation of parity constraints to CNF.Propositional conflict-driven clause-learning (CDCL) satisfiability (SAT) solvers have been successfully applied in a number of industrial domains. In some application areas such as circuit verification, bounded model checking, logical cryptanalysis, and approximate model counting, some requirements can be succinctly captured with parity (xor) constraints. However, satisfiability solvers that typically operate in conjunctive normal form (CNF) may perform poorly with straightforward translation of parity constraints to CNF. This work studies how CDCL SAT solvers can be enhanced to handle problems with parity constraints using the recently introduced DPLL(XOR) framework where the SAT solver is coupled with a parity constraint solver module. Different xor-deduction systems ranging from plain unit propagation through equivalence reasoning to complete incremental Gauss-Jordan elimination are presented. Techniques to analyze xor-deduction system derivations are developed, allowing one to obtain smaller clausal explanations for implied literals and also to learn new parity constraints in the conflict analysis process. It is proven that these techniques can be used to simulate a complete xor-deduction system on a restricted class of instances and allow very short unsatisfiability proofs for some formulas whose CNF translations are hard for resolution. Fast approximating tests to detect whether unit propagation or equivalence reasoning is enough to deduce all implied literals are presented. Methods to decompose sets of parity constraints into subproblems that can be handled separately are developed. The decomposition methods can greatly reduce the size of parity constraint matrices when using GaussJordan elimination on dense matrices and allow one to choose appropriate xor-deduction system for each subproblem. Efficient translations to simulate equivalence reasoning and stronger parity reasoning are developed. It is shown that equivalence reasoning can be simulated by adding a polynomial amount of redundant parity constraints to the problem, but without using additional variables, an exponential number of parity constraints are needed in the worst case. It is proven that resolution simulates equivalence reasoning efficiently. 
The presented techniques are experimentally evaluated on a variety of challenging problems originating from a number of encryption ciphers and from SAT Competition benchmark instances.) <|cite_end|> <|cite_start|> (Reference: Enhanced Gaussian Elimination in DPLL-based SAT Solvers: When cryptographical problems are treated in SAT solvers, they often contain large set of XOR constraints. Treating these XOR constraints through on-the-fly Gaussian elimination during solving has been shown to be a viable approach by Soos et al.[16]. We describe various enhancements to this scheme which increase the performance and mostly eliminate the need for manual tuning of parameters. With these enhancements, we were able achieve speedups of up to 29% on the Bivium and up to 45% on the Trivium ciphers, contrary to the 1-5% speedup achieved by the original scheme.) <|cite_end|> <|cite_start|> (Reference: Sorting Parity Encodings by Reusing Variables: ) <|cite_end|> <|cite_start|> (Reference: Certifying Parity Reasoning Efficiently Using Pseudo-Boolean Proofs: The dramatic improvements in combinatorial optimization algorithms over the last decades have had a major impact in artificial intelligence, operations research, and beyond, but the output of current state-of-the-art solvers is often hard to verify and is sometimes wrong. For Boolean satisfiability (SAT) solvers proof logging has been introduced as a way to certify correctness, but the methods used seem hard to generalize to stronger paradigms. What is more, even for enhanced SAT techniques such as parity (XOR) reasoning, cardinality detection, and symmetry handling, it has remained beyond reach to design practically efficient proofs in the standard DRAT format. In this work, we show how to instead use pseudo-Boolean inequalities with extension variables to concisely justify XOR reasoning. Our experimental evaluation of a SAT solver integration shows a dramatic decrease in proof logging and verification time compared to existing DRAT methods. Since our method is a strict generalization of DRAT, and readily lends itself to expressing also 0-1 programming and even constraint programming problems, we hope this work points the way towards a unified approach for efficient machine-verifiable proofs for a rich class of combinatorial optimization paradigms.) <|cite_end|> or
symmetry breaking <|cite_start|> (Reference: Solving difficult instances of boolean satisfiability in the presence of symmetry: Research in algorithms for Boolean satisfiability (SAT) and their implementations (Goldberg and Novikov, 2002), (Moskewicz et al., 2001), (Silva and Sakallah, 1999) has recently outpaced benchmarking efforts. Most of the classic DIMACS benchmarks (ftp:dimacs.rutgers.edu/pub/challenge/sat/benchmarks/cnf ) can now be solved in seconds on commodity PCs. More recent benchmarks (Velev and Bryant, 2001) take longer to solve due to their large size, but are still solved in minutes. Yet, relatively small and difficult SAT instances must exist if P /spl ne/ NP. To this end, our paper articulates SAT instances that are unusually difficult for their size, including satisfiable instances derived from very large scale integration (VLSI) routing problems. With an efficient implementation to solve the graph automorphism problem (McKay, 1990), (Soicher, 1993) (Spitznagel, 1994), we show that in structured SAT instances, difficulty may be associated with large numbers of symmetries. We point out that a previously published symmetry extraction mechanism (Crawford et al., 1996) based on a reduction to the graph automorphism problem often produces many spurious symmetries. Our paper contributes two new reductions to graph automorphism, which extract all correct symmetries found previously (Crawford et al., 1996) as well as phase-shift symmetries not found earlier. The correctness of our reductions is rigorously proven, and they are evaluated empirically. We also formulate an improved construction of symmetry-breaking clauses in terms of permutation cycles and propose to use only generators of symmetries in this process. These ideas are implemented in a fully automated flow that first extracts symmetries from a given SAT instance, preprocesses it by adding symmetry-breaking clauses, and then calls a state-of-the-art backtrack SAT solver. Significant speed-ups are shown on many benchmarks versus direct application of the solver. In an attempt to further improve the practicality of our approach, we propose a scheme for fast "opportunistic" symmetry extraction and also show that considerations of symmetry may lead to more efficient reductions to SAT in the VLSI routing domain.) <|cite_end|> <|cite_start|> (Reference: Efficient Symmetry Breaking for Boolean Satisfiability: Identifying and breaking the symmetries of conjunctive normal form (CNF) formulae has been shown to lead to significant reductions in search times. Symmetries in the search space are broken by adding appropriate symmetry-breaking predicates (SBPs) to an SAT instance in CNF. The SBPs prune the search space by acting as a filter that confines the search to nonsymmetric regions of the space without affecting the satisfiability of the CNF formula. For symmetry breaking to be effective in practice, the computational overhead of generating and manipulating SBPs must be significantly less than the runtime savings they yield due to search space pruning. In this paper, we describe a more systematic and efficient construction of SBPs. In particular, we use the cycle structure of symmetry generators, which typically involve very few variables, to drastically reduce the size of SBPs. Furthermore, our new SBP construction grows linearly with the number of relevant variables as opposed to the previous quadratic constructions. 
Our empirical data suggest that these improvements reduce search runtimes by one to two orders of magnitude on a wide variety of benchmarks with symmetries.) <|cite_end|> <|cite_start|> (Reference: Expressing Symmetry Breaking in DRAT Proofs: ) <|cite_end|>.
These proof rules are collectively called \emph{interference-based rules},
since their derivation depends on the whole formula
rather than just on the presence of some specific clauses <|cite_start|> (Reference: Inprocessing Rules: ) <|cite_end|> <|cite_start|> (Reference: The Potential of Interference-Based Proof Systems: We want to encourage researchers to investigate the potential of proof systems that modify a given set of formulas (e.g., a set of clauses in propositional logic) in a way that preserves satisfiability but not necessarily logical equivalence. We call such modifications interferences, because they can change the models of a given set of formulas. Interferences differ from classical inferences, which do not affect the models of a set of formulas, because they only allow the derivation of formulas (conclusions) that are implied by the original formulas (premises). Moreover, while inferences reason about the presence of formulas (the premises), interferences can be seen as reasoning about their absence. Most traditional proof systems such as Frege systems, sequent calculi, or resolution-based systems use conventional inference rules. Popular examples of these rules are the modus ponens (left) and the propositional resolution rule (right):) <|cite_end|> <|cite_start|> (Reference: Towards a Semantics of Unsatisfiability Proofs with Inprocessing: Delete Resolution Asymmetric Tautology (DRAT) proofs have become a de facto standard to certify unsatisfiability results from SAT solvers with inprocessing. However, DRAT shows behaviors notably different from other proof systems: DRAT inferences are nonmonotonic, and clauses that are not consequences of the premises can be derived. In this paper, we clarify some discrepancies on the notions of reverse unit propagation (RUP) clauses and asymmetric tautologies (AT), and furthermore develop the concept of resolution consequences. This allows us to present an intuitive explanation of RAT in terms of permissive definitions. We prove that a formula derived using RATs can be stratified into clause sets depending on which definitions they require, which give a strong invariant along RAT proofs. We furthermore study its interaction with clause deletion, characterizing DRAT derivability as satisfiability-preservation.) <|cite_end|> <|cite_start|> (Reference: A Theory of Satisfiability-Preserving Proofs in SAT Solving: We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.) <|cite_end|>.
One of the most general interference techniques is \emph{substitution redundancy} (SR), which allows a version of
reasoning without loss of generality; this technique has been recently lifted to
pseudo-Boolean reasoning with impressive results <|cite_start|> (Reference: Certifying Parity Reasoning Efficiently Using Pseudo-Boolean Proofs: The dramatic improvements in combinatorial optimization algorithms over the last decades have had a major impact in artificial intelligence, operations research, and beyond, but the output of current state-of-the-art solvers is often hard to verify and is sometimes wrong. For Boolean satisfiability (SAT) solvers proof logging has been introduced as a way to certify correctness, but the methods used seem hard to generalize to stronger paradigms. What is more, even for enhanced SAT techniques such as parity (XOR) reasoning, cardinality detection, and symmetry handling, it has remained beyond reach to design practically efficient proofs in the standard DRAT format. In this work, we show how to instead use pseudo-Boolean inequalities with extension variables to concisely justify XOR reasoning. Our experimental evaluation of a SAT solver integration shows a dramatic decrease in proof logging and verification time compared to existing DRAT methods. Since our method is a strict generalization of DRAT, and readily lends itself to expressing also 0-1 programming and even constraint programming problems, we hope this work points the way towards a unified approach for efficient machine-verifiable proofs for a rich class of combinatorial optimization paradigms.) <|cite_end|>.
\paragraph*{Substitution redundancy and the pigeonhole problem}
A previous version of SR, called \emph{propagation redundancy}~(PR) <|cite_start|> (Reference: Short Proofs Without New Variables: ) <|cite_end|>, was successful
in achieving short proofs of the pigeonhole problem, known for having exponential proofs in resolution <|cite_start|> (Reference: The Intractability of Resolution: ) <|cite_end|>
and polynomial yet cumbersome proofs in extended resolution <|cite_start|> (Reference: A short proof of the pigeon hole principle using extended resolution: asserts intuitively that there is a one-one map from set of clauses must be inconsistent. Several years ago, Dick Karp (for one) noticed that there didn't seem to be any short (i.e. polynomial in n) resolution refutation of the set Sn, and posed the problem of trying to prove this. In fact, I believe the shortest resolution refutation known for S has n (n-l)(n+2)2 n-3 clauses, but no one has been able to prove a non-polynomial lower bound on an arbitrary resolution refutation of S n After reading Tseitin's paper [I] describing extended resolution (ER), the question arose whether there exists a short ER refutation of S n. It turns out that such a short refutation does exist, and it is the purpose of this note to describe it and show briefly how it motivated my paper [2] on feasibly constructive proofs.) <|cite_end|>.
The proof from <|cite_start|> (Reference: Short Proofs Without New Variables: ) <|cite_end|> can be understood in terms of reasoning without loss of
generality <|cite_start|> (Reference: A Theory of Satisfiability-Preserving Proofs in SAT Solving: We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.) <|cite_end|>: it assumes that a given pigeon is in a given pigeonhole,
for otherwise we could swap pigeons around.
PR does not have a method to swap the values of variables;
rather, it can only conditionally set them to true or false.
Hence, linearly many reasoning steps are needed just to achieve the swap.
SR, on the other hand, allows variable swaps, so one could expect that the clause expressing the result
of this swap would satisfy the SR property. Surprisingly, it does not;
in fact, the clause fails to satisfy a requirement that
was almost trivial in its PR version.
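For reference (and only to fix notation; this is the standard clausal encoding), the pigeonhole formula over variables $p_{i,j}$, read as ``pigeon $i$ sits in hole $j$'', is
\[
\bigwedge_{i=1}^{n+1} \Big(\bigvee_{j=1}^{n} p_{i,j}\Big)
\;\wedge\;
\bigwedge_{j=1}^{n} \;\bigwedge_{1 \le i < i' \le n+1} \big(\lnot p_{i,j} \vee \lnot p_{i',j}\big).
\]
Swapping two pigeons $i$ and $i'$ exchanges $p_{i,j}$ and $p_{i',j}$ for every hole $j$; this is a single substitution, whereas a rule that can only conditionally assign truth values must emulate the exchange hole by hole.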
\paragraph*{Interference and logical dependency}
Interference-based proofs do not have a ``dependence'' or ``precedence'' structure:
since the ability to introduce a clause is contingent on the whole formula,
no notion of ``antecedents'' exists for SR and its predecessors.
This becomes a problem when computing unsatisfiable cores and trimmed proofs <|cite_start|> (Reference: Efficient MUS Enumeration of Horn Formulae with Applications to Axiom Pinpointing: ) <|cite_end|>;
it also has the potential
to harm the performance of proof checkers, since some techniques that allow skipping unnecessary steps
during proof checking are based on logical dependence <|cite_start|> (Reference: Trimming While Checking Clausal Proofs: Conflict-driven clause learning (CDCL) satisfiability solvers can emit more than a satisfiability result; they can also emit clausal proofs, resolution proofs, unsatisfiable cores, and Craig interpolants. Such additional results may require substantial modifications to a solver, especially if preprocessing and inprocessing techniques are used; however, CDCL solvers can easily emit clausal proofs with very low overhead. We present a new approach with an associated tool that efficiently validates clausal proofs and can distill additional results from clausal proofs. Our tool architecture makes it easy to obtain such results from any CDCL solver. Experimental evaluation shows that our tool can validate clausal proofs faster than existing tools. Additionally, the quality of the additional results, such as unsatisfiable cores, is higher when compared to modified SAT solvers.) <|cite_end|>.
This also relates to an issue arising when generating proof fragments for inprocessing techniques.
Sometimes, a clause $C$ cannot be introduced as SR because some lemmas are needed;
the proof generator might know these lemmas and how to derive them.
However, because interference depends on the whole formula,
introducing the lemmas before $C$ can further constrain the requirements
for $C$ to be introduced, demanding yet more lemmas.
\paragraph*{Contributions}
Previous work showed that the semantics of PR can be expressed in terms of \emph{overwrite logic} <|cite_start|> (Reference: A Theory of Satisfiability-Preserving Proofs in SAT Solving: We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.) <|cite_end|>.
Overwrite logic extends propositional logic with an \emph{overwrite operator}.
Within overwrite logic, DPR proofs can be regarded as DAG-shaped, model-preserving proofs;
PR introduction can then be shown to behave as reasoning without loss of generality.
In Section~\ref{sec:mutation} we provide an extension to the overwrite logic framework, called \emph{mutation logic},
which elucidates the semantics of DSR proofs. In particular,
model-preserving proofs within mutation logic mimicking satisfiability-preserving DSR proofs
can be extracted, as shown in Section~\ref{ssc:entailment}. This allows a clearer understanding of the SR redundancy rule,
which in turn makes some improvements over SR apparent.
By introducing minor modifications to the definition of SR, in Section~\ref{sec:extensions} we obtain a new,
more powerful redundancy rule called \emph{weak substitution redundancy}~(WSR).
WSR proofs are more succinct than DSR proofs, which we demonstrate
by providing a shorter proof of the pigeonhole problem using only $O(n^2)$ clause introductions in Section~\ref{ssc:php}.
Furthermore, WSR enables finer-grained ways to reason about dependency in interference-based proofs.
This can yield shorter proof checking runtimes and smaller trimmed proofs and unsatisfiable cores
when SR clauses are used (Section~\ref{ssc:cores}), as well as easier proof generation techniques by providing
clearer separation for interference lemmas (Section~\ref{ssc:lemmas}). <|paper_end|> | [
"<|reference_start|> Schur Number Five: We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number $n$ such that there exists a five-coloring of the positive numbers up to $n$ without a monochromatic solution of the equation $a + b = c$? We obtained the solution, $n = 160$, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques on the resulting formula. We constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems. <|reference_end|>",
"<|reference_start|> Inprocessing Rules: <|reference_end|>",
"<|reference_start|> Many hard examples for resolution: For every choice of positive integers <italic>c</italic> and <italic>k</italic> such that <italic>k</italic> ≥ 3 and <italic>c</italic>2<supscrpt>-<italic>k</italic></supscrpt> ≥ 0.7, there is a positive number ε such that, with probability tending to 1 as <italic>n</italic> tends to ∞, a randomly chosen family of <italic>cn</italic> clauses of size <italic>k</italic> over <italic>n</italic> variables is unsatisfiable, but every resolution proof of its unsatisfiability must generate at least (1 + ε)<supscrpt><italic>n</italic></supscrpt> clauses. <|reference_end|>",
"<|reference_start|> Inprocessing Rules: <|reference_end|>"
] | [
1,
8,
21,
33
] | {"<|multi_cite_1_1|>": "arxiv-97157", "<|multi_cite_1_2|>": "arxiv-140877", "<|multi_cite_2_1|>": "ss-1536892", "<|multi_cite_2_4|>": "ss-1359705", "<|multi_cite_2_5|>": "ss-1841765", "<|multi_cite_2_6|>": "ss-1841766", "<|multi_cite_2_8|>": "arxiv-448646", "<|multi_cite_2_9|>": "ss-1841767", "<|multi_cite_3_1|>": "ss-1380624", "<|multi_cite_3_2|>": "ss-1841768", "<|multi_cite_3_3|>": "ss-1841769", "<|multi_cite_4_1|>": "ss-1841770", "<|multi_cite_4_2|>": "ss-1841769", "<|cite_6|>": "ss-1359705", "<|cite_7|>": "ss-1841769", "<|multi_cite_8_1|>": "ss-1536892", "<|multi_cite_8_2|>": "ss-1841771", "<|multi_cite_9_1|>": "ss-1841772", "<|multi_cite_9_3|>": "ss-1360009", "<|multi_cite_9_4|>": "ss-772185", "<|multi_cite_10_1|>": "ss-902058", "<|multi_cite_10_2|>": "ss-1286353", "<|multi_cite_10_3|>": "ss-1448119", "<|cite_11|>": "ss-1536891", "<|multi_cite_12_1|>": "ss-1380624", "<|multi_cite_12_2|>": "ss-1359705", "<|multi_cite_13_1|>": "ss-1841773", "<|multi_cite_13_2|>": "ss-1607163", "<|multi_cite_13_4|>": "ss-1841774", "<|multi_cite_13_5|>": "arxiv-448646", "<|multi_cite_14_1|>": "ss-1697381", "<|multi_cite_14_2|>": "ss-1841775", "<|multi_cite_14_3|>": "ss-1360006", "<|multi_cite_15_1|>": "ss-1380624", "<|multi_cite_15_2|>": "ss-1841768", "<|multi_cite_15_3|>": "ss-1841770", "<|multi_cite_15_4|>": "ss-1841769", "<|cite_17|>": "arxiv-448646", "<|cite_18|>": "ss-1359705", "<|cite_19|>": "ss-902058", "<|cite_20|>": "ss-2338089", "<|cite_21|>": "ss-1359705", "<|cite_22|>": "ss-1841769", "<|cite_23|>": "ss-1531815", "<|cite_24|>": "ss-956371", "<|cite_25|>": "ss-1841769"} |
2203.14905 | <|paper_start|> Title: Non-Parametric Stochastic Policy Gradient with Strategic Retreat for Non-Stationary Environment
Abstract: Non-Parametric Stochastic Policy Gradient with Strategic Retreat for Non-Stationary Environment: In modern robotics, effectively computing optimal control policies under dynamically varying environments poses substantial challenges to the off-the-shelf parametric policy gradient methods, such as the Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic policy gradient (TD3). In this paper, we propose a systematic methodology to dynamically learn a sequence of optimal control policies non-parametrically, while autonomously adapting with the constantly changing environment dynamics. Specifically, our non-parametric kernel-based methodology embeds a policy distribution as the features in a non-decreasing Euclidean space, therefore allowing its search space to be defined as a very high (possible infinite) dimensional RKHS (Reproducing Kernel Hilbert Space). Moreover, by leveraging the similarity metric computed in RKHS, we augmented our non-parametric learning with the technique of AdaptiveH- adaptively selecting a time-frame window of finishing the optimal part of whole action-sequence sampled on some preceding observed state. To validate our proposed approach, we conducted extensive experiments with multiple classic benchmarks and one simulated robotics benchmark equipped with dynamically changing environments. Overall, our methodology has outperformed the well-established DDPG and TD3 methodology by a sizeable margin in terms of learning performance.
Introduction
Reinforcement Learning (RL) has revolutionized the process of learning optimal
action mapping policies in diverse problem domains
ranging from video games <|cite_start|> (Reference: Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.) <|cite_end|> to robotic navigation <|cite_start|> (Reference: How to Train Your Robot with Deep Reinforcement Learning: Lessons We Have Learned: Deep reinforcement learning (RL) has emerged as a promising approach for autonomously acquiring complex behaviors from low-level sensor observations. Although a large portion of deep RL research has focused on applications in video games and simulated control, which does not connect with the constraints of learning in real environments, deep RL has also demonstrated promise in enabling physical robots to learn complex skills in the real world. At the same time, real-world robotics provides an appealing domain for evaluating such algorithms, as it connects directly to how humans learn: as an embodied agent in the real world. Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains. In this review article, we present a number of case studies involving robotic deep RL. Building off of these case studies, we discuss commonly perceived challenges in deep RL and how they have been addressed in these works. We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting and are not often the focus of mainstream RL research. Our goal is to provide a resource both for roboticists and machine learning researchers who are interested in furthering the progress of deep RL in the real world.) <|cite_end|>.
Among many Deep Reinforcement Learning (DRL) algorithms,
DDPG (Deep Deterministic Policy Gradient) <|cite_start|> (Reference: Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.) <|cite_end|> and its upgraded variant TD3 (Twin Delayed Deep Deterministic Policy Gradient) <|cite_start|> (Reference: Addressing Function Approximation Error in Actor-Critic Methods: In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.) <|cite_end|>,
have attained immense success in robotics
through
strategically combining optimal
deterministic policy search algorithms and value-function learning.
Despite many successes of DRL in robotic learning,
the vast majority of these studies restrictively assume
the model of their operating environments to be static.
In fact, while extensive research
has been conducted
on policy gradient methods
in order to
handle real-world high dimensional continuous space environments,
much less attention has been dedicated to formulating algorithms that handle non-stationary
systems.
Unfortunately, real-world environments in robotics are
typically nonstationary and evolving,
therefore governed by dynamically changing transition dynamics
often due to entirely imperceptible causes.
As such,
learning an optimal control policy under dynamically varying environments
with conventional methodologies, such as DDPG or TD3,
becomes both computationally intensive and quite time-consuming.
Specifically,
while interacting with a dynamically {\em self-altering} environment, actions
suggested by a deterministic policy on the basis of the preceding state observation
become outdated as the state space keeps evolving. On the contrary,
by learning a stochastic policy in a continuously evolving environment, more
probable actions can be sampled from the policy. Moreover, the deterministic
way of learning assumes that a stationary transition
dynamics and reward distribution exist in the environment <|cite_start|> (Reference: Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.) <|cite_end|> <|cite_start|> (Reference: Deterministic Policy Gradient Algorithms: In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.) <|cite_end|>. As a result, deterministic policy gradient method performs suboptimally, when the
learning algorithm is exposed to problems where transition functions and reward
distributions are changing in a non-deterministic fashion. Comparatively, a
probabilistic approach for action sampling from a policy distribution can
suggest actions better suited to handling the stochasticity of the dynamic
environment. In particular, probabilistic algorithms for optimization
problems or perception learning often exhibit robustness to sensory
irregularities, random perturbations and the stochastic nature of environments, and
scale much better to complex environments in which the agent needs to handle
unknown uncertainty in a robust and adaptive way <|cite_start|> (Reference: Probabilistic Algorithms in Robotics: This article describes a methodology for programming robots known as probabilistic robotics. The probabilistic paradigm pays tribute to the inherent uncertainty in robot perception, relying on explicit representations of uncertainty when determining what to do. This article surveys some of the progress in the field, using in-depth examples to illustrate some of the nuts and bolts of the basic approach. My central conjecture is that the probabilistic approach to robotics scales better to complex real-world applications than approaches that ignore a robot's uncertainty.) <|cite_end|>.
To overcome all these thorny challenges, which are fundamentally caused by
the non-stationarity of the underlying environment,
we formulate a scheme for
adaptively learning a sequence of action-mapping functionals
by embedding the functionals into a vector-valued kernel space instead of a fixed parametric
space and by representing policy distributions defined over a high (possibly
infinite) dimensional RKHS <|cite_start|> (Reference: {Modelling Policies in MDPs in Reproducing Kernel Hilbert Space: We consider modelling policies for MDPs in (vector-valued) reproducing kernel Hilbert function spaces (RKHS). This enables us to work “non-parametrically” in a rich function class, and provides the ability to learn complex policies. We present a framework for performing gradientbased policy optimization in the RKHS, deriving the functional gradient of the return for our policy, which has a simple form and can be estimated efficiently. The policy representation naturally focuses on the relevant region of state space defined by the policy trajectories, and does not rely on a-priori defined basis points; this can be an advantage in high dimensions where suitable basis points may be difficult to define a-priori. The method is adaptive in the sense that the policy representation will naturally adapt to the complexity of the policy being modelled, which is achieved with standard efficient sparsification tools in an RKHS. We argue that finding a good kernel on states can be easier then remetrizing a high dimensional feature space. We demonstrate the approach on benchmark domains and a simulated quadrocopter navigation task.) <|cite_end|>.
In this paper, we develop a new class of non-parametric policy gradient methods <|cite_start|> (Reference: Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes: Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities. In this paper we consider the problem of finding optimal policies assuming that they belong to a reproducing kernel Hilbert space (RKHS). To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy. A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed. Hence preventing these algorithms to be fully implemented online, which is a desirable property for systems that need to adapt to new tasks and/or environments in deployment. The main requirement for a policy gradient algorithm to work is that the estimate of the gradient at any point in time is an ascent direction for the initial value function. In this work we establish that indeed this is the case which enables to show the convergence of the online algorithm to the critical points of the initial value function. A numerical example shows the ability of our online algorithm to learn to solve a navigation and surveillance problem, in which an agent must loop between to goal locations. This example corroborates our theoretical findings about the ascent directions of subsequent stochastic gradients. It also shows how the agent running our online algorithm succeeds in learning to navigate, following a continuing cyclic trajectory that does not comply with the standard stationarity assumptions in the literature for non episodic training.) <|cite_end|>
which ensures a gradient ascent direction during optimization while
adaptively executing only those actions that are compatible with the concurrent
dynamics of the environment.
We validate both theoretically and experimentally that our proposed approach
can handle non-stationary environments with a hidden, but bounded, degree of dynamic evolution. In most cases, such
an upper bound on the non-stationary transition dynamics is reasonably ensured
by assuming that the evolution rate of the non-stationary MDP is bounded
by Lipschitz Continuity assumption <|cite_start|> (Reference: Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes: Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities. In this paper we consider the problem of finding optimal policies assuming that they belong to a reproducing kernel Hilbert space (RKHS). To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy. A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed. Hence preventing these algorithms to be fully implemented online, which is a desirable property for systems that need to adapt to new tasks and/or environments in deployment. The main requirement for a policy gradient algorithm to work is that the estimate of the gradient at any point in time is an ascent direction for the initial value function. In this work we establish that indeed this is the case which enables to show the convergence of the online algorithm to the critical points of the initial value function. A numerical example shows the ability of our online algorithm to learn to solve a navigation and surveillance problem, in which an agent must loop between to goal locations. This example corroborates our theoretical findings about the ascent directions of subsequent stochastic gradients. It also shows how the agent running our online algorithm succeeds in learning to navigate, following a continuing cyclic trajectory that does not comply with the standard stationarity assumptions in the literature for non episodic training.) <|cite_end|>, <|cite_start|> (Reference: Non-Stationary Markov Decision Processes, a Worst-Case Approach using Model-Based Reinforcement Learning, Extended version: This work tackles the problem of robust zero-shot planning in non-stationary stochastic environments. We study Markov Decision Processes (MDPs) evolving over time and consider Model-Based Reinforcement Learning algorithms in this setting. We make two hypotheses: 1) the environment evolves continuously with a bounded evolution rate; 2) a current model is known at each decision epoch but not its evolution. Our contribution can be presented in four points. 1) we define a specific class of MDPs that we call Non-Stationary MDPs (NSMDPs). We introduce the notion of regular evolution by making an hypothesis of Lipschitz-Continuity on the transition and reward functions w.r.t. time; 2) we consider a planning agent using the current model of the environment but unaware of its future evolution. This leads us to consider a worst-case method where the environment is seen as an adversarial agent; 3) following this approach, we propose the Risk-Averse Tree-Search (RATS) algorithm, a zero-shot Model-Based method similar to Minimax search; 4) we illustrate the benefits brought by RATS empirically and compare its performance with reference Model-Based algorithms.) <|cite_end|>.
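To make this assumption concrete, one common formalization (following the Lipschitz-continuous NS-MDP setting referenced above; the particular metric is a modeling choice) requires, for all state-action pairs $(s,a)$ and all times $t, t'$,
\[
W_1\!\big(P_t(\cdot \mid s,a),\, P_{t'}(\cdot \mid s,a)\big) \;\le\; L_p\, |t - t'|,
\qquad
\big|\, r_t(s,a) - r_{t'}(s,a) \,\big| \;\le\; L_r\, |t - t'|,
\]
where $W_1$ denotes the 1-Wasserstein distance between transition distributions and $L_p, L_r$ are finite (but unknown) evolution-rate constants bounding how fast the dynamics and the reward may drift.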
{\em Why Non-Parametric Reinforcement Learning?} ---
In a real-world robotic setting, policy computation and policy inference
incur a certain latency, and it is impossible to pause the environment within the
continuous stream of the operating phase.
An adaptive learning framework is therefore required to handle
dynamically varying transition distributions and reward functions by
updating the policy distribution more frequently.
Unfortunately, in a parametric setting like DDPG, deriving a gradient-similarity
metric with which to find an adaptive termination point for frequent policy updates is
mathematically very difficult, and the gradients often cannot be easily estimated;
both can be done easily in an RKHS based non-parametric learning
framework <|cite_start|> (Reference: {Modelling Policies in MDPs in Reproducing Kernel Hilbert Space: We consider modelling policies for MDPs in (vector-valued) reproducing kernel Hilbert function spaces (RKHS). This enables us to work “non-parametrically” in a rich function class, and provides the ability to learn complex policies. We present a framework for performing gradientbased policy optimization in the RKHS, deriving the functional gradient of the return for our policy, which has a simple form and can be estimated efficiently. The policy representation naturally focuses on the relevant region of state space defined by the policy trajectories, and does not rely on a-priori defined basis points; this can be an advantage in high dimensions where suitable basis points may be difficult to define a-priori. The method is adaptive in the sense that the policy representation will naturally adapt to the complexity of the policy being modelled, which is achieved with standard efficient sparsification tools in an RKHS. We argue that finding a good kernel on states can be easier then remetrizing a high dimensional feature space. We demonstrate the approach on benchmark domains and a simulated quadrocopter navigation task.) <|cite_end|>, <|cite_start|> (Reference: Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes: Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities. In this paper we consider the problem of finding optimal policies assuming that they belong to a reproducing kernel Hilbert space (RKHS). To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy. A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed. Hence preventing these algorithms to be fully implemented online, which is a desirable property for systems that need to adapt to new tasks and/or environments in deployment. The main requirement for a policy gradient algorithm to work is that the estimate of the gradient at any point in time is an ascent direction for the initial value function. In this work we establish that indeed this is the case which enables to show the convergence of the online algorithm to the critical points of the initial value function. A numerical example shows the ability of our online algorithm to learn to solve a navigation and surveillance problem, in which an agent must loop between to goal locations. This example corroborates our theoretical findings about the ascent directions of subsequent stochastic gradients. It also shows how the agent running our online algorithm succeeds in learning to navigate, following a continuing cyclic trajectory that does not comply with the standard stationarity assumptions in the literature for non episodic training.) <|cite_end|>. Parametric methods also
suffer from two major disadvantages. Firstly, for a parametric model like DDPG, it
is very difficult
to initialize a suitable prior parameter
matrix: a matrix that is too large can be computationally expensive, while one that is
too small will be unable to fit the complex policies that need to be learned for action
mapping <|cite_start|> (Reference: {Modelling Policies in MDPs in Reproducing Kernel Hilbert Space: We consider modelling policies for MDPs in (vector-valued) reproducing kernel Hilbert function spaces (RKHS). This enables us to work “non-parametrically” in a rich function class, and provides the ability to learn complex policies. We present a framework for performing gradientbased policy optimization in the RKHS, deriving the functional gradient of the return for our policy, which has a simple form and can be estimated efficiently. The policy representation naturally focuses on the relevant region of state space defined by the policy trajectories, and does not rely on a-priori defined basis points; this can be an advantage in high dimensions where suitable basis points may be difficult to define a-priori. The method is adaptive in the sense that the policy representation will naturally adapt to the complexity of the policy being modelled, which is achieved with standard efficient sparsification tools in an RKHS. We argue that finding a good kernel on states can be easier then remetrizing a high dimensional feature space. We demonstrate the approach on benchmark domains and a simulated quadrocopter navigation task.) <|cite_end|>. Secondly, when the dimension of problem goes
higher, the cost of a policy update in an algorithm like DDPG grows exponentially. Such
updates can cause exploding gradients when dealing with unbounded system
variances. To address these issues, non-parametric policy learning leverages
transition distributions as kernel embeddings in an RKHS whose complexity
remains linear in the dimensionality of the problem space. A non-parametric
representation of control policies is adaptive, as the RKHS-based kernel trick
enables the agent to learn complex, but sufficient, representations of stochastic control policies when required.
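As an illustration of what such a representation can look like in practice, the following sketch implements a stochastic policy whose mean is a kernel expansion over stored states; the Gaussian kernel, the fixed exploration noise, and all names here are illustrative assumptions rather than the exact construction used in this paper.
\begin{verbatim}
import numpy as np

def rbf_kernel(s1, s2, bandwidth=1.0):
    # Gaussian (RBF) kernel between two state vectors.
    d = np.asarray(s1, dtype=float) - np.asarray(s2, dtype=float)
    return float(np.exp(-d.dot(d) / (2.0 * bandwidth ** 2)))

class KernelPolicy:
    """Non-parametric policy: mean(s) = sum_i alpha_i * k(s_i, s), with one
    weight vector alpha_i per stored kernel centre s_i; actions are drawn
    from a Gaussian centred at mean(s)."""

    def __init__(self, action_dim, bandwidth=1.0, sigma=0.1):
        self.centres = []            # stored states s_i
        self.alphas = []             # weight vectors alpha_i (one per centre)
        self.action_dim = action_dim
        self.bandwidth = bandwidth
        self.sigma = sigma           # exploration noise of the stochastic policy

    def mean(self, s):
        mu = np.zeros(self.action_dim)
        for c, a in zip(self.centres, self.alphas):
            mu += a * rbf_kernel(c, s, self.bandwidth)
        return mu

    def sample(self, s, rng=np.random):
        return rng.normal(self.mean(s), self.sigma)

    def add_centre(self, s, alpha):
        # Each functional-gradient step adds a new centre; without
        # sparsification the dictionary grows with every experience.
        self.centres.append(np.asarray(s, dtype=float))
        self.alphas.append(np.asarray(alpha, dtype=float))
\end{verbatim}
The model complexity is determined by the number of stored centres rather than by a parameter matrix fixed in advance, which is precisely the adaptivity argued for above.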
{\em Why Strategic Retreats or Adaptive-H?} ---
Unfortunately, kernel-based non-parametric learning possesses one major
disadvantage: non-parametric methods require a lot of training data to
accurately model the action-mapping functions. RL itself is a data-hungry
method. As a result, the kernel matrix, often stated as Gram Matrix <|cite_start|> (Reference: Learning the Kernel Matrix with Semi-Definite Programming: Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space—classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via Semi-Definite Programming techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm—using the labelled part of the data one can learn an “optimal” embedding also for the unlabelled part. The induced similarity between test points is learned by using training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Finally, the novel approach presented in the paper is supported by positive empirical results.) <|cite_end|>
becomes huge in dimension and, in turn, brings about the problem of {\em
memory-explosion}. Consequently, the learning rate for optimization becomes very slow,
and the kernel centers in the kernel dictionary increase exponentially with more
experiences from the environment. Since a part of the sampled
action trajectory becomes obsolete as the state space evolves, the problem of memory-explosion can be prevented by
adaptively presetting a termination trigger that stops {\em non-optimal} action
execution within an episodic trajectory and performs a policy update. Thus, the algorithm can be made truly
scalable to very high-dimensional problems and sample-efficient by truncating the full trajectories at the right moment.
We term this new formulation, used together with a non-parametric policy
gradient, \texttt{AdaptiveH}.
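To quantify the issue: with $N$ stored kernel centres $s_1,\dots,s_N$, the Gram matrix is $\mathbf{K}\in\mathbb{R}^{N\times N}$ with entries $\mathbf{K}_{ij}=k(s_i,s_j)$, so its storage alone grows quadratically in the number of retained experiences; truncating trajectories early bounds how fast $N$ can grow.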
{\em Why Combining RKHS Method and Strategic Retreats?} ---
The policy gradients with respect to the action-mapping mean functional can be
easily and efficiently estimated in an RKHS via the Fr\'echet
Derivative <|cite_start|> (Reference: {Modelling Policies in MDPs in Reproducing Kernel Hilbert Space: We consider modelling policies for MDPs in (vector-valued) reproducing kernel Hilbert function spaces (RKHS). This enables us to work “non-parametrically” in a rich function class, and provides the ability to learn complex policies. We present a framework for performing gradientbased policy optimization in the RKHS, deriving the functional gradient of the return for our policy, which has a simple form and can be estimated efficiently. The policy representation naturally focuses on the relevant region of state space defined by the policy trajectories, and does not rely on a-priori defined basis points; this can be an advantage in high dimensions where suitable basis points may be difficult to define a-priori. The method is adaptive in the sense that the policy representation will naturally adapt to the complexity of the policy being modelled, which is achieved with standard efficient sparsification tools in an RKHS. We argue that finding a good kernel on states can be easier then remetrizing a high dimensional feature space. We demonstrate the approach on benchmark domains and a simulated quadrocopter navigation task.) <|cite_end|>. Such gradient estimation is not
mathematically sound in the parametric TD3 and DDPG methods. Correspondingly, since policy
distributions are treated as elements of an RKHS, and an RKHS inherits all the properties of a vector
space, a dot product based similarity metric <|cite_start|> (Reference: Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes: Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities. In this paper we consider the problem of finding optimal policies assuming that they belong to a reproducing kernel Hilbert space (RKHS). To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy. A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed. Hence preventing these algorithms to be fully implemented online, which is a desirable property for systems that need to adapt to new tasks and/or environments in deployment. The main requirement for a policy gradient algorithm to work is that the estimate of the gradient at any point in time is an ascent direction for the initial value function. In this work we establish that indeed this is the case which enables to show the convergence of the online algorithm to the critical points of the initial value function. A numerical example shows the ability of our online algorithm to learn to solve a navigation and surveillance problem, in which an agent must loop between to goal locations. This example corroborates our theoretical findings about the ascent directions of subsequent stochastic gradients. It also shows how the agent running our online algorithm succeeds in learning to navigate, following a continuing cyclic trajectory that does not comply with the standard stationarity assumptions in the literature for non episodic training.) <|cite_end|> can be
efficiently utilized to guide the policy search. By applying this similarity metric to consecutive gradient updates, our agent adaptively develops a sequence of control policies in a completely non-stationary setting with potentially bounded variance in the system dynamics.
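To make this similarity metric concrete, the sketch below computes the RKHS inner product between two functional gradient estimates, each represented non-parametrically as a kernel expansion over visited states. The Gaussian kernel, the state and action dimensionalities, and all names are illustrative assumptions rather than the exact construction used in our method.
\begin{verbatim}
# Sketch: RKHS inner product between two functional gradients, each written
# as a kernel expansion g(.) = sum_i a_i k(s_i, .) with vector-valued weights.
# The Gaussian kernel and all shapes below are illustrative assumptions.
import numpy as np

def gaussian_kernel(X, Y, lengthscale=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-0.5 * sq / lengthscale**2)

def rkhs_inner_product(A, S_A, B, S_B, lengthscale=1.0):
    # <g1, g2>_H = sum_{i,j} k(s_i, s'_j) * <a_i, b_j>
    K = gaussian_kernel(S_A, S_B, lengthscale)
    return float(np.sum(K * (A @ B.T)))

# Toy usage: two gradient estimates built from different mini-batches.
rng = np.random.default_rng(0)
S1, A1 = rng.normal(size=(32, 4)), rng.normal(size=(32, 2))
S2, A2 = rng.normal(size=(32, 4)), rng.normal(size=(32, 2))
print(rkhs_inner_product(A1, S1, A2, S2))
\end{verbatim}
A positive value indicates that the two gradient estimates still agree on an ascent direction; this is the quantity our adaptive scheme monitors.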
Moreover, we establish a theoretical convergence guarantee to the fixed-point solution by proving that the dynamic regret term becomes a sub-martingale under continual policy learning. For reward maximization, the policy gradients should move in an ascent direction throughout the optimization process in order to reach a global \textit{maximum}. Building on this idea, we consecutively measure the dot product between two successive gradient updates. If this dot product ceases to be positive, the subsequent gradients are deviating from an ascent direction. At that instant, our agent prematurely aborts action execution and upgrades itself to a new policy. In this way, we adaptively tune an action-execution window $H$ after which less important actions are not fully executed. Such adjustable and frequent policy updates yield a sequence of optimal control policies, a scheme that cannot be realized easily with deterministic policy search methods.
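A minimal sketch of the resulting control loop is given below. The agent keeps executing actions under the current policy and, as soon as the inner product between the latest and the previous functional gradient stops being positive, truncates the execution window and re-optimizes the policy. The environment interface, \texttt{estimate\_gradient}, \texttt{update\_policy}, and the \texttt{rkhs\_inner\_product} similarity oracle are assumed placeholders, not the exact implementation.
\begin{verbatim}
# Sketch of the adaptive action-execution window ("AdaptiveH"): abort
# execution and refresh the policy once consecutive functional gradients
# stop pointing in an ascent direction (non-positive inner product).
def run_adaptive_h(env, policy, estimate_gradient, update_policy,
                   rkhs_inner_product, max_horizon=200):
    state = env.reset()
    prev_grad, executed = None, 0
    for _ in range(max_horizon):
        action = policy(state)
        state, reward, done = env.step(action)   # assumed interface
        executed += 1
        grad = estimate_gradient(policy, state, action, reward)
        if prev_grad is not None and rkhs_inner_product(prev_grad, grad) <= 0.0:
            # Gradients deviate from an ascent direction: retreat early,
            # i.e. stop acting under the stale policy and update it now.
            policy = update_policy(policy, grad)
            prev_grad = None          # restart the similarity check
            continue
        prev_grad = grad
        if done:
            break
    return policy, executed
\end{verbatim}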
{\bf Contributions:}
\begin{itemize}
\item We propose a non-parametric policy gradient framework that can adaptively generate a sequence of control policies to handle dynamically varying real-world scenarios.
\item We provide both theoretical guarantees and experimental evidence that, by prematurely stopping action execution and frequently updating the policy through a dot-product-based similarity metric in the RKHS, our approach speeds up the learning process and enhances sample efficiency.
\item We also validate that our approach outperforms state-of-the-art parametric DRL algorithms such as TD3 and DDPG in non-stationary environments.
\end{itemize}
\begin{figure*}[htbp]
\includegraphics[width=1\textwidth]{figs/fig3_small_new.pdf}
\caption{(a) Sketch of optimal action selection by estimating the Adaptive-H execution window and the resulting sequence of policy updates. (b) Overview of the algorithmic steps that use the inner product to ensure a gradient ascent direction for maximizing expected rewards.}
\label{fig:overview}
\end{figure*}
Related Work
{\em Non-Stationary Reinforcement Learning}---
Very recently, a lot of attention has been drawn to addressing non-stationary environments through DRL. For instance, <|cite_start|> (Reference: Reinforcement learning algorithm for non-stationary environments: ) <|cite_end|> proposed an online Context-Q learning method, based on a change point detection method <|cite_start|> (Reference: Change point detection for compositional multivariate data: ) <|cite_end|>, that stores multiple policy distributions for the different contexts occurring in known model-change patterns.
Another work <|cite_start|> (Reference: Non-Stationary Markov Decision Processes, a Worst-Case Approach using Model-Based Reinforcement Learning, Extended version: This work tackles the problem of robust zero-shot planning in non-stationary stochastic environments. We study Markov Decision Processes (MDPs) evolving over time and consider Model-Based Reinforcement Learning algorithms in this setting. We make two hypotheses: 1) the environment evolves continuously with a bounded evolution rate; 2) a current model is known at each decision epoch but not its evolution. Our contribution can be presented in four points. 1) we define a specific class of MDPs that we call Non-Stationary MDPs (NSMDPs). We introduce the notion of regular evolution by making an hypothesis of Lipschitz-Continuity on the transition and reward functions w.r.t. time; 2) we consider a planning agent using the current model of the environment but unaware of its future evolution. This leads us to consider a worst-case method where the environment is seen as an adversarial agent; 3) following this approach, we propose the Risk-Averse Tree-Search (RATS) algorithm, a zero-shot Model-Based method similar to Minimax search; 4) we illustrate the benefits brought by RATS empirically and compare its performance with reference Model-Based algorithms.) <|cite_end|> investigated Non-Stationary MDPs (NS-MDP) by treating the non-stationary environment as an adversary to the learning agent and constructed a model-based tree-search algorithm similar to the traditional minimax search. In <|cite_start|> (Reference: Deep reinforcement learning amidst continual structured non-stationarity: As humans, our goals and our environment are persistently changing throughout our lifetime based on our experiences, actions, and internal and external drives. In contrast, typical reinforcement learning problem set-ups consider decision processes that are stationary across episodes. Can we develop reinforcement learning algorithms that can cope with the persistent change in the former, more realistic problem settings? While on-policy algorithms such as policy gradients in principle can be extended to non-stationary settings, the same cannot be said for more efficient off-policy algorithms that replay past experiences when learning. In this work, we formalize this problem setting, and draw upon ideas from the online learning and probabilistic inference literature to derive an off-policy RL algorithm that can reason about and tackle such lifelong non-stationarity. Our method leverages latent variable models to learn a representation of the environment from current and past experiences, and performs off-policy RL with this representation. We further introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.) <|cite_end|>, the authors built a probabilistic hierarchical model that represents the non-stationary Markov chain with a sequence of latent variables and used variational inference to derive a lower bound on the log-probability of the observed evidence.
{\em Non-parametric Reinforcement Learning}--- RKHS-based non-parametric reinforcement learning, with its rich associated function class, has previously been used to generate stochastic policies. For instance, <|cite_start|> (Reference: {Modelling Policies in MDPs in Reproducing Kernel Hilbert Space: We consider modelling policies for MDPs in (vector-valued) reproducing kernel Hilbert function spaces (RKHS). This enables us to work “non-parametrically” in a rich function class, and provides the ability to learn complex policies. We present a framework for performing gradientbased policy optimization in the RKHS, deriving the functional gradient of the return for our policy, which has a simple form and can be estimated efficiently. The policy representation naturally focuses on the relevant region of state space defined by the policy trajectories, and does not rely on a-priori defined basis points; this can be an advantage in high dimensions where suitable basis points may be difficult to define a-priori. The method is adaptive in the sense that the policy representation will naturally adapt to the complexity of the policy being modelled, which is achieved with standard efficient sparsification tools in an RKHS. We argue that finding a good kernel on states can be easier then remetrizing a high dimensional feature space. We demonstrate the approach on benchmark domains and a simulated quadrocopter navigation task.) <|cite_end|> proposed a compact policy representation within an RKHS that requires no re-metrization of an abstract feature space and developed a non-parametric actor-critic framework based on an efficient sparsification method in the RKHS.
The authors in <|cite_start|> (Reference: Stochastic Policy Gradient Ascent in Reproducing Kernel Hilbert Spaces: Reinforcement learning consists of finding policies that maximize an expected cumulative long-term reward in a Markov decision process with unknown transition probabilities and instantaneous rewards. In this paper, we consider the problem of finding such optimal policies while assuming they are continuous functions belonging to a reproducing kernel Hilbert space (RKHS). To learn the optimal policy we introduce a stochastic policy gradient ascent algorithm with three unique novel features: (i) The stochastic estimates of policy gradients are unbiased. (ii) The variance of stochastic gradients is reduced by drawing on ideas from numerical differentiation. (iii) Policy complexity is controlled using sparse RKHS representations. Novel feature (i) is instrumental in proving convergence to a stationary point of the expected cumulative reward. Novel feature (ii) facilitates reasonable convergence times. Novel feature (iii) is a necessity in practical implementations which we show can be done in a way that does not eliminate convergence guarantees. Numerical examples in standard problems illustrate successful learning of policies with low complexity representations which are close to stationary points of the expected cumulative reward.) <|cite_end|> and <|cite_start|> (Reference: Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes: Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities. In this paper we consider the problem of finding optimal policies assuming that they belong to a reproducing kernel Hilbert space (RKHS). To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy. A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed. Hence preventing these algorithms to be fully implemented online, which is a desirable property for systems that need to adapt to new tasks and/or environments in deployment. The main requirement for a policy gradient algorithm to work is that the estimate of the gradient at any point in time is an ascent direction for the initial value function. In this work we establish that indeed this is the case which enables to show the convergence of the online algorithm to the critical points of the initial value function. A numerical example shows the ability of our online algorithm to learn to solve a navigation and surveillance problem, in which an agent must loop between to goal locations. This example corroborates our theoretical findings about the ascent directions of subsequent stochastic gradients. It also shows how the agent running our online algorithm succeeds in learning to navigate, following a continuing cyclic trajectory that does not comply with the standard stationarity assumptions in the literature for non episodic training.) <|cite_end|> focused on devising a stochastic policy gradient method that plugs in unbiased stochastic policy gradients computed in an RKHS, and built a theoretical framework for convergence to a neighborhood of the critical points.
In general, recent parametric methods have focused on accurately describing control policies in a fixed parameter space, while non-parametric methods have aimed to build a framework for policy representation in an RKHS in order to exploit its rich function space. In our work, we employ the inner-product property of the RKHS to build an adaptive checkpoint-detection method for learning optimal control policies in a dynamically changing environment.
"<|reference_start|> Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. <|reference_end|>",
"<|reference_start|> Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs. <|reference_end|>",
"<|reference_start|> Policy Gradient for Continuing Tasks in Non-stationary Markov Decision Processes: Reinforcement learning considers the problem of finding policies that maximize an expected cumulative reward in a Markov decision process with unknown transition probabilities. In this paper we consider the problem of finding optimal policies assuming that they belong to a reproducing kernel Hilbert space (RKHS). To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy. A major drawback of policy gradient-type algorithms is that they are limited to episodic tasks unless stationarity assumptions are imposed. Hence preventing these algorithms to be fully implemented online, which is a desirable property for systems that need to adapt to new tasks and/or environments in deployment. The main requirement for a policy gradient algorithm to work is that the estimate of the gradient at any point in time is an ascent direction for the initial value function. In this work we establish that indeed this is the case which enables to show the convergence of the online algorithm to the critical points of the initial value function. A numerical example shows the ability of our online algorithm to learn to solve a navigation and surveillance problem, in which an agent must loop between to goal locations. This example corroborates our theoretical findings about the ascent directions of subsequent stochastic gradients. It also shows how the agent running our online algorithm succeeds in learning to navigate, following a continuing cyclic trajectory that does not comply with the standard stationarity assumptions in the literature for non episodic training. <|reference_end|>",
"<|reference_start|> Deep reinforcement learning amidst continual structured non-stationarity: As humans, our goals and our environment are persistently changing throughout our lifetime based on our experiences, actions, and internal and external drives. In contrast, typical reinforcement learning problem set-ups consider decision processes that are stationary across episodes. Can we develop reinforcement learning algorithms that can cope with the persistent change in the former, more realistic problem settings? While on-policy algorithms such as policy gradients in principle can be extended to non-stationary settings, the same cannot be said for more efficient off-policy algorithms that replay past experiences when learning. In this work, we formalize this problem setting, and draw upon ideas from the online learning and probabilistic inference literature to derive an off-policy RL algorithm that can reason about and tackle such lifelong non-stationarity. Our method leverages latent variable models to learn a representation of the environment from current and past experiences, and performs off-policy RL with this representation. We further introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift. <|reference_end|>"
] | [
0,
2,
9,
20
] | {"<|cite_1|>": "arxiv-54263", "<|cite_2|>": "ss-981833", "<|cite_3|>": "arxiv-83736", "<|cite_4|>": "arxiv-149723", "<|multi_cite_5_1|>": "arxiv-83736", "<|multi_cite_5_2|>": "ss-997710", "<|cite_6|>": "ss-1671105", "<|cite_7|>": "ss-984742", "<|cite_8|>": "arxiv-296834", "<|cite_9|>": "arxiv-296834", "<|cite_10|>": "arxiv-201082", "<|cite_11|>": "ss-984742", "<|cite_12|>": "arxiv-296834", "<|cite_13|>": "ss-984742", "<|cite_14|>": "ss-1671106", "<|cite_15|>": "ss-984742", "<|cite_16|>": "arxiv-296834", "<|cite_17|>": "ss-1539240", "<|cite_18|>": "ss-850097", "<|cite_19|>": "arxiv-201082", "<|cite_20|>": "ss-1175609", "<|cite_21|>": "ss-984742", "<|cite_22|>": "arxiv-167651", "<|cite_23|>": "arxiv-296834"} |
2006.09914 | <|paper_start|> Title: Learning Partially Known Stochastic Dynamics with Empirical PAC Bayes
Abstract: Learning Partially Known Stochastic Dynamics with Empirical PAC Bayes: Neural Stochastic Differential Equations model a dynamical environment with neural nets assigned to their drift and diffusion terms. The high expressive power of their nonlinearity comes at the expense of instability in the identification of the large set of free parameters. This paper presents a recipe to improve the prediction accuracy of such models in three steps: i) accounting for epistemic uncertainty by assuming probabilistic weights, ii) incorporation of partial knowledge on the state dynamics, and iii) training the resultant hybrid model by an objective derived from a PAC-Bayesian generalization bound. We observe in our experiments that this recipe effectively translates partial and noisy prior knowledge into an improved model fit.
Introduction
\label{sec:intro}
In many engineering applications, it is often easy to model dominant characteristics of a dynamical environment by a system of differential equations with a small set of state variables. In contrast, black-box machine learning methods are often highly accurate but less interpretable. Pushing the model towards high fidelity by capturing intricate properties of the environment, however, usually requires highly flexible, e.g.\ over-parameterized models. Fitting these models to data can, in turn, result in over-fitting and hence poor generalization ability due to their high capacity.
Our work combines the benefits of both types of models by hybrid modeling: We set up the learning task as a non-linear system identification problem with partially known system characteristics. It assumes to have access to a differential equation system that describes the dynamics of the target environment with low fidelity, e.g.\ by describing the vector field on a reduced dimensionality, by ignoring detailed models of some system components, or by avoiding certain dependencies for computational feasibility. We incorporate the ODE system provided by the domain expert into a non-linear system identification engine, which we choose to be a \emph{Bayesian Neural Stochastic Differential Equation}~(BNSDE) to cover a large scope of dynamical systems, resulting in a \emph{hybrid model}.\blfootnote{${}^*$ Equal contribution.}
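For concreteness, a minimal sketch of such a hybrid model is given below: the drift is the sum of a coarse, known ODE vector field (here a frictionless pendulum, purely as a placeholder for the expert-provided dynamics) and a neural correction term, the diffusion is a second network, and trajectories are simulated with Euler-Maruyama. The Bayesian treatment of the weights, the training objective, and all sizes are omitted or assumed for brevity.
\begin{verbatim}
# Sketch of a hybrid neural SDE: drift = known low-fidelity ODE + neural
# correction, diffusion = neural net, simulated with Euler-Maruyama.
# The pendulum prior drift and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

def prior_drift(x):
    # Coarse domain knowledge, e.g. a frictionless pendulum:
    # d(theta) = omega, d(omega) = -sin(theta).
    theta, omega = x[..., 0:1], x[..., 1:2]
    return torch.cat([omega, -torch.sin(theta)], dim=-1)

class HybridSDE(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.drift_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim))
        self.diff_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                      nn.Linear(hidden, dim), nn.Softplus())

    def drift(self, x):
        return prior_drift(x) + self.drift_net(x)   # hybrid drift

    def diffusion(self, x):
        return self.diff_net(x)                      # diagonal, kept positive

    def simulate(self, x0, steps=50, dt=0.01):
        xs, x = [x0], x0
        for _ in range(steps):
            noise = torch.randn_like(x) * dt ** 0.5
            x = x + self.drift(x) * dt + self.diffusion(x) * noise
            xs.append(x)
        return torch.stack(xs, dim=1)                # (batch, steps+1, dim)

model = HybridSDE()
print(model.simulate(torch.randn(8, 2)).shape)       # torch.Size([8, 51, 2])
\end{verbatim}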
We propose a new algorithm for stable and effective training of such a hybrid BNSDE that combines the strengths of two statistical approaches: i) Bayesian model selection <|cite_start|> (Reference: {Gaussian processes for machine learning: Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations.13,78,31 The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to show up precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.) <|cite_end|>, and ii) Probably Approximately Correct~(PAC) Bayesian bounds <|cite_start|> (Reference: PAC-bayesian model averaging: PAC-Bayesian learning methods combine the informative priors of Bayesian methods with distribution-free PAC guarantees. Building on earlier methods for PAC-Bayesian model selection, this paper presents a method for PACBayesian model averaging. The method constructs an optimized weighted mixture of concepts analogous to a Bayesian posterior distribution. Although the main result is stated for bounded loss, a preliminary analysis for unbounded loss is also given.) <|cite_end|> <|cite_start|> (Reference: {PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification: Approximate Bayesian Gaussian process (GP) classification techniques are powerful non-parametric learning methods, similar in appearance and performance to support vector machines. Based on simple probabilistic models, they render interpretable results and can be embedded in Bayesian frameworks for model selection, feature selection, etc. In this paper, by applying the PAC-Bayesian theorem of McAllester (1999a), we prove distribution-free generalisation error bounds for a wide range of approximate Bayesian GP classification techniques. We also provide a new and much simplified proof for this powerful theorem, making use of the concept of convex duality which is a backbone of many machine learning techniques. We instantiate and test our bounds for two particular GPC techniques, including a recent sparse method which circumvents the unfavourable scaling of standard GP algorithms. As is shown in experiments on a real-world task, the bounds can be very tight for moderate training sample sizes. 
To the best of our knowledge, these results provide the tightest known distribution-free error bounds for approximate Bayesian GPC methods, giving a strong learning-theoretical justification for the use of these techniques.) <|cite_end|>.
We improve the theoretical links between these two approaches <|cite_start|> (Reference: PAC-Bayesian Theory Meets Bayesian Inference: We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation to the Bayesian Occam's razor criteria, under the assumption that the data is generated by an i.i.d distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.) <|cite_end|> by demonstrating how they can co-operate \emph{during} training. To this end, we propose a novel training objective that suits SDE inference and derive a PAC-Bayesian generalization bound. Further, we provide a proof that this bound is upper bounded by the marginal likelihood of the BNSDE hyperparameters and a complexity penalizer. Gradients of this upper bound are {\it tied} to the actual PAC bound, hence tightening the upper bound also tightens the PAC bound. Consequently, optimizing this bound amounts to Empirical Bayes stabilized by a regularizer developed from first principles. We refer to using this objective for training as {\it Empirical PAC-Bayes}.
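For orientation, a generic PAC-Bayesian bound of the McAllester type takes the following form; this is the standard statement for a loss bounded in $[0,1]$ and an i.i.d.\ sample of size $N$, not the exact bound we derive for SDE trajectories. With probability at least $1-\delta$, simultaneously for all posteriors $Q$ over hypotheses,
\begin{equation*}
\mathbb{E}_{h \sim Q}\big[R(h)\big] \;\leq\; \mathbb{E}_{h \sim Q}\big[\hat{R}_N(h)\big] \;+\; \sqrt{\frac{D_{KL}\big(Q \,\|\, P\big) + \log\frac{2\sqrt{N}}{\delta}}{2N}},
\end{equation*}
where $R$ and $\hat{R}_N$ denote the true and empirical risks and $P$ is a data-independent prior. Training with such a bound trades off empirical fit against the KL complexity term, which is the same mechanism our objective exploits.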
We demonstrate that our method can translate coarse descriptions of the actual underlying dynamics into a consistent forecasting accuracy increase. We first show the necessity of each of the multiple steps that comprise our method in an ablation study. Finally, we demonstrate in a real-world motion capture modelling task, that our method outperforms black-box system identification approaches <|cite_start|> (Reference: Neural Ordinary Differential Equations: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.) <|cite_end|> <|cite_start|> (Reference: Deep learning with differential Gaussian process flows: We propose a novel deep learning paradigm of differential flows that learn a stochastic differential equation transformations of inputs prior to a standard classification or regression function. The key property of differential Gaussian processes is the warping of inputs through infinitely deep, but infinitesimal, differential fields, that generalise discrete layers into a dynamical system. We demonstrate state-of-the-art results that exceed the performance of deep Gaussian processes and neural networks) <|cite_end|> <|cite_start|> (Reference: Differential Bayesian Neural Nets: Neural Ordinary Differential Equations (N-ODEs) are a powerful building block for learning systems, which extend residual networks to a continuous-time dynamical system. We propose a Bayesian version of N-ODEs that enables well-calibrated quantification of prediction uncertainty, while maintaining the expressive power of their deterministic counterpart. We assign Bayesian Neural Nets (BNNs) to both the drift and the diffusion terms of a Stochastic Differential Equation (SDE) that models the flow of the activation map in time. We infer the posterior on the BNN weights using a straightforward adaptation of Stochastic Gradient Langevin Dynamics (SGLD). We illustrate significantly improved stability on two synthetic time series prediction tasks and report better model fit on UCI regression benchmarks with our method when compared to its non-Bayesian counterpart.) <|cite_end|> and alternative hybridization schemes that incorporate second-order Newtonian mechanics <|cite_start|> (Reference: ODE$^2$VAE: Deep generative second order ODEs with Bayesian neural networks: We present Ordinary Differential Equation Variational Auto-Encoder (ODE$^2$VAE), a latent second order ODE model for high-dimensional sequential data. Leveraging the advances in deep generative models, ODE$^2$VAE can simultaneously learn the embedding of high dimensional trajectories and infer arbitrarily complex continuous-time latent dynamics. 
Our model explicitly decomposes the latent space into momentum and position components and solves a second order ODE system, which is in contrast to recurrent neural network (RNN) based time series models and recently proposed black-box ODE techniques. In order to account for uncertainty, we propose probabilistic latent ODE dynamics parameterized by deep Bayesian neural networks. We demonstrate our approach on motion capture, image rotation and bouncing balls datasets. We achieve state-of-the-art performance in long term motion prediction and imputation tasks.) <|cite_end|>.
Related Work
\label{sec:related-work}
\paragraph{Empirical Bayes as PAC Learning.} <|cite_start|> (Reference: PAC-Bayesian Theory Meets Bayesian Inference: We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation to the Bayesian Occam's razor criteria, under the assumption that the data is generated by an i.i.d distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.) <|cite_end|> propose a learnable PAC-Bayesian bound that provides generalization guarantees as a function of a marginal log-likelihood. Our method differs from this work in two main lines. First, <|cite_start|> (Reference: PAC-Bayesian Theory Meets Bayesian Inference: We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation to the Bayesian Occam's razor criteria, under the assumption that the data is generated by an i.i.d distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.) <|cite_end|> define risk as $-\log p({\bf Y}|{\bf H}) \in (-\infty, +\infty)$ and compensate for the unboundedness by either truncating the support of the likelihood function or introducing assumptions on the data distribution, such as sub-Gaussian or sub-Gamma. Our risk defined in~\eqref{eq:true_risk} assumes uniform boundedness, yet can be incorporated into a PAC-Bayesian bound without further restrictions. Second, <|cite_start|> (Reference: PAC-Bayesian Theory Meets Bayesian Inference: We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood. This provides an alternative explanation to the Bayesian Occam's razor criteria, under the assumption that the data is generated by an i.i.d distribution. Moreover, as the negative log-likelihood is an unbounded loss function, we motivate and propose a PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that our approach is sound on classical Bayesian linear regression tasks.) <|cite_end|>'s bound is an unparameterized rescaling of the marginal log-likelihood. Hence, it is not linked to a capacity penalizer, which can be used at {\it training time} for regularization. Applying this method to hybrid sequence modelling boils down to performing plain Empirical Bayes, i.e.\ \emph{E-Bayes} in our experiments.
\paragraph{Differential GPs.} <|cite_start|> (Reference: Deep learning with differential Gaussian process flows: We propose a novel deep learning paradigm of differential flows that learn a stochastic differential equation transformations of inputs prior to a standard classification or regression function. The key property of differential Gaussian processes is the warping of inputs through infinitely deep, but infinitesimal, differential fields, that generalise discrete layers into a dynamical system. We demonstrate state-of-the-art results that exceed the performance of deep Gaussian processes and neural networks) <|cite_end|> model the dynamics of the activation maps of a {\it feed-forward} learner by the predictive distribution of a GP. This method allocates the mean of a GP as the drift and covariance as the diffusion. It infers the resultant model using variational inference. While direct application of this method to time series modeling is not straightforward, we represent it in our experiments by sticking to our generic non-linear BNSDE design in~\eqref{eq:bbsde}, and inferring it by maximizing the ELBO: $\mathcal{L}(\phi) = \mathbb{E}_{{\bf H}, \theta}\big[ \log p({\bf Y}|{\bf H})\big]-D_{KL}\big( p_{\phi}(\theta)\,||\,p(\theta)\big),$
applying the local reparameterization trick on $\theta$.
Although variational inference can be seen from a PAC-perspective by choosing the log-likelihood as the loss <|cite_start|> (Reference: Generalized variational inference: This paper introduces a generalized representation of Bayesian inference. It is derived axiomatically, recovering existing Bayesian methods as special cases. We then use it to prove that variational inference (VI) based on the Kullback-Leibler Divergence with a variational family Q produces the uniquely optimal Q-constrained approximation to the exact Bayesian inference problem. Surprisingly, this implies that standard VI dominates any other Q-constrained approximation to the exact Bayesian inference problem. This means that alternative Q-constrained approximations such as VI minimizing other divergences and Expectation Propagation can produce better posteriors than VI only by implicitly targeting more appropriate Bayesian inference problems. Inspired by this, we introduce Generalized Variational Inference (GVI), a modular approach for instead solving such alternative inference problems explicitly. We explore some applications of GVI, including robustness and better marginals. Lastly, we derive black box GVI and apply it to Bayesian Neural Networks and Deep Gaussian Processes, where GVI can comprehensively outperform competing methods.) <|cite_end|>, the ELBO does not account for the deviation of variational posterior over latent dynamics from the prior latent dynamics. We refer to this baseline in the experiments as \emph{D-BNN (VI)}. The approximate posterior design here closely follows the PR-SSM approach <|cite_start|> (Reference: {Probabilistic Recurrent State-Space Models: State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification. Deterministic versions of SSMs (e.g. LSTMs) proved extremely successful in modeling complex time series data. Fully probabilistic SSMs, however, are often found hard to train, even for smaller problems. To overcome this limitation, we propose a novel model formulation and a scalable training algorithm based on doubly stochastic variational inference and Gaussian processes. In contrast to existing work, the proposed variational approximation allows one to fully capture the latent state temporal correlations. These correlations are the key to robust training. The effectiveness of the proposed PR-SSM is evaluated on a set of real-world benchmark datasets in comparison to state-of-the-art probabilistic model learning methods. Scalability and robustness are demonstrated on a high dimensional problem.) <|cite_end|>, which represents state of the art in state-space modelling.
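As a concrete illustration of this baseline's objective, the sketch below evaluates a single-sample Monte Carlo estimate of the ELBO for a factorized Gaussian weight posterior with a standard normal prior. For readability it reparameterizes the weights globally rather than using the local reparameterization trick, and the likelihood function and shapes are assumptions of the example rather than the exact setup used in the experiments.
\begin{verbatim}
# Sketch of the ELBO used by the D-BNN (VI) baseline: a single-sample
# Monte Carlo estimate of E_q[log p(Y|H)] minus the KL from the Gaussian
# weight posterior q_phi(theta) to a standard normal prior.
import torch

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over all weights.
    return 0.5 * torch.sum(torch.exp(log_var) + mu ** 2 - 1.0 - log_var)

def elbo(mu, log_var, log_likelihood_fn, data):
    # Single-sample reparameterized estimate: theta = mu + sigma * eps.
    eps = torch.randn_like(mu)
    theta = mu + torch.exp(0.5 * log_var) * eps
    return log_likelihood_fn(theta, data) - gaussian_kl(mu, log_var)

# Training maximizes the ELBO, e.g. with Adam on (mu, log_var):
#   loss = -elbo(mu, log_var, log_likelihood_fn, batch); loss.backward()
\end{verbatim}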
\paragraph{Differential BNNs with SGLD.} The learning algorithm of <|cite_start|> (Reference: Differential Bayesian Neural Nets: Neural Ordinary Differential Equations (N-ODEs) are a powerful building block for learning systems, which extend residual networks to a continuous-time dynamical system. We propose a Bayesian version of N-ODEs that enables well-calibrated quantification of prediction uncertainty, while maintaining the expressive power of their deterministic counterpart. We assign Bayesian Neural Nets (BNNs) to both the drift and the diffusion terms of a Stochastic Differential Equation (SDE) that models the flow of the activation map in time. We infer the posterior on the BNN weights using a straightforward adaptation of Stochastic Gradient Langevin Dynamics (SGLD). We illustrate significantly improved stability on two synthetic time series prediction tasks and report better model fit on UCI regression benchmarks with our method when compared to its non-Bayesian counterpart.) <|cite_end|> shares our BNSDE modeling assumptions, however, it uses Stochastic Gradient Langevin Dynamics (SGLD) to infer $\theta$. The algorithm is equivalent to performing MAP estimation of the model parameters in~\eqref{eq:bbsde} while distorting the gradient updates with decaying normal noise that also determines the learning rate.
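In its generic form, one SGLD step perturbs a gradient step on the log-posterior with Gaussian noise whose variance matches the step size; the sketch below shows this vanilla update, not the specific adaptation used in that work.
\begin{verbatim}
# Generic SGLD step: a gradient step on the log-posterior, perturbed with
# Gaussian noise whose variance equals the step size.
import torch

def sgld_step(theta, grad_log_posterior, step_size):
    # grad_log_posterior: gradient of log p(theta) + log p(Y | theta) at theta
    # (with the likelihood term rescaled appropriately when mini-batching).
    noise = torch.randn_like(theta) * step_size ** 0.5
    return theta + 0.5 * step_size * grad_log_posterior + noise
\end{verbatim}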
\paragraph{Black-box identification of dynamic systems.} There are various approaches to identify a dynamical system that differ in the model class used for fitting the right-hand side of the differential equation and may also allow for transitional noise \citep[e.g.][]{brunton2016discovering,durstewitz2016state}. These approaches could be incorporated into ours, using their transition likelihood and prior over parameters. Our black-box neural SDE can be seen as one instance of such a black-box identification of dynamical systems ({\em{E-Bayes}}). As we are mainly interested in incorporating prior knowledge into such black-box models, we chose one such competitor <|cite_start|> (Reference: Deep learning with differential Gaussian process flows: We propose a novel deep learning paradigm of differential flows that learn a stochastic differential equation transformations of inputs prior to a standard classification or regression function. The key property of differential Gaussian processes is the warping of inputs through infinitely deep, but infinitesimal, differential fields, that generalise discrete layers into a dynamical system. We demonstrate state-of-the-art results that exceed the performance of deep Gaussian processes and neural networks) <|cite_end|>,
with reported results on the CMU Motion capture data set (Tab.~\ref{tab:cmu}). <|paper_end|> | [
"<|reference_start|> {PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification: Approximate Bayesian Gaussian process (GP) classification techniques are powerful non-parametric learning methods, similar in appearance and performance to support vector machines. Based on simple probabilistic models, they render interpretable results and can be embedded in Bayesian frameworks for model selection, feature selection, etc. In this paper, by applying the PAC-Bayesian theorem of McAllester (1999a), we prove distribution-free generalisation error bounds for a wide range of approximate Bayesian GP classification techniques. We also provide a new and much simplified proof for this powerful theorem, making use of the concept of convex duality which is a backbone of many machine learning techniques. We instantiate and test our bounds for two particular GPC techniques, including a recent sparse method which circumvents the unfavourable scaling of standard GP algorithms. As is shown in experiments on a real-world task, the bounds can be very tight for moderate training sample sizes. To the best of our knowledge, these results provide the tightest known distribution-free error bounds for approximate Bayesian GPC methods, giving a strong learning-theoretical justification for the use of these techniques. <|reference_end|>",
"<|reference_start|> Neural Ordinary Differential Equations: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models. <|reference_end|>",
"<|reference_start|> ODE$^2$VAE: Deep generative second order ODEs with Bayesian neural networks: We present Ordinary Differential Equation Variational Auto-Encoder (ODE$^2$VAE), a latent second order ODE model for high-dimensional sequential data. Leveraging the advances in deep generative models, ODE$^2$VAE can simultaneously learn the embedding of high dimensional trajectories and infer arbitrarily complex continuous-time latent dynamics. Our model explicitly decomposes the latent space into momentum and position components and solves a second order ODE system, which is in contrast to recurrent neural network (RNN) based time series models and recently proposed black-box ODE techniques. In order to account for uncertainty, we propose probabilistic latent ODE dynamics parameterized by deep Bayesian neural networks. We demonstrate our approach on motion capture, image rotation and bouncing balls datasets. We achieve state-of-the-art performance in long term motion prediction and imputation tasks. <|reference_end|>",
"<|reference_start|> Differential Bayesian Neural Nets: Neural Ordinary Differential Equations (N-ODEs) are a powerful building block for learning systems, which extend residual networks to a continuous-time dynamical system. We propose a Bayesian version of N-ODEs that enables well-calibrated quantification of prediction uncertainty, while maintaining the expressive power of their deterministic counterpart. We assign Bayesian Neural Nets (BNNs) to both the drift and the diffusion terms of a Stochastic Differential Equation (SDE) that models the flow of the activation map in time. We infer the posterior on the BNN weights using a straightforward adaptation of Stochastic Gradient Langevin Dynamics (SGLD). We illustrate significantly improved stability on two synthetic time series prediction tasks and report better model fit on UCI regression benchmarks with our method when compared to its non-Bayesian counterpart. <|reference_end|>"
] | [
2,
4,
7,
14
] | {"<|cite_1|>": "ss-797835", "<|multi_cite_2_1|>": "ss-1045872", "<|multi_cite_2_2|>": "ss-1323190", "<|cite_3|>": "arxiv-98815", "<|multi_cite_4_1|>": "arxiv-163082", "<|multi_cite_4_2|>": "arxiv-175656", "<|multi_cite_4_3|>": "arxiv-237298", "<|cite_5|>": "ss-1244619", "<|cite_9|>": "arxiv-98815", "<|cite_10|>": "arxiv-98815", "<|cite_11|>": "arxiv-98815", "<|cite_12|>": "arxiv-175656", "<|cite_6|>": "ss-1277374", "<|cite_7|>": "ss-1461502", "<|cite_13|>": "arxiv-237298", "<|cite_8|>": "arxiv-175656"} |
1204.4166 | <|paper_start|> Title: Message passing with relaxed moment matching
Abstract: Message passing with relaxed moment matching: Bayesian learning is often hampered by large computational expense. As a powerful generalization of popular belief propagation, expectation propagation (EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP can be sensitive to outliers and suffer from divergence for difficult cases. To address this issue, we propose a new approximate inference approach, relaxed expectation propagation (REP). It relaxes the moment matching requirement of expectation propagation by adding a relaxation factor into the KL minimization. We penalize this relaxation with an $l_1$ penalty. As a result, when the two distributions in the relaxed KL divergence are similar, the relaxation factor will be penalized to zero and, therefore, we obtain the original moment matching; in the presence of outliers, these two distributions are significantly different and the relaxation factor will be used to reduce the contribution of the outlier. Based on this penalized KL minimization, REP is robust to outliers and can greatly improve the posterior approximation quality over EP. To examine the effectiveness of REP, we apply it to Gaussian process classification, a task known to be well suited to EP. Our classification results on synthetic and UCI benchmark datasets demonstrate significant improvement of REP over EP and Power EP---in terms of algorithmic stability, estimation accuracy and predictive performance.
Introduction
Bayesian learning provides a principled framework for modeling complex systems and making predictions.
A critical component of Bayesian learning is the computation of posterior distributions that represent estimation uncertainty. However, the exact computation is often so expensive that it has become a bottleneck for practical applications of Bayesian learning. To address this challenge, a variety of approximate inference methods has been developed to speed up the computation <|cite_start|> (Reference: Tutorial on variational approximation methods: This chapter contains sections titled: Introduction, Examples of variational methods, A brief introduction to graphical models, Variational mean field method, Structured variational approach, Local variational approach, Parameter estimation with variational methods, Variational Bayesian methods, Discussion, References) <|cite_end|> <|cite_start|> (Reference: {A Family of Algorithms for Approximate Bayesian Inference: One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, “Expectation Propagation,” unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation.
Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction—propagating richer belief states which incorporate correlations between variables.
This framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)) <|cite_end|> <|cite_start|> (Reference: Expectation Consistent Approximate Inference: We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood as replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distributions which are made consistent on a set of moments and encode different features of the original intractable distribution. In this way we are able to use Gaussian approximations for models with discrete or bounded variables which allow us to include non-trivial correlations. These are neglected in many other methods. We test the framework on toy benchmark problems for binary variables on fully connected graphs and 2D grids and compare with other methods, such as loopy belief propagation. Good performance is already achieved by using single nodes as tractable substructures. Significant improvements are obtained when a spanning tree is used instead.) <|cite_end|> <|cite_start|> (Reference: Graphical models, Exponential families, and variational inference: The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.) <|cite_end|>. 
As a representative approximate inference method, expectation propagation <|cite_start|> (Reference: {A Family of Algorithms for Approximate Bayesian Inference: One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, “Expectation Propagation,” unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation.
Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction—propagating richer belief states which incorporate correlations between variables.
This framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)) <|cite_end|> generalizes the popular belief propagation algorithm, allows us to use structured approximations
and handles both discrete and continuous posterior distributions. EP has been shown to significantly reduce computational cost while maintaining high approximation accuracy; for example, <|cite_start|> (Reference: Assessing approximate inference for binary Gaussian process classification: Gaussian process priors can be used to define flexible, probabilistic classification models. Unfortunately exact Bayesian inference is analytically intractable and various approximation techniques have been proposed. In this work we review and compare Laplace's method and Expectation Propagation for approximate Bayesian inference in the binary Gaussian process classification model. We present a comprehensive comparison of the approximations, their predictive performance and marginal likelihood estimates to results obtained by MCMC sampling. We explain theoretically and corroborate empirically the advantages of Expectation Propagation compared to Laplace's method.) <|cite_end|> have demonstrated that, for Gaussian process (GP) classification, EP can provide accurate approximation to predictive posteriors.
Despite its success in many applications, EP can be sensitive to outliers in the observations and can suffer from divergence when the exact distribution is not close to the approximating family used by EP.
This stems from the fact that EP approximates each factor in the model by a simpler form, known as messages, and iteratively refines the messages (See Section 2). Each message refinement is based on moment matching, which minimizes the Kullback-Leibler (KL) divergence between old and new beliefs.
The messages are refined in a distributed fashion---resulting in efficient inference on a graphical model.
But when the approximating family cannot fit the exact posterior well---such as
in the presence of outliers---the message passing algorithm can suffer from divergence and give poor approximation quality.
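For reference, the moment-matching step described here can be written as follows for an exponential-family approximation; this is the standard textbook form of the EP update rather than anything specific to our derivation. When refining the message for factor $f_i$, EP forms the tilted distribution $\hat{p}_i(\mathbf{x}) \propto f_i(\mathbf{x})\, q^{\setminus i}(\mathbf{x})$, where $q^{\setminus i}$ is the current approximation with the $i$-th message removed, and then sets
\begin{equation*}
q^{\mathrm{new}} \;=\; \operatorname*{argmin}_{q \in \mathcal{Q}} \; \mathrm{KL}\big(\hat{p}_i \,\|\, q\big),
\end{equation*}
which, for an exponential family $\mathcal{Q}$, is equivalent to matching the expected sufficient statistics (moments) of $q^{\mathrm{new}}$ to those of $\hat{p}_i$. The relaxation introduced below loosens exactly this matching requirement.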
We can force EP to converge by using the CCCP algorithm <|cite_start|> (Reference: Cccp algorithms to minimize the bethe and kikuchi free energies: Convergent alternatives to belief propagation: This article introduces a class of discrete iterative algorithms that are provably convergent alternatives to belief propagation (BP) and generalized belief propagation (GBP). Our work builds on recent results by Yedidia, Freeman, and Weiss (2000), who showed that the fixed points of BP and GBP algorithms correspond to extrema of the Bethe and Kikuchi free energies, respectively. We obtain two algorithms by applying CCCP to the Bethe and Kikuchi free energies, respectively (CCCP is a procedure, introduced here, for obtaining discrete iterative algorithms by decomposing a cost function into a concave and a convex part). We implement our CCCP algorithms on two- and three-dimensional spin glasses and compare their results to BP and GBP. Our simulations show that the CCCP algorithms are stable and converge very quickly (the speed of CCCP is similar to that of BP and GBP). Unlike CCCP, BP will often not converge for these problems (GBP usually, but not always, converges). The results found by CCCP applied to the Bethe or Kikuchi free energies are equivalent, or slightly better than, those found by BP or GBP, respectively (when BP and GBP converge). Note that for these, and other problems, BP and GBP give very accurate results (see Yedidia et al., 2000), and failure to converge is their major error mode. Finally, we point out that our algorithms have a large range of inference and learning applications.) <|cite_end|> <|cite_start|> (Reference: {Approximate inference techniques with expectation constraints: This paper discusses inference problems in probabilistic graphical models that often occur in a machine learning setting. In particular it presents a unified view of several recently proposed approximation schemes. Expectation consistent approximations and expectation propagation are both shown to be related to Bethe free energies with weak consistency constraints, i.e. free energies where local approximations are only required to agree on certain statistics instead of full marginals.) <|cite_end|>. But it is slower than the message passing updates.
Also, according to <|cite_start|> (Reference: {A Family of Algorithms for Approximate Bayesian Inference: One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, “Expectation Propagation,” unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation.
Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction—propagating richer belief states which incorporate correlations between variables.
This framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)) <|cite_end|>, EP diverges for a good reason---indicating a poor approximating family or a poor energy function used by EP.
To address this issue, we propose a new approximate inference algorithm, Relaxed Expectation Propagation (REP). In REP, we introduce a relaxation factor $r$ in the KL minimization used by EP (see Section 3) and penalize this relaxation factor. Because of this penalization, when the factor involved in the KL minimization is close to the current approximation, REP reduces to EP; when the factor is an outlier, the relaxation is used to stabilize the message passing by relaxing the moment matching constraint. Regardless of the amount of outliers in the data, REP converges in all of our experiments. To better understand REP, we also present the primal energy functions in Section~\ref{REP}. They differ from the EP energy function or the equivalent Bethe-like energy function <|cite_start|> (Reference: {Approximate inference techniques with expectation constraints: This paper discusses inference problems in probabilistic graphical models that often occur in a machine learning setting. In particular it presents a unified view of several recently proposed approximation schemes. Expectation consistent approximations and expectation propagation are both shown to be related to Bethe free energies with weak consistency constraints, i.e. free energies where local approximations are only required to agree on certain statistics instead of full marginals.) <|cite_end|> by the use of relaxation factors.
To examine the performance of REP, in Section 5, we use it to train Gaussian process classification models for which EP is known to be a good choice for approximate inference <|cite_start|> (Reference: Assessing approximate inference for binary Gaussian process classification: Gaussian process priors can be used to define flexible, probabilistic classification models. Unfortunately exact Bayesian inference is analytically intractable and various approximation techniques have been proposed. In this work we review and compare Laplace's method and Expectation Propagation for approximate Bayesian inference in the binary Gaussian process classification model. We present a comprehensive comparison of the approximations, their predictive performance and marginal likelihood estimates to results obtained by MCMC sampling. We explain theoretically and corroborate empirically the advantages of Expectation Propagation compared to Laplace's method.) <|cite_end|>.
In Section 7, we report experimental results on synthetic and UCI benchmark datasets, demonstrating that REP consistently outperforms EP and Power EP---in terms of algorithmic stability, estimation accuracy, and predictive performance.
\vspace{-0.15in} <|paper_end|> | [
"<|reference_start|> {A Family of Algorithms for Approximate Bayesian Inference: One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, “Expectation Propagation,” unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. \nLoopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction—propagating richer belief states which incorporate correlations between variables. \nThis framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.) <|reference_end|>",
"<|reference_start|> Graphical models, Exponential families, and variational inference: The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models. <|reference_end|>",
"<|reference_start|> {A Family of Algorithms for Approximate Bayesian Inference: One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, “Expectation Propagation,” unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. \nLoopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction—propagating richer belief states which incorporate correlations between variables. \nThis framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.) <|reference_end|>",
"<|reference_start|> {Approximate inference techniques with expectation constraints: This paper discusses inference problems in probabilistic graphical models that often occur in a machine learning setting. In particular it presents a unified view of several recently proposed approximation schemes. Expectation consistent approximations and expectation propagation are both shown to be related to Bethe free energies with weak consistency constraints, i.e. free energies where local approximations are only required to agree on certain statistics instead of full marginals. <|reference_end|>"
] | [
1,
3,
8,
9
] | {"<|multi_cite_1_1|>": "ss-2012376", "<|multi_cite_1_2|>": "ss-1058175", "<|multi_cite_1_3|>": "ss-992960", "<|multi_cite_1_4|>": "ss-1035235", "<|cite_2|>": "ss-1058175", "<|cite_6|>": "ss-1079922", "<|multi_cite_3_1|>": "ss-1008265", "<|multi_cite_3_2|>": "ss-1512007", "<|cite_7|>": "ss-1058175", "<|cite_4|>": "ss-1512007", "<|cite_5|>": "ss-1079922"} |
2002.11573 | <|paper_start|> Title: Efficient reinforcement learning control for continuum robots based on Inexplicit Prior Knowledge
Abstract: Efficient reinforcement learning control for continuum robots based on Inexplicit Prior Knowledge: Compared to the rigid robots that are generally studied in reinforcement learning, the physical characteristics of some sophisticated robots such as soft or continuum robots are far more complicated. Moreover, recent reinforcement learning methods are data-inefficient and cannot be deployed directly on a real robot without simulation. In this paper, we propose an efficient reinforcement learning method based on inexplicit prior knowledge in response to such problems. We first corroborate the method in simulation and then employ it directly in the real world. Using our method, we achieve active visual tracking and distance maintenance for a tendon-driven robot, capabilities that will be critical in minimally invasive procedures. Codes are available at https://github.com/Skylark0924/TendonTrack.
Introduction
For decades, researchers have made massive efforts to make machines intelligent, in the expectation of relieving humans from repetitive, dangerous, and heavy labor.
In traditional robotics, control is realized by establishing kinematic and dynamic models in the form of transformation matrices. This method has achieved excellent results for conventional robots with discrete rigid links but becomes difficult to implement when dealing with soft robots such as continuum robots. In the traditional method, several subjective assumptions have to be made to obtain control of continuum manipulators, leading to deviations from actual conditions and inaccurate results <|cite_start|> (Reference: Kinematics for multisection continuum robots: We introduce a new method for synthesizing kinematic relationships for a general class of continuous backbone, or continuum , robots. The resulting kinematics enable real-time task and shape control by relating workspace (Cartesian) coordinates to actuator inputs, such as tendon lengths or pneumatic pressures, via robot shape coordinates. This novel approach, which carefully considers physical manipulator constraints, avoids artifacts of simplifying assumptions associated with previous approaches, such as the need to fit the resulting solutions to the physical robot. It is applicable to a wide class of existing continuum robots and models extension, as well as bending, of individual sections. In addition, this approach produces correct results for orientation, in contrast to some previously published approaches. Results of real-time implementations on two types of spatial multisection continuum manipulators are reported.) <|cite_end|>. Even so, the kinematic and dynamic models of continuum robots are often described in the form of nonlinear partial differential equations, which makes control even more complex.
Ever since reinforcement learning (RL) theory was proposed, developers have been trying to apply it to robotics. By introducing RL methods, trial-and-error learning has enhanced the traditional methods of rigid robotics <|cite_start|> (Reference: QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation: In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.) <|cite_end|> <|cite_start|> (Reference: Sim-to-Real Transfer of Robotic Control with Dynamics Randomization: Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this "reality gap". By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.) <|cite_end|>. However, applying RL theory to continuum robots still faces considerable obstacles.
To the best of our knowledge, only a few recent studies have applied RL to control continuum robots. In Thuruthel et al.'s research <|cite_start|> (Reference: {Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators: Dynamic control of soft robotic manipulators is an open problem yet to be well explored and analyzed. Most of the current applications of soft robotic manipulators utilize static or quasi-dynamic controllers based on kinematic models or linearity in the joint space. However, such approaches are not truly exploiting the rich dynamics of a soft-bodied system. In this paper, we present a model-based policy learning algorithm for closed-loop predictive control of a soft robotic manipulator. The forward dynamic model is represented using a recurrent neural network. The closed-loop policy is derived using trajectory optimization and supervised learning. The approach is verified first on a simulated piecewise constant strain model of a cable driven under-actuated soft manipulator. Furthermore, we experimentally demonstrate on a soft pneumatically actuated manipulator how closed-loop control policies can be derived that can accommodate variable frequency control and unmodeled external loads.) <|cite_end|>, an accurate Vicon tracking system is used to realize closed-loop control from a third-person perspective. However, such devices are not available in most application scenarios of continuum robots. Furthermore, data-inefficiency is the major drawback of RL algorithms, especially for a non-stationary continuum robot, which makes learning directly on the real-world robot impractical.
In this paper, we focus on automatic kinematics learning for complex robotic systems and end-to-end predictive control using visual servoing from a first-person perspective. The primary problem we tackle is the data-efficiency of learning on complex and non-stationary real-world robots. We use inexplicit prior knowledge to speed up the convergence of the learning process. Meanwhile, the ability to explore is still guaranteed by an auto-adjusted exploitation coefficient.
To evaluate the proposed method empirically, we first build a simulator in \textit{MuJoCo} <|cite_start|> (Reference: {MuJoCo: A Physics Engine for Model-Based Control: We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.) <|cite_end|> and then apply the method directly to a real-world continuum robot.
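As a rough illustration of such a simulation setup, a minimal control loop with the mujoco-py bindings might look like the sketch below. The model file name and the random control commands are placeholders, not the actual TendonTrack assets or policy.
\begin{verbatim}
import numpy as np
import mujoco_py

# Load a tendon-driven manipulator model; "tendon_robot.xml" is a placeholder name.
model = mujoco_py.load_model_from_path("tendon_robot.xml")
sim = mujoco_py.MjSim(model)

for step in range(1000):
    # Apply tendon actuation commands (placeholder: small random controls).
    sim.data.ctrl[:] = 0.1 * np.random.randn(model.nu)
    sim.step()
    # Read back the state that a learned dynamics model would be trained on.
    state = np.concatenate([sim.data.qpos, sim.data.qvel])
\end{verbatim}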
Our primary contributions are as follows:
\begin{itemize}
\item An efficient model-based RL framework for robotics that integrates inexplicit prior knowledge (IPK) is proposed. It guides exploration following the constraints of the priors;
\item A Kalman-filter-based fusion controller fuses the action distributions from the priors and from RL to achieve safe exploration (a minimal sketch of this fusion idea is given after the list);
\item To balance the performance of the priors and RL, we set an exploitation coefficient that is adjusted automatically during learning.
\end{itemize}
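To make the fusion idea concrete, the following is a minimal sketch of Kalman-style (inverse-variance) fusion of two Gaussian action estimates. It illustrates the general principle rather than the exact controller of this paper, and all names (fuse_gaussian_actions, mu_prior, var_rl, etc.) are placeholders.
\begin{verbatim}
import numpy as np

def fuse_gaussian_actions(mu_prior, var_prior, mu_rl, var_rl):
    """Fuse two independent Gaussian action estimates (prior controller and RL
    policy) with inverse-variance weighting, as in a one-step Kalman update."""
    gain = var_prior / (var_prior + var_rl)          # Kalman gain
    mu_fused = mu_prior + gain * (mu_rl - mu_prior)  # fused mean
    var_fused = (1.0 - gain) * var_prior             # fused variance
    return mu_fused, var_fused

# Example: the prior is confident, the RL policy is still exploring,
# so the fused action stays close to the prior.
mu, var = fuse_gaussian_actions(np.array([0.2]), np.array([0.01]),
                                np.array([0.8]), np.array([0.25]))
\end{verbatim}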
Related Work
\subsection{Model-based Reinforcement Learning}
The term \textit{model-based} is easily ambiguous: it can refer either to a given model, as in model predictive control (MPC), or to a learned model, as is common in RL. In this paper, \textit{model-based} refers to a model learned from trajectory data when either the system dynamics or the environment model is unknown.
Model-based reinforcement learning (MBRL) began with the Dyna <|cite_start|> (Reference: {Dyna, an integrated architecture for learning, planning, and reacting: Dyna is an AI architecture that integrates learning, planning, and reactive execution. Learning methods are used in Dyna both for compiling planning results and for updating a model of the effects ...) <|cite_end|> architecture. Compared to model-free reinforcement learning (MFRL), it is more suitable for robotic systems because it takes full advantage of experience data and is therefore more data-efficient. However, since MBRL uses a learned dynamics model to accelerate the learning process, model uncertainty can introduce incorrect transitions and impair value function approximation <|cite_start|> (Reference: Uncertainty-driven imagination for continuous deep reinforcement learning: Continuous control of high-dimensional systems can be achieved by current state-of-the-art reinforcement learning methods such as the Deep Deterministic Policy Gradient algorithm, but needs a significant amount of data samples. For real-world systems, this can be an obstacle since excessive data collection can be expensive, tedious or lead to physical damage. The main incentive of this work is to keep the advantages of model-free Q-learning while minimizing real-world interaction by the employment of a dynamics model learned in parallel. To counteract adverse effects of imaginary rollouts with an inaccurate model, a notion of uncertainty is introduced, to make use of artificial data only in cases of high uncertainty. We evaluate our approach on three simulated robot tasks and achieve faster learning by at least 40 per cent in comparison to vanilla DDPG with multiple updates.) <|cite_end|>. MVE <|cite_start|> (Reference: Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning: Recent model-free reinforcement learning algorithms have proposed incorporating learned dynamics models as a source of additional data with the intention of reducing sample complexity. Such methods hold the promise of incorporating imagined data coupled with a notion of model uncertainty to accelerate the learning of continuous control tasks. Unfortunately, they rely on heuristics that limit usage of the dynamics model. We present model-based value expansion, which controls for uncertainty in the model by only allowing imagination to fixed depth. By enabling wider use of learned dynamics models within a model-free reinforcement learning algorithm, we improve value estimation, which, in turn, reduces the sample complexity of learning.) <|cite_end|> controls model uncertainty by limiting the model's imagination to a fixed rollout depth. STEVE <|cite_start|> (Reference: Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion: Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity. However, this is difficult because an imperfect dynamics model can degrade the performance of the learning algorithm, and in sufficiently complex environments, the dynamics model will almost always be imperfect. As a result, a key challenge is to combine model-based approaches with model-free learning in such a way that errors in the model do not degrade performance. We propose stochastic ensemble value expansion (STEVE), a novel model-based technique that addresses this issue.
By dynamically interpolating between model rollouts of various horizon lengths for each individual example, STEVE ensures that the model is only utilized when doing so does not introduce significant errors. Our approach outperforms model-free baselines on challenging continuous control benchmarks with an order-of-magnitude increase in sample efficiency, and in contrast to previous model-based approaches, performance does not degrade in complex environments.) <|cite_end|> builds on MVE by dynamically interpolating between model rollouts of different horizon lengths for each example, ensuring that the model is used only when it does not introduce significant errors.
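As a rough illustration of the fixed-depth idea behind MVE, a model-based value estimate can be sketched as follows. Here dynamics_model, reward_fn, value_fn, and policy are assumed stand-ins for learned components, not an implementation taken from the cited works.
\begin{verbatim}
def h_step_value_estimate(state, policy, dynamics_model, reward_fn, value_fn,
                          horizon=5, gamma=0.99):
    """Fixed-depth model-based value expansion: roll the learned model out for
    `horizon` steps and bootstrap with the learned value function afterwards."""
    value, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        next_state = dynamics_model(state, action)   # learned, hence imperfect
        value += discount * reward_fn(state, action)
        discount *= gamma
        state = next_state
    return value + discount * value_fn(state)        # bootstrap beyond the horizon
\end{verbatim}
Limiting the horizon caps how far model errors can compound, which is exactly the trade-off STEVE then adapts per example.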
Furthermore, probabilistic models that usually rely on Bayesian methods are more suitable for robotics problems since they can incorporate uncertainty into model building <|cite_start|> (Reference: A survey on policy search algorithms for learning robot controllers in a handful of trials: Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word "big-data", we refer to this challenge as "micro-data reinforcement learning". We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time.) <|cite_end|>. Black-DROPS <|cite_start|> (Reference: Black-Box Data-efficient Policy Search for Robotics: The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require using specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search) that: (1) does not impose any constraint on the reward function or the policy (they are treated as black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast (or faster) than analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot).) <|cite_end|> and PILCO <|cite_start|> (Reference: PILCO: A Model-based and Data-Efficient Approach to Policy Search: In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference.
Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks.) <|cite_end|> both utilize Gaussian Processes (GPs) to reduce the interaction time and solve several robotics tasks. On the basis of GPs, Bayesian
NN (BNN) <|cite_start|> (Reference: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning: Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.) <|cite_end|> has been used in some work to improve the scaling of MBRL algorithms <|cite_start|> (Reference: Improving PILCO with bayesian neural network dynamics models: Model-based reinforcement learning (RL) allows an agent to discover good policies with a small number of trials by generalising observed transitions. Data efficiency can be further improved with a probabilistic model of the agent’s ignorance about the world, allowing it to choose actions under uncertainty. Bayesian modelling offers tools for this task, with PILCO [1] being a prominent example, achieving state-of-theart data efficiency on low dimensional RL benchmarks. But PILCO relies on Gaussian processes (GPs), which prohibits its applicability to problems that require a larger number of trials to be solved. Further, PILCO does not consider temporal correlation in model uncertainty between successive state transitions, which results in PILCO underestimating state uncertainty at future time steps [2]. In this paper we extend PILCO’s framework to use Bayesian deep dynamics models with approximate variational inference, allowing PILCO to scale linearly with number of trials and observation space dimensionality. Using particle methods we sample dynamics function realisations, and obtain lower cumulative cost than PILCO. We give insights into the modelling assumptions made in PILCO, and show that moment matching is a crucial simplifying assumption made by the model. Our implementation can leverage GPU architectures, offering faster running time than PILCO, and will allow structured observation spaces to be modelled (images or higher dimensional inputs) in the future.) <|cite_end|>. Chua et al. <|cite_start|> (Reference: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models: Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap, by employing uncertainty-aware dynamics models. 
We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g., 8 and 125 times fewer samples than Soft Actor Critic and Proximal Policy Optimization respectively on the half-cheetah task).) <|cite_end|> recently proposed a combination of ensembles and BNNs for learning probabilistic dynamics models of higher-dimensional systems.
Most recently, Janner et al. <|cite_start|> (Reference: When to Trust Your Model: Model-Based Policy Optimization: Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a guarantee of monotonic improvement at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such analysis to justify model usage. Motivated by this analysis, we then demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without the usual pitfalls. In particular, this approach surpasses the sample efficiency of prior model-based methods, matches the asymptotic performance of the best model-free algorithms, and scales to horizons that cause other model-based methods to fail entirely.) <|cite_end|> propose the monotonic model-based policy optimization (MBPO) algorithm. MBPO combines the benefits of adaptive-length planning and ensemble BNN models to provide a performance guarantee, and it achieves state-of-the-art sample efficiency on several common RL tasks.
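The short branched rollouts that MBPO popularizes can be sketched as follows. This is a simplified illustration only; model_ensemble, policy, and rollout_length are placeholder names rather than the cited implementation.
\begin{verbatim}
import random

def branched_rollouts(real_states, policy, model_ensemble, rollout_length=1):
    """Generate short model-based rollouts branched from states observed on the
    real system, as in MBPO-style algorithms. Each step queries a randomly
    chosen ensemble member, so model disagreement shows up as data diversity."""
    synthetic = []
    for state in real_states:
        for _ in range(rollout_length):
            action = policy(state)
            model = random.choice(model_ensemble)     # one bootstrap model
            next_state, reward = model(state, action)
            synthetic.append((state, action, reward, next_state))
            state = next_state
    return synthetic
\end{verbatim}
The synthetic transitions are then mixed with real data when updating the policy, which keeps model errors from compounding over long imagined trajectories.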
\subsection{Reinforcement Learning with prior knowledge}
Although MBRL algorithms achieve impressive success, they still take too many time steps (e.g., the state-of-the-art MBRL method MBPO still needs 5k steps even for a simple Pendulum task), which remains impractical for real-world robot applications.
A recent comprehensive survey on policy search algorithms for learning robot controllers in a handful of trials is worth reading <|cite_start|> (Reference: A survey on policy search algorithms for learning robot controllers in a handful of trials: Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word "big-data", we refer to this challenge as "micro-data reinforcement learning". We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time.) <|cite_end|>. Besides creating data-driven surrogate models as MBRL algorithms do, the article states that there is another way to let robots adapt with \textit{micro-data}: leveraging prior knowledge on the policy parameters <|cite_start|> (Reference: An Algorithmic Perspective on Imitation Learning: As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning.) <|cite_end|>, on the expected return <|cite_start|> (Reference: Robots that can adapt like animals: As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged.
However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial and error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: This map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.) <|cite_end|>, or on the dynamic models <|cite_start|> (Reference: Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics: The most data-efficient algorithms for reinforcement learning in robotics are model-based policy search algorithms, which alternate between learning a dynamical model of the robot and optimizing a policy to maximize the expected return given the model and its uncertainties. Among the few proposed approaches, the recently introduced Black-DROPS algorithm exploits a black-box optimization algorithm to achieve both high data-efficiency and good computation times when several cores are used; nevertheless, like all model-based policy search approaches, Black-DROPS does not scale to high dimensional state/action spaces. In this paper, we introduce a new model learning procedure in Black-DROPS that leverages parameterized black-box priors to (1) scale up to high-dimensional systems, and (2) be robust to large inaccuracies of the prior information. We demonstrate the effectiveness of our approach with the "pendubot" swing-up task in simulation and with a physical hexapod robot (48D state space, 18D action space) that has to walk forward as fast as possible. The results show that our new algorithm is more data-efficient than previous model-based policy search algorithms (with and without priors) and that it can allow a physical 6-legged robot to learn new gaits in only 16 to 30 seconds of interaction time.) <|cite_end|> <|cite_start|> (Reference: Efficient Reinforcement Learning for Robots using Informative Simulated Priors: Autonomous learning through interaction with the physical world is a promising approach to designing controllers and decision-making policies for robots. Unfortunately, learning on robots is often difficult due to the large number of samples needed for many learning algorithms. Simulators are one way to decrease the samples needed from the robot by incorporating prior knowledge of the dynamics into the learning algorithm. In this paper we present a novel method for transferring data from a simulator to a robot, using simulated data as a prior for real-world learning. 
A Bayesian nonparametric prior is learned from a potentially black-box simulator. The mean of this function is used as a prior for the Probabilistic Inference for Learning Control (PILCO) algorithm. The simulated prior improves the convergence rate and performance of PILCO by directing the policy search in areas of the state-space that have not yet been observed by the robot. Simulated and hardware results show the benefits of using the prior knowledge in the learning framework.) <|cite_end|>. We can bring prior knowledge of the robot system into the learning process to make it both more stable and more efficient, rather than merely learning from scratch.
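One common way to realize such a dynamics prior is to let a (possibly inaccurate) simulator act as the mean of the learned model and fit only the residual between reality and simulation. The sketch below, using scikit-learn, is an illustrative assumption rather than the exact procedure of the cited works; simulator_step and the input encoding (e.g., concatenated state-action vectors) are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_residual_model(inputs, next_states, simulator_step):
    """Use an inexact simulator as the prior mean of a learned dynamics model:
    the GP only has to capture the residual between reality and simulation."""
    residuals = next_states - np.array([simulator_step(x) for x in inputs])
    gp = GaussianProcessRegressor().fit(inputs, residuals)

    def predict(x):
        mean, std = gp.predict(x.reshape(1, -1), return_std=True)
        # Prediction = simulator prior + learned residual, plus an uncertainty estimate.
        return simulator_step(x) + mean[0], std[0]
    return predict
\end{verbatim}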
Similarly, as flourishing fields, Imitation Learning (IL) and Reinforcement Learning from Demonstrations (RLfD) also use expert demonstrations as a prior for accelerating the training process. They integrate expert data by behaviour cloning <|cite_start|> (Reference: Learning From Demonstration: By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies how to approach a learning problem from instructions and/or demonstrations of other humans. For teaming control, this paper investigates how learning from demonstration can be applied in the context of reinforcement learning. We consider priming the Q-function, the value function, the policy, and the model of the task dynamics as possible areas where demonstrations can speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing on a complex anthropomorphic robot arm, we demonstrate that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems. Using the suggested methods, the robot learns pole balancing in just a single trial after a 30 second long demonstration of the human instructor.) <|cite_end|> <|cite_start|> (Reference: Apprenticeship {{Learning}} via {{Inverse Reinforcement Learning}}: We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using "inverse reinforcement learning" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.) <|cite_end|> <|cite_start|> (Reference: Task Transfer by Preference-Based Cost Learning: The goal of task transfer in reinforcement learning is migrating the action policy of an agent to the target task from the source task. Given their successes on robotic action planning, current methods mostly rely on two requirements: exactly-relevant expert demonstrations or the explicitly-coded cost function on target task, both of which, however, are inconvenient to obtain in practice. In this paper, we relax these two strong conditions by developing a novel task transfer framework where the expert preference is applied as a guidance. In particular, we alternate the following two steps: Firstly, letting experts apply pre-defined preference rules to select related expert demonstrates for the target task. 
Secondly, based on the selection result, we learn the target cost function and trajectory distribution simultaneously via enhanced Adversarial MaxEnt IRL and generate more trajectories by the learned target distribution for the next preference selection. The theoretical analysis on the distribution learning and convergence of the proposed algorithm are provided. Extensive simulations on several benchmarks have been conducted for further verifying the effectiveness of the proposed method.) <|cite_end|>, data augmenting <|cite_start|> (Reference: A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning: Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.) <|cite_end|>, or setting as a policy penalty <|cite_start|> (Reference: Reinforcement Learning from Demonstration Through Shaping: Reinforcement learning describes how a learning agent can achieve optimal behaviour based on interactions with its environment and reward feedback. A limiting factor in reinforcement learning as employed in artificial intelligence is the need for an often prohibitively large number of environment samples before the agent reaches a desirable level of performance. Learning from demonstration is an approach that provides the agent with demonstrations by a supposed expert, from which it should derive suitable behaviour. Yet, one of the challenges of learning from demonstration is that no guarantees can be provided for the quality of the demonstrations, and thus the learned behavior. In this paper, we investigate the intersection of these two approaches, leveraging the theoretical guarantees provided by reinforcement learning, and using expert demonstrations to speed up this learning by biasing exploration through a process called reward shaping. This approach allows us to leverage human input without making an erroneous assumption regarding demonstration optimality. We show experimentally that this approach requires significantly fewer demonstrations, is more robust against suboptimality of demonstrations, and achieves much faster learning than the recently developed HAT algorithm.) <|cite_end|> <|cite_start|> (Reference: Policy Optimization with Demonstrations: Exploration remains a significant challenge to re-inforcement learning methods, especially in environments where reward signals are sparse. Recent methods of learning from demonstrations have shown to be promising in overcoming exploration difficulties but typically require considerable high-quality demonstrations that are difficult to collect. 
We propose to effectively leverage available demonstrations to guide exploration through enforcing occupancy measure matching between the learned policy and current demonstrations, and develop a novel Policy Optimization from Demonstration (POfD) method. We show that POfD induces implicit dynamic reward shaping and brings provable benefits for policy improvement. Furthermore, it can be combined with policy gradient methods to produce state-of-the-art results, as demonstrated experimentally on a range of popular benchmark sparse-reward tasks, even when the demonstrations are few and imperfect.) <|cite_end|> or a constraint <|cite_start|> (Reference: Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance: In this paper, we study Reinforcement Learning from Demonstrations (RLfD) that improves the exploration efficiency of Reinforcement Learning (RL) by providing expert demonstrations. Most of existing RLfD methods require demonstrations to be perfect and sufficient, which yet is unrealistic to meet in practice. To work on imperfect demonstrations, we first define an imperfect expert setting for RLfD in a formal way, and then point out that previous methods suffer from two issues in terms of optimality and convergence, respectively. Upon the theoretical findings we have derived, we tackle these two issues by regarding the expert guidance as a soft constraint on regulating the policy exploration of the agent, which eventually leads to a constrained optimization problem. We further demonstrate that such problem is able to be addressed efficiently by performing a local linear search on its dual form. Considerable empirical evaluations on a comprehensive collection of benchmarks indicate our method attains consistent improvement over other RLfD counterparts.) <|cite_end|>.
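As one common instantiation of the "prior as a penalty" idea (illustrative only, not the specific objective of any of the cited methods), the prior or demonstration policy can be folded into the RL objective as
\[
\max_{\pi}\;\; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t} \gamma^{t}\, r(s_t, a_t)\right] \;-\; \lambda\, \mathbb{E}_{s}\!\left[ D_{\mathrm{KL}}\!\big(\pi(\cdot \mid s) \,\|\, \pi_{\mathrm{prior}}(\cdot \mid s)\big) \right],
\]
where $\pi_{\mathrm{prior}}$ is a policy distilled from demonstrations or a hand-designed controller, and $\lambda$ trades off imitation of the prior against reward maximization.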
In addition, priors can be used in a stronger way for some tasks. Moreno et al. <|cite_start|> (Reference: Using prior knowledge to improve reinforcement learning in mobile robotics: Reinforcement learning (RL) is thought to be an appropriate paradigm for acquiring control policies in mobile robotics. However, in its standard formulation (tabula rasa) RL must explore and learn everything from scratch, which is neither realistic nor effective in real-world tasks. In this article we propose a new strategy, called Supervised Reinforcement Learning (SRL), for taking advantage of external knowledge within this type of learning and validate it in a wall-following behaviour.) <|cite_end|> add a set of prior knowledge sources as a basic controller and use a credit-assignment block to decide when to explore with RL. However, their evaluation function is designed by hand and only acts as a conditional judgment.
\subsection{Continuum Robot Control}
Continuum robots have many uses in flexible scenarios, especially in the field of interventional medicine, because they theoretically have infinite Degrees of Freedom (DOF) <|cite_start|> (Reference: Continuum Robots for Medical Applications: A Survey: In this paper, we describe the state of the art in continuum robot manipulators and systems intended for application to interventional medicine. Inspired by biological trunks, tentacles, and snakes, continuum robot designs can traverse confined spaces, manipulate objects in complex environments, and conform to curvilinear paths in space. In addition, many designs offer inherent structural compliance and ease of miniaturization. After decades of pioneering research, a host of designs have now been investigated and have demonstrated capabilities beyond the scope of conventional rigid-link robots. Recently, we have seen increasing efforts aimed at leveraging these qualities to improve the frontiers of minimally invasive surgical interventions. Several concepts have now been commercialized, which are inspiring and enabling a current paradigm shift in surgical approaches toward flexible access routes, e.g., through natural orifices such as the nose. In this paper, we provide an overview of the current state of this field from the perspectives of both robotics science and medical applications. We discuss relevant research in design, modeling, control, and sensing for continuum manipulators, and we highlight how this work is being used to build robotic systems for specific surgical procedures. We provide perspective for the future by discussing current limitations, open questions, and challenges.) <|cite_end|>.
Control of continuum robots has been widely studied with traditional methods <|cite_start|> (Reference: Control Strategies for Soft Robotic Manipulators: A Survey: With the rise of soft robotics technology and applications, there have been increasing interests in the development of controllers appropriate for their particular design. Being fundamentally different from traditional rigid robots, there is still not a unified framework for the design, analysis, and control of these high-dimensional robots. This review article attempts to provide an insight into various controllers developed for continuum/soft robots as a guideline for future applications in the soft robotics field. A comprehensive assessment of various control strategies and an insight into the future areas of research in this field are presented.) <|cite_end|>. Researchers tend to establish manipulator kinematic and dynamic models derived from several geometric assumptions. The most commonly used model simplifies the control problem based on the constant curvature (CC) approximation and linearized feedback <|cite_start|> (Reference: Kinematics for multisection continuum robots: We introduce a new method for synthesizing kinematic relationships for a general class of continuous backbone, or continuum , robots. The resulting kinematics enable real-time task and shape control by relating workspace (Cartesian) coordinates to actuator inputs, such as tendon lengths or pneumatic pressures, via robot shape coordinates. This novel approach, which carefully considers physical manipulator constraints, avoids artifacts of simplifying assumptions associated with previous approaches, such as the need to fit the resulting solutions to the physical robot. It is applicable to a wide class of existing continuum robots and models extension, as well as bending, of individual sections. In addition, this approach produces correct results for orientation, in contrast to some previously published approaches. Results of real-time implementations on two types of spatial multisection continuum manipulators are reported.) <|cite_end|> <|cite_start|> (Reference: Kinematics and the implementation of an elephant's trunk manipulator and other continuum style robots: Traditionally, robot manipulators have been a simple arrangement of a small number of serially connected links and actuated joints. Though these manipulators prove to be very effective for many tasks, they are not without their limitations, due mainly to their lack of maneuverability or total degrees of freedom. Continuum style (i.e., continuous "back-bone") robots, on the other hand, exhibit a wide range of maneuverability, and can have a large number of degrees of freedom. The motion of continuum style robots is generated through the bending of the robot over a given section; unlike traditional robots where the motion occurs in discrete locations, i.e., joints. The motion of continuum manipulators is often compared to that of biological manipulators such as trunks and tentacles. These continuum style robots can achieve motions that could only be obtainable by a conventionally designed robot with many more degrees of freedom. In this paper we present a detailed formulation and explanation of a novel kinematic model for continuum style robots. The design, construction, and implementation of our continuum style robot called the elephant trunk manipulator is presented.
Experimental results are then provided to verify the legitimacy of our model when applied to our physical manipulator. We also provide a set of obstacle avoidance experiments that help to exhibit the practical implementation of both our manipulator and our kinematic model.) <|cite_end|>. This CC model performs worse when external loads are non-negligible <|cite_start|> (Reference: Large deflection dynamics and control for planar continuum robots: This paper focuses on a class of robot manipulators termed "continuum" robots - robots that exhibit behavior similar to tentacles, trunks, and snakes. In previous work, we studied details of the mechanical design, kinematics, path-planning and small-deflection dynamics for continuum robots such as the Clemson "tentacle manipulator". In this paper, we discuss the dynamics of a planar continuum backbone section, incorporating a large-deflection dynamic model. Based on these dynamics, we formulate a vibration-damping setpoint controller, and include experimental results to illustrate the efficacy of the proposed controller.) <|cite_end|> <|cite_start|> (Reference: Geometrically exact dynamic models for soft robotic manipulators: Unlike traditional rigid-linked robots, soft robotic manipulators can bend into a wide variety of complex shapes due to control inputs and gravitational loading. This paper presents a new approach for modeling the dynamics of soft robotic manipulators that incorporates the effect of material nonlinearities and distributed and payload weight and is geometrically exact for the large curvature, shear, torsion and extension that often occur in these manipulators. The model is based on the general Cosserat theory of rods and a fiber reinforced model of air muscle actuators. The model is validated experimentally on the OctArm V manipulator, showing less that 5% average error for a wide range of actuation pressures and base orientations as compared to almost 50% average error for the constant curvature model previously used by researchers.) <|cite_end|>. As an alternative, mechanics-modified models have been used in continuum robotics. Walker, Hannan, and Gravagne introduced hyper-redundant robotics <|cite_start|> (Reference: Kinematic transformations for remotely-actuated planar continuum robots: We consider a class of robotic manipulators generally termed "hyper-redundant". Specifically, we seek to examine some of the kinematic properties of "continuum" hyper-redundant robots. Unlike the case with rigid-link robots, there is no commonly accepted formula for describing continuum robot kinematics. Although these manipulators are continuously flexible, they are actuated with a finite number of actuators. We discuss two possible options for mapping desired infinite-dimensional robot shapes to the finite-dimensional actuator space, using "natural" and "wavelet" decompositions. We compare and contrast these kinematic descriptions, illustrating how the wavelet decomposition can simplify the inverse kinematics for redundant planar continuum robots.) <|cite_end|> <|cite_start|> (Reference: NOVEL KINEMATICS FOR CONTINUUM ROBOTS: ) <|cite_end|>, and a large-deflection dynamic model was used in their research <|cite_start|> (Reference: Large deflection dynamics and control for planar continuum robots: This paper focuses on a class of robot manipulators termed "continuum" robots - robots that exhibit behavior similar to tentacles, trunks, and snakes.
In previous work, we studied details of the mechanical design, kinematics, path-planning and small-deflection dynamics for continuum robots such as the Clemson "tentacle manipulator". In this paper, we discuss the dynamics of a planar continuum backbone section, incorporating a large-deflection dynamic model. Based on these dynamics, we formulate a vibration-damping setpoint controller, and include experimental results to illustrate the efficacy of the proposed controller.) <|cite_end|>. Considering the backbone of continuum robots as an elastic rod, Webster et al. <|cite_start|> (Reference: {Statics and dynamics of continuum robots with general tendon routing and external loading: Tendons are a widely used actuation strategy for continuum robots that enable forces and moments to be transmitted along the robot from base-mounted actuators. Most prior robots have used tendons routed in straight paths along the robot. However, routing tendons through general curved paths within the robot offers potential advantages in reshaping the workspace and enabling a single section of the robot to achieve a wider variety of desired shapes. In this paper, we provide a new model for the statics and dynamics of robots with general tendon routing paths that is derived by coupling the classical Cosserat-rod and Cosserat-string models. This model also accounts for general external loading conditions and includes traditional axially routed tendons as a special case. The advantage of the usage of this coupled model for straight-tendon robots is that it accounts for the distributed wrenches that tendons apply along the robot. We show that these are necessary to consider when the robot is subjected to out-of-plane external loads. Our experimental results demonstrate that the coupled model matches experimental tip positions with an error of 1.7% of the robot length, in a set of experiments that include both straight and nonstraight routing cases, with both point and distributed external loads.) <|cite_end|> and Mahvash et al. <|cite_start|> (Reference: Stiffness Control of Surgical Continuum Manipulators: This paper introduces the first stiffness controller for continuum robots. The control law is based on an accurate approximation of a continuum robot's coupled kinematic and static force model. To implement a desired tip stiffness, the controller drives the actuators to positions corresponding to a deflected robot configuration that produces the required tip force for the measured tip position. This approach provides several important advantages. First, it enables the use of robot deflection sensing as a means to both sense and control tip forces. Second, it enables stiffness control to be implemented by modification of existing continuum robot position controllers. The proposed controller is demonstrated experimentally in the context of a concentric tube robot. Results show that the stiffness controller achieves the desired stiffness in steady state, provides good dynamic performance, and exhibits stability during contact transitions.) <|cite_end|> have respectively applied the Cosserat rod model in their research.
Although an increase in accuracy is found, solutions of those models, described in the form of nonlinear partial differential equations, are sensitive to parameters and time-consuming <|cite_start|> (Reference: Geometrically exact dynamic models for soft robotic manipulators: Unlike traditional rigid-linked robots, soft robotic manipulators can bend into a wide variety of complex shapes due to control inputs and gravitational loading. This paper presents a new approach for modeling the dynamics of soft robotic manipulators that incorporates the effect of material nonlinearities and distributed and payload weight and is geometrically exact for the large curvature, shear, torsion and extension that often occur in these manipulators. The model is based on the general Cosserat theory of rods and a fiber reinforced model of air muscle actuators. The model is validated experimentally on the OctArm V manipulator, showing less that 5% average error for a wide range of actuation pressures and base orientations as compared to almost 50% average error for the constant curvature model previously used by researchers.) <|cite_end|> <|cite_start|> (Reference: Comparison of modeling approaches for a tendon actuated continuum robot with three extensible segments: Continuum robots actuated by tendons are a widely researched robot design offering high dexterity and large workspaces relative to their volume. Their flexible and compliant structure can be easily miniaturized, making them predestined for applications in difficult-to-reach and confined spaces. Adaption of this specific robot design includes extensible segments leading to an even higher manipulability and enabling so-called follow-the-leader motions of the manipulator. In this letter, kinematic modeling for a tendon actuated continuum robot with three extensible segments is investigated. The focus is drawn on the comparison of two of the most widely used modeling approaches both for free-space and loaded configurations. Through extensive experimental validation, the modeling performances are assessed qualitatively and quantitatively in terms of the shape deviation, Euclidean error at segment ends, and computation time. While Cosserat rod modeling is slightly more accurate than beam mechanics modeling, the latter presents significantly lower computation time.) <|cite_end|>, which inevitably increases the complexity of the control issues in continuum robotics. <|paper_end|> | [
"<|reference_start|> Improving PILCO with bayesian neural network dynamics models: Model-based reinforcement learning (RL) allows an agent to discover good policies with a small number of trials by generalising observed transitions. Data efficiency can be further improved with a probabilistic model of the agent’s ignorance about the world, allowing it to choose actions under uncertainty. Bayesian modelling offers tools for this task, with PILCO [1] being a prominent example, achieving state-of-theart data efficiency on low dimensional RL benchmarks. But PILCO relies on Gaussian processes (GPs), which prohibits its applicability to problems that require a larger number of trials to be solved. Further, PILCO does not consider temporal correlation in model uncertainty between successive state transitions, which results in PILCO underestimating state uncertainty at future time steps [2]. In this paper we extend PILCO’s framework to use Bayesian deep dynamics models with approximate variational inference, allowing PILCO to scale linearly with number of trials and observation space dimensionality. Using particle methods we sample dynamics function realisations, and obtain lower cumulative cost than PILCO. We give insights into the modelling assumptions made in PILCO, and show that moment matching is a crucial simplifying assumption made by the model. Our implementation can leverage GPU architectures, offering faster running time than PILCO, and will allow structured observation spaces to be modelled (images or higher dimensional inputs) in the future. <|reference_end|>",
"<|reference_start|> Learning From Demonstration: By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies how to approach a learning problem from instructions and/or demonstrations of other humans. For teaming control, this paper investigates how learning from demonstration can be applied in the context of reinforcement learning. We consider priming the Q-function, the value function, the policy, and the model of the task dynamics as possible areas where demonstrations can speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing on a complex anthropomorphic robot arm, we demonstrate that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems. Using the suggested methods, the robot learns pole balancing in just a single trial after a 30 second long demonstration of the human instructor. <|reference_end|>",
"<|reference_start|> Task Transfer by Preference-Based Cost Learning: The goal of task transfer in reinforcement learning is migrating the action policy of an agent to the target task from the source task. Given their successes on robotic action planning, current methods mostly rely on two requirements: exactly-relevant expert demonstrations or the explicitly-coded cost function on target task, both of which, however, are inconvenient to obtain in practice. In this paper, we relax these two strong conditions by developing a novel task transfer framework where the expert preference is applied as a guidance. In particular, we alternate the following two steps: Firstly, letting experts apply pre-defined preference rules to select related expert demonstrates for the target task. Secondly, based on the selection result, we learn the target cost function and trajectory distribution simultaneously via enhanced Adversarial MaxEnt IRL and generate more trajectories by the learned target distribution for the next preference selection. The theoretical analysis on the distribution learning and convergence of the proposed algorithm are provided. Extensive simulations on several benchmarks have been conducted for further verifying the effectiveness of the proposed method. <|reference_end|>",
"<|reference_start|> Large deflection dynamics and control for planar continuum robots: This paper focuses on a class of robot manipulators termed \"continuum\" robots - robots that exhibit behavior similar to tentacles, trunks, and snakes. In previous work, we studied details of the mechanical design, kinematics, path-planning and small-deflection dynamics for continuum robots such as the Clemson \"tentacle manipulator\". In this paper, we discuss the dynamics of a planar continuum backbone section, incorporating a large-deflection dynamic model. Based on these dynamics, we formulate a vibration-damping setpoint controller, and include experimental results to illustrate the efficacy of the proposed controller. <|reference_end|>"
] | [
13,
21,
23,
37
] | {"<|cite_1|>": "ss-1369254", "<|cite_2|>": "arxiv-163928", "<|cite_3|>": "arxiv-137539", "<|cite_4|>": "ss-744782", "<|cite_5|>": "ss-785157", "<|cite_6|>": "ss-921916", "<|cite_7|>": "ss-1379898", "<|cite_8|>": "arxiv-150075", "<|cite_9|>": "arxiv-164779", "<|cite_10|>": "arxiv-164976", "<|cite_11|>": "arxiv-119650", "<|cite_12|>": "ss-998892", "<|cite_13|>": "arxiv-78927", "<|cite_14|>": "ss-1015329", "<|cite_15|>": "arxiv-160662", "<|cite_16|>": "arxiv-210563", "<|cite_17|>": "arxiv-164976", "<|cite_19|>": "arxiv-180650", "<|cite_20|>": "arxiv-63505", "<|cite_21|>": "arxiv-135195", "<|cite_22|>": "ss-805807", "<|cite_23|>": "ss-1363959", "<|cite_24|>": "ss-921172", "<|cite_25|>": "arxiv-158204", "<|cite_26|>": "arxiv-17086", "<|cite_27|>": "ss-1313569", "<|cite_28|>": "ss-763952", "<|cite_29|>": "arxiv-234531", "<|cite_30|>": "ss-1458271", "<|cite_31|>": "ss-1171497", "<|cite_32|>": "ss-718437", "<|cite_33|>": "ss-1369254", "<|cite_34|>": "ss-2061284", "<|cite_35|>": "ss-1221577", "<|cite_36|>": "ss-1814615", "<|cite_37|>": "ss-1814616", "<|cite_38|>": "ss-1814617", "<|cite_39|>": "ss-1221577", "<|cite_40|>": "ss-755905", "<|cite_41|>": "ss-1471185", "<|cite_42|>": "ss-1814615", "<|cite_43|>": "ss-1814618"} |
2312.03517-0 | <|paper_start|> Title: FRDiff : Feature Reuse for Universal Training-free Acceleration of Diffusion Models
Abstract: FRDiff : Feature Reuse for Universal Training-free Acceleration of Diffusion Models: The substantial computational costs of diffusion models, especially due to the repeated denoising steps necessary for high-quality image generation, present a major obstacle to their widespread adoption. While several studies have attempted to address this issue by reducing the number of score function evaluations (NFE) using advanced ODE solvers without fine-tuning, the decreased number of denoising iterations misses the opportunity to update fine details, resulting in noticeable quality degradation. In our work, we introduce an advanced acceleration technique that leverages the temporal redundancy inherent in diffusion models. Reusing feature maps with high temporal similarity opens up a new opportunity to save computation resources without compromising output quality. To realize the practical benefits of this intuition, we conduct an extensive analysis and propose a novel method, FRDiff. FRDiff is designed to harness the advantages of both reduced NFE and feature reuse, achieving a Pareto frontier that balances fidelity and latency trade-offs in various generative tasks.
Introduction
\label{sec:intro}
The diffusion model has gained attention for its high-quality and diverse image generation capabilities <|cite_start|> (Reference: Zero-Shot Text-to-Image Generation: Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.) <|cite_end|> <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|> <|cite_start|> (Reference: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding: We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. 
With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment. See https://imagen.research.google/ for an overview of the results.) <|cite_end|> <|cite_start|> (Reference: SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights at https://github.com/Stability-AI/generative-models) <|cite_end|>. Its outstanding quality and versatility unlock new potentials in various applications, including image restoration <|cite_start|> (Reference: Exploiting Diffusion Prior for Real-World Image Super-Resolution: We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution (SR). Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during the inference process. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to resolutions of any size. A comprehensive evaluation of our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.) <|cite_end|> <|cite_start|> (Reference: Diffbir: Towards blind image restoration with generative diffusion prior.: We present DiffBIR, a general restoration pipeline that could handle different blind image restoration tasks in a unified framework. DiffBIR decouples blind image restoration problem into two stages: 1) degradation removal: removing image-independent content; 2) information regeneration: generating the lost image content. Each stage is developed independently but they work seamlessly in a cascaded manner. In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results. For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details. 
Specifically, IRControlNet is trained based on specially produced condition images without distracting noisy content for stable generation performance. Moreover, we design a region-adaptive restoration guidance that can modify the denoising process during inference without model re-training, allowing users to balance realness and fidelity through a tunable guidance scale. Extensive experiments have demonstrated DiffBIR's superiority over state-of-the-art approaches for blind image super-resolution, blind face restoration and blind image denoising tasks on both synthetic and real-world datasets. The code is available at https://github.com/XPixelGroup/DiffBIR.) <|cite_end|>, image editing <|cite_start|> (Reference: InstructPix2Pix: Learning to Follow Image Editing Instructions: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.) <|cite_end|> <|cite_start|> (Reference: Delta Denoising Score: We introduce Delta Denoising Score (DDS), a novel scoring function for text-based image editing that guides minimal modifications of an input image towards the content described in a target prompt. DDS leverages the rich generative prior of text-to-image diffusion models and can be used as a loss term in an optimization problem to steer an image towards a desired direction dictated by a text. DDS utilizes the Score Distillation Sampling (SDS) mechanism for the purpose of image editing. We show that using only SDS often produces non-detailed and blurry outputs due to noisy gradients. To address this issue, DDS uses a prompt that matches the input image to identify and remove undesired erroneous directions of SDS. Our key premise is that SDS should be zero when calculated on pairs of matched prompts and images, meaning that if the score is non-zero, its gradients can be attributed to the erroneous component of SDS. Our analysis demonstrates the competence of DDS for text based image-to-image translation. We further show that DDS can be used to train an effective zero-shot image translation model. Experimental results indicate that DDS outperforms existing methods in terms of stability and quality, highlighting its potential for real-world applications in text-based image editing.) <|cite_end|> <|cite_start|> (Reference: Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation: Large-scale text-to-image generative models have been a revolutionary breakthrough in the evolution of generative AI, allowing us to synthesize diverse images that convey highly complex visual concepts. However, a pivotal challenge in leveraging such models for real-world content creation tasks is providing users with control over the generated content. 
In this paper, we present a new framework that takes text-to-image synthesis to the realm of image-to-image translation -- given a guidance image and a target text prompt, our method harnesses the power of a pre-trained text-to-image diffusion model to generate a new image that complies with the target text, while preserving the semantic layout of the source image. Specifically, we observe and empirically demonstrate that fine-grained control over the generated structure can be achieved by manipulating spatial features and their self-attention inside the model. This results in a simple and effective approach, where features extracted from the guidance image are directly injected into the generation process of the target image, requiring no training or fine-tuning and applicable for both real or generated guidance images. We demonstrate high-quality results on versatile text-guided image translation tasks, including translating sketches, rough drawings and animations into realistic images, changing of the class and appearance of objects in a given image, and modifications of global qualities such as lighting and color.) <|cite_end|> <|cite_start|> (Reference: MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating the Diffusion Path: Image generation using diffusion can be controlled in multiple ways. In this paper, we systematically analyze the equations of modern generative diffusion networks to propose a framework, called MDP, that explains the design space of suitable manipulations. We identify 5 different manipulations, including intermediate latent, conditional embedding, cross attention maps, guidance, and predicted noise. We analyze the corresponding parameters of these manipulations and the manipulation schedule. We show that some previous editing methods fit nicely into our framework. Particularly, we identified one specific configuration as a new type of control by manipulating the predicted noise, which can perform higher-quality edits than previous work for a variety of local and global edits.) <|cite_end|> <|cite_start|> (Reference: PRedItOR: Text Guided Image Editing with Diffusion Prior: Diffusion models have shown remarkable capabilities in generating high quality and creative images conditioned on text. An interesting application of such models is structure preserving text guided image editing. Existing approaches rely on text conditioned diffusion models such as Stable Diffusion or Imagen and require compute intensive optimization of text embeddings or fine-tuning the model weights for text guided image editing. We explore text guided image editing with a Hybrid Diffusion Model (HDM) architecture similar to DALLE-2. Our architecture consists of a diffusion prior model that generates CLIP image embedding conditioned on a text prompt and a custom Latent Diffusion Model trained to generate images conditioned on CLIP image embedding. We discover that the diffusion prior model can be used to perform text guided conceptual edits on the CLIP image embedding space without any finetuning or optimization. We combine this with structure preserving edits on the image decoder using existing approaches such as reverse DDIM to perform text guided image editing. Our approach, PRedItOR does not require additional inputs, fine-tuning, optimization or objectives and shows on par or better results than baselines qualitatively and quantitatively. 
We provide further analysis and understanding of the diffusion prior model and believe this opens up new possibilities in diffusion models research.) <|cite_end|>, conditional image synthesis <|cite_start|> (Reference: Universal Guidance for Diffusion Models: Typical diffusion models are trained to accept a particular form of conditioning, most commonly text, and cannot be conditioned on other modalities without retraining. In this work, we propose a universal guidance algorithm that enables diffusion models to be controlled by arbitrary guidance modalities without the need to retrain any use-specific components. We show that our algorithm successfully generates quality images with guidance functions including segmentation, face recognition, object detection, and classifier signals. Code is available at https://github.com/arpitbansal297/Universal-Guided-Diffusion.) <|cite_end|> <|cite_start|> (Reference: Modulating Pretrained Diffusion Models for Multimodal Image Synthesis: We present multimodal conditioning modules (MCM) for enabling conditional image synthesis using pretrained diffusion models. Previous multimodal synthesis works rely on training networks from scratch or fine-tuning pretrained networks, both of which are computationally expensive for large, state-of-the-art diffusion models. Our method uses pretrained networks but \textit{does not require any updates to the diffusion network's parameters}. MCM is a small module trained to modulate the diffusion network's predictions during sampling using 2D modalities (e.g., semantic segmentation maps, sketches) that were unseen during the original training of the diffusion model. We show that MCM enables user control over the spatial layout of the image and leads to increased control over the image generation process. Training MCM is cheap as it does not require gradients from the original diffusion net, consists of only $\sim$1$\%$ of the number of parameters of the base diffusion model, and is trained using only a limited number of training examples. We evaluate our method on unconditional and text-conditional models to demonstrate the improved control over the generated images and their alignment with respect to the conditioning inputs.) <|cite_end|> <|cite_start|> (Reference: LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts: Thanks to the rapid development of diffusion models, unprecedented progress has been witnessed in image synthesis. Prior works mostly rely on pre-trained linguistic models, but a text is often too abstract to properly specify all the spatial properties of an image, e.g., the layout configuration of a scene, leading to the sub-optimal results of complex scene generation. In this paper, we achieve accurate complex scene generation by proposing a semantically controllable Layout-AWare diffusion model, termed LAW-Diffusion. Distinct from the previous Layout-to-Image generation (L2I) methods that only explore category-aware relationships, LAW-Diffusion introduces a spatial dependency parser to encode the location-aware semantic coherence across objects as a layout embedding and produces a scene with perceptually harmonious object styles and contextual relations. To be specific, we delicately instantiate each object's regional semantics as an object region map and leverage a location-aware cross-object attention module to capture the spatial dependencies among those disentangled representations. 
We further propose an adaptive guidance schedule for our layout guidance to mitigate the trade-off between the regional semantic alignment and the texture fidelity of generated objects. Moreover, LAW-Diffusion allows for instance reconfiguration while maintaining the other regions in a synthesized image by introducing a layout-aware latent grafting mechanism to recompose its local regional semantics. To better verify the plausibility of generated scenes, we propose a new evaluation metric for the L2I task, dubbed Scene Relation Score (SRS) to measure how the images preserve the rational and harmonious relations among contextual objects. Comprehensive experiments demonstrate that our LAW-Diffusion yields the state-of-the-art generative performance, especially with coherent object relations.) <|cite_end|> <|cite_start|> (Reference: Adding Conditional Control to Text-to-Image Diffusion Models: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.) <|cite_end|> <|cite_start|> (Reference: BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion: Recent text-to-image diffusion models have demonstrated an astonishing capacity to generate high-quality images. However, researchers mainly studied the way of synthesizing images with only text prompts. While some works have explored using other modalities as conditions, considerable paired data, e.g., box/mask-image pairs, and fine-tuning time are required for nurturing models. As such paired data is time-consuming and labor-intensive to acquire and restricted to a closed set, this potentially becomes the bottleneck for applications in an open world. This paper focuses on the simplest form of user-provided conditions, e.g., box or scribble. To mitigate the aforementioned problem, we propose a training-free method to control objects and contexts in the synthesized images adhering to the given spatial conditions. Specifically, three spatial constraints, i.e., Inner-Box, Outer-Box, and Corner Constraints, are designed and seamlessly integrated into the denoising step of diffusion models, requiring no additional training and massive annotated layout data. Extensive experimental results demonstrate that the proposed constraints can control what and where to present in the images while retaining the ability of Diffusion models to synthesize with high fidelity and diverse concept coverage. The code is publicly available at https://github.com/showlab/BoxDiff.) 
<|cite_end|> <|cite_start|> (Reference: FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model: Recently, conditional diffusion models have gained popularity in numerous applications due to their exceptional generation ability. However, many existing methods are training-required. They need to train a time-dependent classifier or a condition-dependent score estimator, which increases the cost of constructing conditional diffusion models and is inconvenient to transfer across different conditions. Some current works aim to overcome this limitation by proposing training-free solutions, but most can only be applied to a specific category of tasks and not to more general conditions. In this work, we propose a training-Free conditional Diffusion Model (FreeDoM) used for various conditions. Specifically, we leverage off-the-shelf pre-trained networks, such as a face detection model, to construct time-independent energy functions, which guide the generation process without requiring training. Furthermore, because the construction of the energy function is very flexible and adaptable to various conditions, our proposed FreeDoM has a broader range of applications than existing training-free methods. FreeDoM is advantageous in its simplicity, effectiveness, and low cost. Experiments demonstrate that FreeDoM is effective for various conditions and suitable for diffusion models of diverse data domains, including image and latent code domains.) <|cite_end|> <|cite_start|> (Reference: T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to ``dig out" the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications.) <|cite_end|>, and more. However, the expensive computation cost of the diffusion model, particulary due to its dozens to hundreds of denoising steps for high-quality image generation, poses a significant obstacle to its widespread adoption. For example, while a GAN <|cite_start|> (Reference: Generative Adversarial Networks: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. 
In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.) <|cite_end|>can generate 50k images of size 32x32 in less than a minute, the diffusion model takes approximately 20 hours. To fully harness the benefits of diffusion models in practice, this performance drawback must be addressed.
\begin{figure}
\centering
\begin{subfigure}[t]{0.90\columnwidth}
\includegraphics[width=\textwidth]{figures/compare_ours3.png}
\end{subfigure}
\caption{FID-Latency trade-off of DDIM <|cite_start|> (Reference: Denoising Diffusion Implicit Models: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.) <|cite_end|> and Ours on the CelebA-HQ dataset. Our method can further accelerate advanced ODE solvers such as DDIM, especially in the low-FID region.}
\end{figure}
Recently, many studies have proposed methods to reduce the computational cost of diffusion models. A representative approach involves a zero-shot sampling method <|cite_start|> (Reference: Denoising Diffusion Implicit Models: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.) <|cite_end|> <|cite_start|> (Reference: Pseudo Numerical Methods for Diffusion Models on Manifolds: Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. Our implementation is available at https://github.com/luping-liu/PNDM.) <|cite_end|> <|cite_start|> (Reference: Score-Based Generative Modeling through Stochastic Differential Equations: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. 
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.) <|cite_end|>, which typically employs advanced ODE or SDE solvers capable of maintaining quality with a reduced number of score function evaluations (NFE). They demonstrate the potential for acceleration without fine-tuning, but performance improvement achievable within the accuracy margin is not sufficient. On the other hand, there is another direction that utilizes a learning-based sampling method <|cite_start|> (Reference: Progressive Distillation for Fast Sampling of Diffusion Models: Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.) <|cite_end|> <|cite_start|> (Reference: On Distillation of Guided Diffusion Models: Classifier-free guided diffusion models have recently been shown to be highly effective at high-resolution image generation, and they have been widely used in large-scale diffusion frameworks including DALLE-2, Stable Diffusion and Imagen. 
However, a downside of classifier-free guided diffusion models is that they are computationally expensive at inference time since they require evaluating two diffusion models, a class-conditional model and an unconditional model, tens to hundreds of times. To deal with this limitation, we propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from: Given a pre-trained classifier-free guided model, we first learn a single model to match the output of the combined conditional and unconditional models, and then we progressively distill that model to a diffusion model that requires much fewer sampling steps. For standard diffusion models trained on the pixel-space, our approach is able to generate images visually comparable to that of the original model using as few as 4 sampling steps on ImageNet 64x64 and CIFAR-10, achieving FID/IS scores comparable to that of the original model while being up to 256 times faster to sample from. For diffusion models trained on the latent-space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps, accelerating inference by at least 10-fold compared to existing methods on ImageNet 256x256 and LAION datasets. We further demonstrate the effectiveness of our approach on text-guided image editing and inpainting, where our distilled model is able to generate high-quality results using as few as 2-4 denoising steps.) <|cite_end|> <|cite_start|> (Reference: Consistency Models: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.) <|cite_end|> <|cite_start|> (Reference: Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow: We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models to transport between two empirically observed distributions \pi_0 and \pi_1, hence providing a unified solution to generative modeling and domain transfer, among various other tasks involving distribution transport. The idea of rectified flow is to learn the ODE to follow the straight paths connecting the points drawn from \pi_0 and \pi_1 as much as possible. 
This is achieved by solving a straightforward nonlinear least squares optimization problem, which can be easily scaled to large models without introducing extra parameters beyond standard supervised learning. The straight paths are special and preferred because they are the shortest paths between two points, and can be simulated exactly without time discretization and hence yield computationally efficient models. We show that the procedure of learning a rectified flow from data, called rectification, turns an arbitrary coupling of \pi_0 and \pi_1 to a new deterministic coupling with provably non-increasing convex transport costs. In addition, recursively applying rectification allows us to obtain a sequence of flows with increasingly straight paths, which can be simulated accurately with coarse time discretization in the inference phase. In empirical studies, we show that rectified flow performs superbly on image generation, image-to-image translation, and domain adaptation. In particular, on image generation and translation, our method yields nearly straight flows that give high quality results even with a single Euler discretization step.) <|cite_end|>, applying fine-tuning to maintain generation quality with a reduced NFE. However, the requirements that fine-tuning imposes, such as additional training resources and a more complex training pipeline, make it challenging to use in practice. To realize practical benefits with minimal constraints, we need more advanced zero-shot methods with higher potential.
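For concreteness, the deterministic DDIM update below is a representative example of the training-free samplers discussed above, restated purely as background (standard notation, with $\bar{\alpha}_t$ denoting the cumulative noise schedule):
\[
x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\left(\frac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(x_t, t).
\]
Reducing the NFE corresponds to applying this update on a coarser subsequence of time steps, which is precisely where updates to fine details can be skipped.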
In this work, we focus on an important property of diffusion models that has not been exploited in previous studies. Since diffusion models involve iterative denoising operations, \textit{the feature maps within the diffusion model exhibit temporal redundancy}. According to our extensive analysis, specific modules within diffusion models show high similarity in their feature maps across adjacent denoising steps. By reusing the intermediate feature maps that exhibit high temporal similarity, we can significantly reduce computation overhead while maintaining output quality. Building on this insight, we propose a new optimization approach, named feature reuse (FR).
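The following is a minimal sketch of the feature reuse idea; the names (FeatureReuseWrapper, refresh_every) and the fixed refresh schedule are illustrative assumptions rather than our actual implementation. A wrapped block recomputes its output only on selected steps and otherwise returns the cached result.
\begin{verbatim}
# Minimal sketch of feature reuse (FR): cache a block's output and reuse it
# on adjacent denoising steps. Names and the refresh schedule are illustrative
# assumptions, not the implementation used in our experiments.
import torch
import torch.nn as nn

class FeatureReuseWrapper(nn.Module):
    def __init__(self, block: nn.Module, refresh_every: int = 2):
        super().__init__()
        self.block = block
        self.refresh_every = refresh_every  # recompute on every k-th step
        self.cache = None
        self.step = 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.cache is None or self.step % self.refresh_every == 0:
            self.cache = self.block(x)       # full computation ("keyframe")
        self.step += 1
        return self.cache                    # reused on the remaining steps

# Toy usage: the input changes only slightly between adjacent steps.
block = FeatureReuseWrapper(nn.Sequential(nn.Conv2d(4, 4, 3, padding=1),
                                          nn.SiLU()))
x = torch.randn(1, 4, 32, 32)
for t in range(10):
    y = block(x + 0.01 * t)
\end{verbatim}
In practice, the choice of which blocks to wrap in this way should follow the temporal-similarity analysis described later.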
However, naively applying FR does not guarantee superior performance compared to the conventional reduced-NFE method. Our thorough experiments reveal that FR has characteristics distinct from those of the reduced-NFE method, and that the two approaches can be combined to compensate for each other's drawbacks and maximize the achievable benefit. Overall, \textit{we propose a comprehensive method named FRDiff, designed to harness the strengths of both low NFE and FR}. In particular, we introduce a score mixing method that enables FRDiff to generate high-quality output with fine details while reducing computation overhead. This approach can be applied to any diffusion model without fine-tuning and can be integrated into existing frameworks with minimal modification.
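As a rough illustration of how reduced NFE, FR, and score mixing can be interleaved, consider the sketch below; the keyframe schedule, the mixing weight, the placeholder update rule, and all names are assumptions made for exposition and do not reproduce the exact FRDiff algorithm.
\begin{verbatim}
# Illustrative sketch only: interleave full score evaluations ("keyframes")
# with cheaper feature-reused evaluations and blend the two estimates.
# keyframe_every, lam, and the placeholder update rule are assumptions.
import torch

def sample(score_full, score_fr, timesteps, x, keyframe_every=5, lam=0.7):
    cached = None
    for i, t in enumerate(timesteps):
        if cached is None or i % keyframe_every == 0:
            eps = score_full(x, t)                           # full computation
        else:
            eps = lam * score_fr(x, t) + (1 - lam) * cached  # mixed estimate
        cached = eps
        x = x - 0.01 * eps  # placeholder reverse update, not a real solver step
    return x

# Dummy usage with toy callables standing in for the denoising network.
score_full = lambda x, t: 0.1 * x
score_fr = lambda x, t: 0.1 * x
out = sample(score_full, score_fr, range(50), torch.randn(1, 4, 32, 32))
\end{verbatim}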
We conduct extensive experiments to validate the effectiveness of FRDiff on various tasks in a zero-shot manner. FRDiff achieves up to a \textbf{1.8x} acceleration without compromising output quality across a range of tasks, covering a task-agnostic pretrained model for text-to-image generation as well as task-specific fine-tuned models for super-resolution and image inpainting.
Related Work
\label{sec:related}
\subsection{Diffusion Models}
The diffusion model, introduced in <|cite_start|> (Reference: Deep Unsupervised Learning using Nonequilibrium Thermodynamics: A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm.) <|cite_end|>, defines the forward diffusion process by gradually adding Gaussian noise at each time step. Conversely, the reverse process generates a clean image from random noise by gradually removing noise from the data.
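In the standard discrete-time formulation, this forward corruption can be written as a Gaussian transition kernel; we restate it here in common DDPM notation for later reference:
\[
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t \mathbf{I}\right), \qquad
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\, \sqrt{\bar{\alpha}_t}\, x_0,\, (1-\bar{\alpha}_t)\mathbf{I}\right),
\]
where $\beta_t$ is the noise schedule and $\bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s)$.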
In DDPM <|cite_start|> (Reference: Denoising Diffusion Probabilistic Models: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at https://github.com/hojonathanho/diffusion) <|cite_end|>, the authors simplified the diffusion process using a noise prediction network $\epsilon_\theta(x_t,t)$ and reparameterized the complex ELBO loss <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|>into a straightforward noise matching loss.
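Concretely, with $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$, $\epsilon \sim \mathcal{N}(0, \mathbf{I})$, and $\bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s)$, the simplified objective takes the standard form
\[
\mathcal{L}_{\text{simple}} = \mathbb{E}_{x_0,\, \epsilon,\, t}\left[\,\left\| \epsilon - \epsilon_\theta(x_t, t) \right\|_2^2\,\right].
\]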
On the other hand, in <|cite_start|> (Reference: Score-Based Generative Modeling through Stochastic Differential Equations: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.) <|cite_end|>, it was proposed that the forward process of the diffusion model can be transformed into a Stochastic Differential Equation (SDE). They also identified a corresponding reverse SDE for the reverse process of the diffusion model. Furthermore, it was revealed that the noise prediction is equivalent, up to a scale factor, to the score of the data distribution, $\nabla_x \log p(x)$. Recently, Classifier-Free Guidance (CFG) <|cite_start|> (Reference: Classifier-Free Diffusion Guidance: Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models. Classifier guidance combines the score estimate of a diffusion model with the gradient of an image classifier and thereby requires training an image classifier separate from the diffusion model. It also raises the question of whether guidance can be performed without a classifier. We show that guidance can be indeed performed by a pure generative model without such a classifier: in what we call classifier-free guidance, we jointly train a conditional and an unconditional diffusion model, and we combine the resulting conditional and unconditional score estimates to attain a trade-off between sample quality and diversity similar to that obtained using classifier guidance.) <|cite_end|> has been introduced to guide the score toward a specific condition $c$. In the CFG sampling process, the score is represented as a linear combination of unconditional and conditional scores.
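In its standard form, the guided noise prediction is
\[
\tilde{\epsilon}_\theta(x_t, c) = \epsilon_\theta(x_t, \varnothing) + w\left(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\right),
\]
where $\varnothing$ denotes the null (unconditional) input and $w$ is the guidance scale; note that each sampling step therefore requires two network evaluations.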
As the proposed FRDiff is designed based on the temporal redundancy resulting from the iterative nature of the diffusion process, it can be applied to all the aforementioned methods, providing significant performance benefits regardless of their specific details.
\subsection{Diffusion Model Optimization}
Numerous studies have aimed to address the slow generation speed of the diffusion models. The primary factor of this drawback is the iterative denoising process that requires a large NFE. Consequently, many studies have focused on reducing NFE, which can be broadly categorized into two groups: zero-shot sampling <|cite_start|> (Reference: Denoising Diffusion Implicit Models: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.) <|cite_end|> <|cite_start|> (Reference: gDDIM: Generalized denoising diffusion implicit models: Our goal is to extend the denoising diffusion implicit model (DDIM) to general diffusion models~(DMs) besides isotropic diffusions. Instead of constructing a non-Markov noising process as in the original DDIM, we examine the mechanism of DDIM from a numerical perspective. We discover that the DDIM can be obtained by using some specific approximations of the score when solving the corresponding stochastic differential equation. We present an interpretation of the accelerating effects of DDIM that also explains the advantages of a deterministic sampling scheme over the stochastic one for fast sampling. Building on this insight, we extend DDIM to general DMs, coined generalized DDIM (gDDIM), with a small but delicate modification in parameterizing the score network. We validate gDDIM in two non-isotropic DMs: Blurring diffusion model (BDM) and Critically-damped Langevin diffusion model (CLD). We observe more than 20 times acceleration in BDM. In the CLD, a diffusion model by augmenting the diffusion process with velocity, our algorithm achieves an FID score of 2.26, on CIFAR10, with only 50 number of score function evaluations~(NFEs) and an FID score of 2.86 with only 27 NFEs. Code is available at https://github.com/qsh-zh/gDDIM) <|cite_end|> <|cite_start|> (Reference: Pseudo Numerical Methods for Diffusion Models on Manifolds: Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. 
Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. Our implementation is available at https://github.com/luping-liu/PNDM.) <|cite_end|> <|cite_start|> (Reference: Elucidating the Design Space of Diffusion-Based Generative Models: We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36.) <|cite_end|> <|cite_start|> (Reference: DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps: Diffusion probabilistic models (DPMs) are emerging powerful generative models. Despite their high-quality generation performance, DPMs still suffer from their slow sampling as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. Sampling from DPMs can be viewed alternatively as solving the corresponding diffusion ordinary differential equations (ODEs). In this work, we propose an exact formulation of the solution of diffusion ODEs. The formulation analytically computes the linear part of the solution, rather than leaving all terms to black-box ODE solvers as adopted in previous works. By applying change-of-variable, the solution can be equivalently simplified to an exponentially weighted integral of the neural network. Based on our formulation, we propose DPM-Solver, a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. DPM-Solver is suitable for both discrete-time and continuous-time DPMs without any further training. Experimental results show that DPM-Solver can generate high-quality samples in only 10 to 20 function evaluations on various datasets. We achieve 4.70 FID in 10 function evaluations and 2.87 FID in 20 function evaluations on the CIFAR10 dataset, and a $4\sim 16\times$ speedup compared with previous state-of-the-art training-free samplers on various datasets.) 
<|cite_end|> <|cite_start|> (Reference: Restart Sampling for Improving Generative Processes: Generative processes that involve solving differential equations, such as diffusion models, frequently necessitate balancing speed and quality. ODE-based samplers are fast but plateau in performance while SDE-based samplers deliver higher sample quality at the cost of increased sampling time. We attribute this difference to sampling errors: ODE-samplers involve smaller discretization errors while stochasticity in SDE contracts accumulated errors. Based on these findings, we propose a novel sampling algorithm called Restart in order to better balance discretization errors and contraction. The sampling method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy. Restart not only outperforms the previous best SDE results, but also accelerates the sampling speed by 10-fold / 2-fold on CIFAR-10 / ImageNet $64 \times 64$. In addition, it attains significantly better sample quality than ODE samplers within comparable sampling times. Moreover, Restart better balances text-image alignment/visual quality versus diversity than previous samplers in the large-scale text-to-image Stable Diffusion model pre-trained on LAION $512 \times 512$. Code is available at https://github.com/Newbeeer/diffusion_restart_sampling) <|cite_end|> <|cite_start|> (Reference: DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models: Diffusion probabilistic models (DPMs) have achieved impressive success in high-resolution image synthesis, especially in recent large-scale text-to-image generation applications. An essential technique for improving the sample quality of DPMs is guided sampling, which usually needs a large guidance scale to obtain the best sample quality. The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples. Although recent works propose dedicated high-order solvers and achieve a further speedup for sampling without guidance, their effectiveness for guided sampling has not been well-tested before. In this work, we demonstrate that previous high-order fast samplers suffer from instability issues, and they even become slower than DDIM when the guidance scale grows large. To further speed up guided sampling, we propose DPM-Solver++, a high-order solver for the guided sampling of DPMs. DPM-Solver++ solves the diffusion ODE with the data prediction model and adopts thresholding methods to keep the solution matches training data distribution. We further propose a multistep variant of DPM-Solver++ to address the instability issue by reducing the effective step size. Experiments show that DPM-Solver++ can generate high-quality samples within only 15 to 20 steps for guided sampling by pixel-space and latent-space DPMs.) <|cite_end|>, which apply optimization to the pre-trained model, and learning-based sampling <|cite_start|> (Reference: Progressive Distillation for Fast Sampling of Diffusion Models: Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. 
Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.) <|cite_end|> <|cite_start|> (Reference: On Distillation of Guided Diffusion Models: Classifier-free guided diffusion models have recently been shown to be highly effective at high-resolution image generation, and they have been widely used in large-scale diffusion frameworks including DALLE-2, Stable Diffusion and Imagen. However, a downside of classifier-free guided diffusion models is that they are computationally expensive at inference time since they require evaluating two diffusion models, a class-conditional model and an unconditional model, tens to hundreds of times. To deal with this limitation, we propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from: Given a pre-trained classifier-free guided model, we first learn a single model to match the output of the combined conditional and unconditional models, and then we progressively distill that model to a diffusion model that requires much fewer sampling steps. For standard diffusion models trained on the pixel-space, our approach is able to generate images visually comparable to that of the original model using as few as 4 sampling steps on ImageNet 64x64 and CIFAR-10, achieving FID/IS scores comparable to that of the original model while being up to 256 times faster to sample from. For diffusion models trained on the latent-space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps, accelerating inference by at least 10-fold compared to existing methods on ImageNet 256x256 and LAION datasets. We further demonstrate the effectiveness of our approach on text-guided image editing and inpainting, where our distilled model is able to generate high-quality results using as few as 2-4 denoising steps.) <|cite_end|> <|cite_start|> (Reference: Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). 
Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: https://latent-consistency-models.github.io/) <|cite_end|> <|cite_start|> (Reference: Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow: We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models to transport between two empirically observed distributions \pi_0 and \pi_1, hence providing a unified solution to generative modeling and domain transfer, among various other tasks involving distribution transport. The idea of rectified flow is to learn the ODE to follow the straight paths connecting the points drawn from \pi_0 and \pi_1 as much as possible. This is achieved by solving a straightforward nonlinear least squares optimization problem, which can be easily scaled to large models without introducing extra parameters beyond standard supervised learning. The straight paths are special and preferred because they are the shortest paths between two points, and can be simulated exactly without time discretization and hence yield computationally efficient models. We show that the procedure of learning a rectified flow from data, called rectification, turns an arbitrary coupling of \pi_0 and \pi_1 to a new deterministic coupling with provably non-increasing convex transport costs. In addition, recursively applying rectification allows us to obtain a sequence of flows with increasingly straight paths, which can be simulated accurately with coarse time discretization in the inference phase. In empirical studies, we show that rectified flow performs superbly on image generation, image-to-image translation, and domain adaptation. In particular, on image generation and translation, our method yields nearly straight flows that give high quality results even with a single Euler discretization step.) <|cite_end|>, which entails an additional fine-tuning.
Zero-shot sampling methods typically employ advanced Ordinary Differential Equation (ODE) solvers capable of maintaining generation quality even with a reduced NFE. For instance, DDIM <|cite_start|> (Reference: Denoising Diffusion Implicit Models: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.) <|cite_end|>successfully reduced NFE by extending the original DDPM to a non-Markovian setting and eliminating the stochastic process.
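For reference, the deterministic DDIM update (the $\eta = 0$ case) can be written as
\[
x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\,\hat{x}_0 + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(x_t, t),
\qquad
\hat{x}_0 = \frac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}},
\]
which permits taking large jumps between (possibly non-adjacent) time steps without injecting fresh noise.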
Furthermore, methods that utilize Pseudo Numerical methods <|cite_start|> (Reference: Pseudo Numerical Methods for Diffusion Models on Manifolds: Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. Our implementation is available at https://github.com/luping-liu/PNDM.) <|cite_end|>, Second-order methods <|cite_start|> (Reference: Elucidating the Design Space of Diffusion-Based Generative Models: We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36.) <|cite_end|>, and Semi-Linear structures <|cite_start|> (Reference: DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps: Diffusion probabilistic models (DPMs) are emerging powerful generative models. Despite their high-quality generation performance, DPMs still suffer from their slow sampling as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. Sampling from DPMs can be viewed alternatively as solving the corresponding diffusion ordinary differential equations (ODEs). In this work, we propose an exact formulation of the solution of diffusion ODEs. 
The formulation analytically computes the linear part of the solution, rather than leaving all terms to black-box ODE solvers as adopted in previous works. By applying change-of-variable, the solution can be equivalently simplified to an exponentially weighted integral of the neural network. Based on our formulation, we propose DPM-Solver, a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. DPM-Solver is suitable for both discrete-time and continuous-time DPMs without any further training. Experimental results show that DPM-Solver can generate high-quality samples in only 10 to 20 function evaluations on various datasets. We achieve 4.70 FID in 10 function evaluations and 2.87 FID in 20 function evaluations on the CIFAR10 dataset, and a $4\sim 16\times$ speedup compared with previous state-of-the-art training-free samplers on various datasets.) <|cite_end|> <|cite_start|> (Reference: DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models: Diffusion probabilistic models (DPMs) have achieved impressive success in high-resolution image synthesis, especially in recent large-scale text-to-image generation applications. An essential technique for improving the sample quality of DPMs is guided sampling, which usually needs a large guidance scale to obtain the best sample quality. The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples. Although recent works propose dedicated high-order solvers and achieve a further speedup for sampling without guidance, their effectiveness for guided sampling has not been well-tested before. In this work, we demonstrate that previous high-order fast samplers suffer from instability issues, and they even become slower than DDIM when the guidance scale grows large. To further speed up guided sampling, we propose DPM-Solver++, a high-order solver for the guided sampling of DPMs. DPM-Solver++ solves the diffusion ODE with the data prediction model and adopts thresholding methods to keep the solution matches training data distribution. We further propose a multistep variant of DPM-Solver++ to address the instability issue by reducing the effective step size. Experiments show that DPM-Solver++ can generate high-quality samples within only 15 to 20 steps for guided sampling by pixel-space and latent-space DPMs.) <|cite_end|>have been proposed to achieve better performance. Learning-based sampling finetunes the model to perform effectively with fewer NFE. For example, Progressive Distillation <|cite_start|> (Reference: Progressive Distillation for Fast Sampling of Diffusion Models: Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. 
On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.) <|cite_end|>distills a student model to achieve the same performance with half of NFE. Recently, the consistency model <|cite_start|> (Reference: Consistency Models: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.) <|cite_end|> <|cite_start|> (Reference: Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: https://latent-consistency-models.github.io/) <|cite_end|>successfully reduced NFE to 1-4 by predicting the trajectory of the ODE.
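At a high level, the consistency function $f_\theta$ is trained so that all points on the same probability-flow ODE trajectory map to the same output,
\[
f_\theta(x_t, t) = f_\theta(x_{t'}, t') \quad \text{for all } t, t' \text{ on the trajectory},
\]
subject to the boundary condition $f_\theta(x_\epsilon, \epsilon) = x_\epsilon$, which is what enables one-step or few-step generation.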
In addition, there are studies aimed at optimizing the backbone architecture of the diffusion model. These studies involve proposing new diffusion model structures <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|>, as well as lightweighting the model's operations through techniques such as pruning <|cite_start|> (Reference: Structural Pruning for Diffusion Models: Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative implications of Diffusion Probabilistic Models (DPMs). The impressive capability of these models, however, often entails significant computational overhead during both training and inference. To tackle this challenge, we present Diff-Pruning, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones, without the need for extensive re-training. The essence of Diff-Pruning is encapsulated in a Taylor expansion over pruned timesteps, a process that disregards non-contributory diffusion steps and ensembles informative gradients to identify important weights. Our empirical assessment, undertaken across several datasets highlights two primary benefits of our proposed method: 1) Efficiency: it enables approximately a 50\% reduction in FLOPs at a mere 10\% to 20\% of the original training expenditure; 2) Consistency: the pruned diffusion models inherently preserve generative behavior congruent with their pre-trained models. Code is available at \url{https://github.com/VainF/Diff-Pruning}.) <|cite_end|>, quantization <|cite_start|> (Reference: Q-Diffusion: Quantizing Diffusion Models: Diffusion models have achieved great success in image synthesis through iterative noise estimation using deep neural networks. However, the slow inference, high memory consumption, and computation intensity of the noise estimation model hinder the efficient adoption of diffusion models. 
Although post-training quantization (PTQ) is considered a go-to compression method for other tasks, it does not work out-of-the-box on diffusion models. We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture of the diffusion models, which compresses the noise estimation network to accelerate the generation process. We identify the key difficulty of diffusion model quantization as the changing output distributions of noise estimation networks over multiple time steps and the bimodal activation distribution of the shortcut layers within the noise estimation network. We tackle these challenges with timestep-aware calibration and split shortcut quantization in this work. Experimental results show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance (small FID change of at most 2.34 compared to >100 for traditional PTQ) in a training-free manner. Our approach can also be applied to text-guided image generation, where we can run stable diffusion in 4-bit weights with high generation quality for the first time.) <|cite_end|> <|cite_start|> (Reference: Temporal Dynamic Quantization for Diffusion Models: The diffusion model has gained popularity in vision applications due to its remarkable generative performance and versatility. However, high storage and computation demands, resulting from the model size and iterative generation, hinder its use on mobile devices. Existing quantization techniques struggle to maintain performance even in 8-bit precision due to the diffusion model's unique property of temporal variation in activation. We introduce a novel quantization method that dynamically adjusts the quantization interval based on time step information, significantly improving output quality. Unlike conventional dynamic quantization techniques, our approach has no computational overhead during inference and is compatible with both post-training quantization (PTQ) and quantization-aware training (QAT). Our extensive experiments demonstrate substantial improvements in output quality with the quantized diffusion model across various datasets.) <|cite_end|>, and attention acceleration | [
"<|reference_start|> Pseudo Numerical Methods for Diffusion Models on Manifolds: Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. Our implementation is available at https://github.com/luping-liu/PNDM. <|reference_end|>",
"<|reference_start|> Progressive Distillation for Fast Sampling of Diffusion Models: Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time. <|reference_end|>",
"<|reference_start|> Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. <|reference_end|>",
"<|reference_start|> Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: https://latent-consistency-models.github.io/ <|reference_end|>"
] | [
21,
23,
29,
41
] | {"<|multi_cite_1_1|>": "arxiv-323257", "<|multi_cite_1_2|>": "arxiv-388766", "<|multi_cite_1_3|>": "arxiv-421636", "<|multi_cite_1_4|>": "arxiv-520857", "<|multi_cite_2_1|>": "arxiv-504167", "<|multi_cite_2_2|>": "ss-1355089", "<|multi_cite_3_1|>": "arxiv-462992", "<|multi_cite_3_2|>": "arxiv-497072", "<|multi_cite_3_3|>": "arxiv-464281", "<|multi_cite_3_4|>": "arxiv-493058", "<|multi_cite_3_5|>": "arxiv-481799", "<|multi_cite_4_1|>": "arxiv-481441", "<|multi_cite_4_2|>": "arxiv-484056", "<|multi_cite_4_3|>": "arxiv-530758", "<|multi_cite_4_4|>": "arxiv-480705", "<|multi_cite_4_5|>": "arxiv-524956", "<|multi_cite_4_6|>": "arxiv-489680", "<|multi_cite_4_7|>": "arxiv-482019", "<|cite_5|>": "arxiv-62064", "<|cite_6|>": "arxiv-294169", "<|multi_cite_7_1|>": "arxiv-294169", "<|multi_cite_7_2|>": "arxiv-400333", "<|multi_cite_7_3|>": "arxiv-306081", "<|multi_cite_8_1|>": "arxiv-396254", "<|multi_cite_8_2|>": "arxiv-451681", "<|multi_cite_8_3|>": "arxiv-485714", "<|multi_cite_8_4|>": "arxiv-444717", "<|cite_9|>": "arxiv-74487", "<|cite_10|>": "arxiv-273164", "<|cite_11|>": "arxiv-54350", "<|cite_12|>": "arxiv-306081", "<|cite_13|>": "arxiv-436366", "<|multi_cite_14_1|>": "arxiv-294169", "<|multi_cite_14_2|>": "arxiv-426386", "<|multi_cite_14_3|>": "arxiv-400333", "<|multi_cite_14_4|>": "arxiv-424010", "<|multi_cite_14_5|>": "arxiv-424264", "<|multi_cite_14_6|>": "arxiv-518716", "<|multi_cite_14_7|>": "arxiv-459159", "<|multi_cite_15_1|>": "arxiv-396254", "<|multi_cite_15_2|>": "arxiv-451681", "<|multi_cite_15_3|>": "arxiv-546336", "<|multi_cite_15_4|>": "arxiv-444717", "<|cite_16|>": "arxiv-294169", "<|cite_17|>": "arxiv-400333", "<|cite_18|>": "arxiv-424010", "<|multi_cite_19_1|>": "arxiv-424264", "<|multi_cite_19_2|>": "arxiv-459159", "<|cite_20|>": "arxiv-396254", "<|multi_cite_21_1|>": "arxiv-485714", "<|multi_cite_21_2|>": "arxiv-546336", "<|multi_cite_22_1|>": "arxiv-388766", "<|cite_23|>": "arxiv-506081", "<|multi_cite_24_1|>": "arxiv-480141", "<|multi_cite_24_2|>": "arxiv-512572", "<|cite_25|>": "arxiv-493424", "<|cite_26|>": "arxiv-77905", "<|cite_27|>": "arxiv-88870", "<|cite_28|>": "arxiv-126595"} |
2406.06140 | <|paper_start|> Title: Can I understand what I create? Self-Knowledge Evaluation of Large Language Models
Abstract: Can I understand what I create? Self-Knowledge Evaluation of Large Language Models: Large language models (LLMs) have achieved remarkable progress in linguistic tasks, necessitating robust evaluation frameworks to understand their capabilities and limitations. Inspired by Feynman's principle of understanding through creation, we introduce a self-knowledge evaluation framework that is easy to implement, evaluating models on their ability to comprehend and respond to self-generated questions. Our findings, based on testing multiple models across diverse tasks, reveal significant gaps in the models' self-knowledge ability. Further analysis indicates that these gaps may be due to misalignment with human attention mechanisms. Additionally, fine-tuning on a self-generated math task may enhance the model's math performance, highlighting the potential of the framework for efficient and insightful model evaluation; it may also contribute to the improvement of LLMs.
Introduction
In recent years, large language models (LLMs) have reached groundbreaking milestones, significantly advancing in areas such as semantic understanding, sentence translation, and more <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) <|cite_end|> <|cite_start|> (Reference: Llama 2: Open Foundation and Fine-Tuned Chat Models: In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.) <|cite_end|> <|cite_start|> (Reference: PaLM 2 Technical Report: We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities. When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.) 
<|cite_end|> <|cite_start|> (Reference: Gemini: A Family of Highly Capable Multimodal Models: This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of the Gemini family in cross-modal reasoning and language understanding will enable a wide variety of use cases. We discuss our approach toward post-training and deploying Gemini models responsibly to users through services including Gemini, Gemini Advanced, Google AI Studio, and Cloud Vertex AI.) <|cite_end|>. These models not only facilitate enhanced interaction between computers and human language but also drive innovation across numerous applications. However, as these models become increasingly central to technological advancements and their applications more widespread, it is crucial to establish robust, systematic evaluation frameworks. Such frameworks are essential not only for understanding the full spectrum of capabilities these models possess but also for identifying their limitations and potential biases.
The evaluation of large language models has made significant strides in recent years, with researchers developing numerous benchmarks aimed at testing various aspects of model performance <|cite_start|> (Reference: Measuring Massive Multitask Language Understanding: We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.) <|cite_end|> <|cite_start|> (Reference: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models: Evaluating the general abilities of foundation models to tackle human-level tasks is a vital aspect of their development and application in the pursuit of Artificial General Intelligence (AGI). Traditional benchmarks, which rely on artificial datasets, may not accurately represent human-level capabilities. In this paper, we introduce AGIEval, a novel benchmark specifically designed to assess foundation model in the context of human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We evaluate several state-of-the-art foundation models, including GPT-4, ChatGPT, and Text-Davinci-003, using this benchmark. Impressively, GPT-4 surpasses average human performance on SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and a 92.5% accuracy on the English test of the Chinese national college entrance exam. This demonstrates the extraordinary performance of contemporary foundation models. In contrast, we also find that GPT-4 is less proficient in tasks that require complex reasoning or specific domain knowledge. Our comprehensive analyses of model capabilities (understanding, knowledge, reasoning, and calculation) reveal these models' strengths and limitations, providing valuable insights into future directions for enhancing their general capabilities. By concentrating on tasks pertinent to human cognition and decision-making, our benchmark delivers a more meaningful and robust evaluation of foundation models' performance in real-world scenarios. The data, code, and all model outputs are released in https://github.com/ruixiangcui/AGIEval.) <|cite_end|>. However, the current evaluation methods still have notable shortcomings. Firstly, most benchmarks require substantial human and material resources and often necessitate the involvement of domain experts to accurately assess correctness. Secondly, evaluations that measure a large model's capability through self-evaluation of its own knowledge is less explored. 
This gap highlights the need for developing more efficient and insightful evaluation techniques that not only reduce the dependency on extensive resources but also enhance the models' ability to evaluate their own performance and limitations.
Motivated by Richard Feynman's famous quote, ``What I cannot create, I do not understand,'' we would like to evaluate a large language model's capability through its ``reverse version'': does the model really understand the questions and solutions it has created itself? We term this the self-knowledge of the model. A \emph{truthful} human realizes this capability with ease, since the originator of a question and its corresponding answer should be able to respond consistently and without difficulty when asked the same question by others, provided they truly comprehend this knowledge. This ease comes naturally from being the initial creator of the question, so when evaluated on a benchmark generated in this way, a self-knowledgeable model should easily achieve an accuracy of nearly $100 \%$.
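To make this protocol concrete, the following minimal sketch (our illustration, not code from any of the works cited here) shows one way such a self-knowledge evaluation loop could be implemented. The \texttt{query\_model} helper is a hypothetical placeholder for whichever LLM API is under test, and exact-match scoring is only one of several possible consistency checks (an LLM judge could be substituted).
\begin{verbatim}
# Minimal self-knowledge evaluation loop (illustrative sketch).
# `query_model` is a hypothetical placeholder for the LLM API under test.

def query_model(prompt: str) -> str:
    """Placeholder: wrap the API call of the model being evaluated here."""
    raise NotImplementedError

def self_knowledge_accuracy(task_description: str, num_items: int = 100) -> float:
    correct = 0
    for _ in range(num_items):
        # Step 1: the model creates a question together with its own answer.
        qa = query_model(
            f"Create one {task_description} question and give its correct answer.\n"
            "Use the format:\nQuestion: ...\nAnswer: ..."
        )
        question = qa.split("Answer:")[0].replace("Question:", "").strip()
        reference = qa.split("Answer:")[-1].strip()
        # Step 2: the same model is asked the question it just created.
        response = query_model(f"Question: {question}\nAnswer:").strip()
        # Step 3: score self-consistency (exact match; a judge model could be used).
        correct += int(response == reference)
    return correct / num_items
\end{verbatim}
Under this procedure, a model with genuine self-knowledge is expected to score close to $100 \%$, which is precisely the property the benchmark probes.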
In this paper, we provide a novel framework that evaluates a model's self-knowledge ability and is very \textbf{easy to implement}. We conduct an extensive evaluation of 7 popular LLMs across 9 tasks, including counting words, math, and theorem proving. We also evaluate large multi-modal models (LMMs). We summarize some of our findings as follows:
\begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=0.5cm]
\item We find that modern LLMs and LMMs perform unsatisfactorily on self-knowledge evaluations, with scores far from perfect.
\item By analyzing a designated word-counting task, we find that models become more similar to human-inspired attention-based mechanisms as they attain higher self-knowledge scores. The poor self-knowledge performance may be explained by the \emph{additive effect} of misalignment with this attention-based mechanism and the fact that LLM attention is less concentrated than human attention.
\item We find that only GPT-4 and Gemma achieve $100 \%$ accuracy when the question-generating process is given in context, and that their accuracy drops when noisy content is added to the context. GPT-4's accuracy drops less than Gemma's, making its behaviour more human-like than that of the other models.
\item We find that fine-tuning on the data generated by the self-knowledge math task may improve performance on GSM-8k.
\item We find that expert-based prompts usually improve self-knowledge ability, whereas chain-of-thought prompting usually does not.
\end{itemize}
Related Work
\textbf{Evaluation of large generative models.}
Recent years have seen significant advancements in the development of large generative models, including large vision models (LVMs) <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> <|cite_start|> (Reference: Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.) <|cite_end|>, large language models (LLMs) <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) 
<|cite_end|> <|cite_start|> (Reference: Llama 2: Open Foundation and Fine-Tuned Chat Models: In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.) <|cite_end|> <|cite_start|> (Reference: Qwen Technical Report: Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.) <|cite_end|> <|cite_start|> (Reference: Gemma: Open Models Based on Gemini Research and Technology: This work introduces Gemma, a family of lightweight, state-of-the art open models built from the research and technology used to create Gemini models. Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.) <|cite_end|> <|cite_start|> (Reference: Mistral 7B: We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. 
We also provide a model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses the Llama 2 13B -- Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.) <|cite_end|>, and their evolution into large multi-modal models (LMMs) <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) <|cite_end|> <|cite_start|> (Reference: Visual Instruction Tuning: Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.Our early experiments show that LLaVA demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.) <|cite_end|> <|cite_start|> (Reference: MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models: The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using one projection layer. Our work, for the first time, uncovers that properly aligning the visual features with an advanced large language model can possess numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. 
Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, teaching users how to cook based on food photos, and so on. In our experiment, we found that the model trained on short image caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in the second stage to finetune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.) <|cite_end|> <|cite_start|> (Reference: InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4: Multimodal large language models are typically trained in two stages: first pre-training on image-text pairs, and then fine-tuning using supervised vision-language instruction data. Recent studies have shown that large language models can achieve satisfactory results even with a limited amount of high-quality instruction-following data. In this paper, we introduce InstructionGPT-4, which is fine-tuned on a small dataset comprising only 200 examples, amounting to approximately 6\% of the instruction-following data used in the alignment dataset for MiniGPT-4. To achieve this, we first propose several metrics to access the quality of multimodal instruction data. Based on these metrics, we present an effective and trainable data selector to automatically identify and filter low-quality vision-language data. By employing this method, InstructionGPT-4 outperforms the original MiniGPT-4 on various evaluations. Overall, our findings demonstrate that less but high-quality instruction tuning data is efficient in enabling multimodal large language models to generate better output. Our code is available at https://github.com/waltonfuture/InstructionGPT-4.) <|cite_end|> <|cite_start|> (Reference: Generating Images with Multimodal Language Models: We propose a method to fuse frozen text-only large language models (LLMs) with pre-trained image encoder and decoder models, by mapping between their embedding spaces. Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue. Ours is the first approach capable of conditioning on arbitrarily interleaved image and text inputs to generate coherent image (and text) outputs. To achieve strong performance on image generation, we propose an efficient mapping network to ground the LLM to an off-the-shelf text-to-image generation model. This mapping network translates hidden representations of text into the embedding space of the visual models, enabling us to leverage the strong text representations of the LLM for visual outputs. Our approach outperforms baseline generation models on tasks with longer and more complex language. In addition to novel image generation, our model is also capable of image retrieval from a prespecified dataset, and decides whether to retrieve or generate at inference time. This is done with a learnt decision module which conditions on the hidden representations of the LLM. Our model exhibits a wider range of capabilities compared to prior multimodal language models. It can process image-and-text inputs, and produce retrieved images, generated images, and generated text -- outperforming non-LLM based generation models across several text-to-image tasks that measure context dependence.) 
<|cite_end|> <|cite_start|> (Reference: Making LLaMA SEE and Draw with SEED Tokenizer: The great success of Large Language Models (LLMs) has expanded the potential of multimodality, contributing to the gradual evolution of General Artificial Intelligence (AGI). A true AGI agent should not only possess the capability to perform predefined multi-tasks but also exhibit emergent abilities in an open-world context. However, despite the considerable advancements made by recent multimodal LLMs, they still fall short in effectively unifying comprehension and generation tasks, let alone open-world emergent abilities. We contend that the key to overcoming the present impasse lies in enabling text and images to be represented and processed interchangeably within a unified autoregressive Transformer. To this end, we introduce SEED, an elaborate image tokenizer that empowers LLMs with the ability to SEE and Draw at the same time. We identify two crucial design principles: (1) Image tokens should be independent of 2D physical patch positions and instead be produced with a 1D causal dependency, exhibiting intrinsic interdependence that aligns with the left-to-right autoregressive prediction mechanism in LLMs. (2) Image tokens should capture high-level semantics consistent with the degree of semantic abstraction in words, and be optimized for both discriminativeness and reconstruction during the tokenizer training phase. With SEED tokens, LLM is able to perform scalable multimodal autoregression under its original training recipe, i.e., next-word prediction. SEED-LLaMA is therefore produced by large-scale pretraining and instruction tuning on the interleaved textual and visual data, demonstrating impressive performance on a broad range of multimodal comprehension and generation tasks. More importantly, SEED-LLaMA has exhibited compositional emergent abilities such as multi-turn in-context multimodal generation, acting like your AI assistant.) <|cite_end|>, demonstrating near-human proficiency and even a spark of AGI. Evaluation of these large generative models is a fast-evolving field across various tasks, datasets, and benchmarks <|cite_start|> (Reference: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models: Evaluating the general abilities of foundation models to tackle human-level tasks is a vital aspect of their development and application in the pursuit of Artificial General Intelligence (AGI). Traditional benchmarks, which rely on artificial datasets, may not accurately represent human-level capabilities. In this paper, we introduce AGIEval, a novel benchmark specifically designed to assess foundation model in the context of human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We evaluate several state-of-the-art foundation models, including GPT-4, ChatGPT, and Text-Davinci-003, using this benchmark. Impressively, GPT-4 surpasses average human performance on SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and a 92.5% accuracy on the English test of the Chinese national college entrance exam. This demonstrates the extraordinary performance of contemporary foundation models. In contrast, we also find that GPT-4 is less proficient in tasks that require complex reasoning or specific domain knowledge. 
Our comprehensive analyses of model capabilities (understanding, knowledge, reasoning, and calculation) reveal these models' strengths and limitations, providing valuable insights into future directions for enhancing their general capabilities. By concentrating on tasks pertinent to human cognition and decision-making, our benchmark delivers a more meaningful and robust evaluation of foundation models' performance in real-world scenarios. The data, code, and all model outputs are released in https://github.com/ruixiangcui/AGIEval.) <|cite_end|> <|cite_start|> (Reference: MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI: We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span 30 subjects and 183 subfields, comprising 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures. Unlike existing benchmarks, MMMU focuses on advanced perception and reasoning with domain-specific knowledge, challenging models to perform tasks akin to those faced by experts. The evaluation of 14 open-source LMMs as well as the proprietary GPT-4V(ision) and Gemini highlights the substantial challenges posed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve accuracies of 56% and 59% respectively, indicating significant room for improvement. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence.) <|cite_end|> <|cite_start|> (Reference: MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models: Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization. The data application manner and online leaderboards are released at https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation.) <|cite_end|> <|cite_start|> (Reference: Large Language Model Evaluation via Matrix Entropy: Large language models (LLMs) have revolutionized the field of natural language processing, extending their strong capabilities into multi-modal domains. 
Thus, it is vital to define proper and diversified metrics for the evaluation of LLMs. In this paper, we introduce matrix entropy, a novel metric rooted in information theory and geometry principles to quantify the data compression proficiency in LLMs. It reflects the model's ability to extract relevant information and eliminate unnecessary elements, thereby providing insight into the language model's intrinsic capability. Specifically, we demonstrate its applicability in both single-modal (language) and multi-modal settings. For language models, our findings reveal that the matrix entropy of representations follows a scaling law type reduction when the model scales up, serving as a complement to the traditional loss scaling law. For the multi-modal setting, we also propose an evaluation method based on matrix entropy for assessing alignment quality and we find that modern large multi-modal models exhibit great alignment performance.) <|cite_end|> <|cite_start|> (Reference: TrustLLM: Trustworthiness in Large Language Models: Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring the trustworthiness of LLMs emerges as an important topic. This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. Our findings firstly show that in general trustworthiness and utility (i.e., functional effectiveness) are positively related. Secondly, our observations reveal that proprietary LLMs generally outperform most open-source counterparts in terms of trustworthiness, raising concerns about the potential risks of widely accessible open-source LLMs. However, a few open-source LLMs come very close to proprietary ones. Thirdly, it is important to note that some LLMs may be overly calibrated towards exhibiting trustworthiness, to the extent that they compromise their utility by mistakenly treating benign prompts as harmful and consequently not responding. Finally, we emphasize the importance of ensuring transparency not only in the models themselves but also in the technologies that underpin trustworthiness. Knowing the specific trustworthy technologies that have been employed is crucial for analyzing their effectiveness.) <|cite_end|>. It encompasses a wide range of domains, including the generation of language, images, videos, and audio. However, there is a lack of evaluations that measure a large generative model’s self-knowledge of its own capabilities. Specifically, we focus on the self-knowledge evaluation of LLMs that can understand instruction and output responses, as well as LMMs that can both understand images and generate images.
\textbf{Evaluation of LLM's instruction-following ability.}
Several studies have established benchmarks for evaluating LLMs’
instruction-following abilities. <|cite_start|> (Reference: FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models: The ability to follow instructions is crucial for Large Language Models (LLMs) to handle various real-world applications. Existing benchmarks primarily focus on evaluating pure response quality, rather than assessing whether the response follows constraints stated in the instruction. To fill this research gap, in this paper, we propose FollowBench, a Multi-level Fine-grained Constraints Following Benchmark for LLMs. FollowBench comprehensively includes five different types (i.e., Content, Situation, Style, Format, and Example) of fine-grained constraints. To enable a precise constraint following estimation on diverse difficulties, we introduce a Multi-level mechanism that incrementally adds a single constraint to the initial instruction at each increased level. To assess whether LLMs' outputs have satisfied every individual constraint, we propose to prompt strong LLMs with constraint-evolution paths to handle challenging open-ended instructions. By evaluating 13 closed-source and open-source popular LLMs on FollowBench, we highlight the weaknesses of LLMs in instruction following and point towards potential avenues for future work. The data and code are publicly available at https://github.com/YJiangcm/FollowBench.) <|cite_end|> proposed FollowBench, which sequentially adds fine-grained constraints to construct multi-level instructions. <|cite_start|> (Reference: Instruction-Following Evaluation for Large Language Models: One core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensive, slow, and not objectively reproducible, while LLM-based auto-evaluation is potentially biased or limited by the ability of the evaluator LLM. To overcome these issues, we introduce Instruction-Following Eval (IFEval) for large language models. IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of "verifiable instructions" such as "write in more than 400 words" and "mention the keyword of AI at least 3 times". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions. We show evaluation results of two widely available LLMs on the market. Our code and data can be found at https://github.com/google-research/google-research/tree/master/instruction_following_eval) <|cite_end|> emphasized objective evaluations with verifiable instructions.
Meanwhile, <|cite_start|> (Reference: InFoBench: Evaluating Instruction Following Ability in Large Language Models: This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating Large Language Models' (LLMs) ability to follow instructions. Addressing a gap in current methodologies, DRFR breaks down complex instructions into simpler criteria, facilitating a detailed analysis of LLMs' compliance with various aspects of tasks. Alongside this metric, we present InFoBench, a benchmark comprising 500 diverse instructions and 2,250 decomposed questions across multiple constraint categories. Our experiments compare DRFR with traditional scoring methods and explore annotation sources, including human experts, crowd-sourced workers, and GPT-4. The findings demonstrate DRFR's higher reliability and the effectiveness of using GPT-4 as a cost-efficient annotator. The evaluation of several advanced LLMs using this framework reveals their strengths and areas needing improvement, particularly in complex instruction-following. This study contributes a novel metric and benchmark, offering insights for future LLM development and evaluation.) <|cite_end|> constructed a benchmark composed of several distinct instructions and decomposed questions for the assessment of the instruction following.
These benchmarks require manually constructing a large number of instructions and answers. Differently, our work mainly focuses on the large model’s self-knowledge of its own capabilities, which is also independent of collecting additional annotated answers. <|paper_end|> | [
"<|reference_start|> Gemini: A Family of Highly Capable Multimodal Models: This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of the Gemini family in cross-modal reasoning and language understanding will enable a wide variety of use cases. We discuss our approach toward post-training and deploying Gemini models responsibly to users through services including Gemini, Gemini Advanced, Google AI Studio, and Cloud Vertex AI. <|reference_end|>",
"<|reference_start|> Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP. <|reference_end|>",
"<|reference_start|> GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4. <|reference_end|>",
"<|reference_start|> InFoBench: Evaluating Instruction Following Ability in Large Language Models: This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating Large Language Models' (LLMs) ability to follow instructions. Addressing a gap in current methodologies, DRFR breaks down complex instructions into simpler criteria, facilitating a detailed analysis of LLMs' compliance with various aspects of tasks. Alongside this metric, we present InFoBench, a benchmark comprising 500 diverse instructions and 2,250 decomposed questions across multiple constraint categories. Our experiments compare DRFR with traditional scoring methods and explore annotation sources, including human experts, crowd-sourced workers, and GPT-4. The findings demonstrate DRFR's higher reliability and the effectiveness of using GPT-4 as a cost-efficient annotator. The evaluation of several advanced LLMs using this framework reveals their strengths and areas needing improvement, particularly in complex instruction-following. This study contributes a novel metric and benchmark, offering insights for future LLM development and evaluation. <|reference_end|>"
] | [
3,
6,
8,
26
] | {"<|multi_cite_1_1|>": "arxiv-489148", "<|multi_cite_1_2|>": "arxiv-524224", "<|multi_cite_1_3|>": "arxiv-505787", "<|multi_cite_1_4|>": "arxiv-569452", "<|multi_cite_9_1|>": "arxiv-288666", "<|multi_cite_9_3|>": "arxiv-496723", "<|multi_cite_2_1|>": "arxiv-323919", "<|multi_cite_2_2|>": "arxiv-494904", "<|multi_cite_3_1|>": "arxiv-489148", "<|multi_cite_3_2|>": "arxiv-524224", "<|multi_cite_3_3|>": "arxiv-543621", "<|multi_cite_3_4|>": "arxiv-595060", "<|multi_cite_3_5|>": "arxiv-547654", "<|multi_cite_4_1|>": "arxiv-489148", "<|multi_cite_4_2|>": "arxiv-497716", "<|multi_cite_4_3|>": "arxiv-498672", "<|multi_cite_4_4|>": "arxiv-533429", "<|multi_cite_4_5|>": "arxiv-509731", "<|multi_cite_4_6|>": "arxiv-544746", "<|multi_cite_5_1|>": "arxiv-496723", "<|multi_cite_5_2|>": "arxiv-562454", "<|multi_cite_5_3|>": "arxiv-518010", "<|multi_cite_5_4|>": "arxiv-580110", "<|multi_cite_5_5|>": "arxiv-574618", "<|cite_6|>": "arxiv-554416", "<|cite_7|>": "arxiv-558257", "<|cite_8|>": "arxiv-573800"} |
1101.5335 | <|paper_start|> Title: Accurate Performance Analysis of Opportunistic Decode-and-Forward Relaying
Abstract: Accurate Performance Analysis of Opportunistic Decode-and-Forward Relaying: In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive statistics based on the exact probability density function (PDF) of each hop. Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that simulation results coincide with our analytical results over different network architectures.
Introduction
\label{Intro}In many wireless applications, users may not be able to support multiple antennas due to size, complexity, power, or other constraints. The wireless medium brings along its unique challenges such as fading and multiuser interference. This can be mitigated with cooperative diversity <|cite_start|> (Reference: Distributed Space-Time-Coded Protocols for Exploiting Cooperative Diversity in Wireless Networks: We develop and analyze space-time coded cooperative diversity protocols for combating multipath fading across multiple protocol layers in a wireless network. The protocols exploit spatial diversity available among a collection of distributed terminals that relay messages for one another in such a manner that the destination terminal can average the fading, even though it is unknown a priori which terminals will be involved. In particular, a source initiates transmission to its destination, and many relays potentially receive the transmission. Those terminals that can fully decode the transmission utilize a space-time code to cooperatively relay to the destination. We demonstrate that these protocols achieve full spatial diversity in the number of cooperating terminals, not just the number of decoding relays, and can be used effectively for higher spectral efficiencies than repetition-based schemes. We discuss issues related to space-time code design for these protocols, emphasizing codes that readily allow for appealing distributed versions.) <|cite_end|> <|cite_start|> (Reference: User cooperation diversity, Part I : System description: Mobile users' data rate and quality of service are limited by the fact that, within the duration of any given call, they experience severe variations in signal attenuation, thereby necessitating the use of some type of diversity. In this two-part paper, we propose a new form of spatial diversity, in which diversity gains are achieved via the cooperation of mobile users. Part I describes the user cooperation strategy, while Part II (see ibid., p.1939-48) focuses on implementation issues and performance analysis. Results show that, even though the interuser channel is noisy, cooperation leads not only to an increase in capacity for both users but also to a more robust system, where users' achievable rates are less susceptible to channel variations.) <|cite_end|>, which is becoming very attractive for small-size, antenna-limited wireless devices. Opportunistic relaying (OR) technique has been proposed where only the best relay from a set of $K$ available candidate relays is selected to cooperate <|cite_start|> (Reference: A Simple Cooperative Diversity Method Based on Network Path Selection: Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme, that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this best relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. 
This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M nodes is required, such as those proposed in [7]. The simplicity of the technique, allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability and efficiency in future 4G wireless systems.) <|cite_end|> <|cite_start|> (Reference: Full length article: Wireless transmission using cooperation on demand: ) <|cite_end|> <|cite_start|> (Reference: Max-min relay selection for legacy amplify-and-forward systems with interference: In this paper, an amplify-and-forward (AF) cooperative strategy for interference limited networks is considered. In contrast to previously reported work, where the effect of interference is ignored, the effect of multi-user interference in AF schemes is analyzed. It is shown that the interference changes the statistical description of the conventional AF protocol and a statistical expression is subsequently derived. Asymptotic analysis of the expression shows that interference limits the diversity gain of the system and the related channel capacity is bounded by a stationary point. In addition, it is proven that previously proposed relay selection criteria for multi-relay scenarios become inefficient in the presence of interference. Based on consideration of the interference term, two extensions to the conventional max-min selection scheme suitable for different system setups are proposed. The extensions investigated are appropriate for legacy architectures with limitations on their flexibility where the max-min operation is pre-designed. A theoretical framework for selecting when to apply the proposed selection criteria is also presented. The algorithm investigated is based on some welldefined capacity approximations and incorporates the outage probabilities averaged over the fading statistics. Analytical results and simulation studies reveal enhancements of the proposed algorithm.) <|cite_end|> <|cite_start|> (Reference: Interference-limited opportunistic relaying with reactive sensing: This work evaluates opportunistic relaying in the presence of thermal noise as well as interference, when channel sensing is conducted reactively, in slow fading environments. The studied scenario employs a single gateway that provides access towards several destinations with weak links and exploits a network of intermediate relays. In sharp contrast to prior art, no inter-relay channel state information or communication is assumed, no network coding is needed, while low-complexity receivers at each destination are employed. It is shown that information can be relayed without delay, while harvesting benefits of cooperative diversity, even at the presence of interference. The participating relays are required to offer strong paths towards source and destination, while at the same time they are as "isolated" as possible from each other. From that perspective, the notion of relay "usefulness" is redefined in both noise and interference-limited environments, under opportunistic relaying.) <|cite_end|>. 
With this technique, the selection strategy is to choose the relay with the best equivalent end-to-end channel gain, which is calculated as the minimum of the channel gains of the first and the second hops under the decode-and-forward (DF) protocol, or as the harmonic mean of both channel gains under the amplify-and-forward (AF) protocol. However, some works have chosen the best relay-destination link as a possible selection criterion <|cite_start|> (Reference: Full length article: Wireless transmission using cooperation on demand: ) <|cite_end|> <|cite_start|> (Reference: Exact error probability and channel capacity of the best-relay cooperative-diversity networks: Cooperative diversity networks have recently been proposed as a way to form virtual antenna arrays without using collocated multiple antennas. In this paper, we consider adaptive decode-and-forward cooperative diversity system where a source node communicates with a destination node directly and indirectly (through multiple relays). In this letter, we investigate the performance of the best-relay selection scheme where the best relay only participates in the relaying. Therefore, two channels only are needed in this case (one for the direct link and the other one for the best indirect link) regardless of the total number of relays. The best relay is selected as the relay node that can achieve the highest signal-to-noise ratio at the destination node. We developed a general analytical model to analyze the performance of the adaptive decode-and-forward cooperative networks with best-relay selection. In particular, exact closed-form expressions for the error probability and Shannon capacity are derived over independent and nonidentical Rayleigh fading channels. Results show that the best-relay selection not only reduces the number of required channels but also can maintain a full diversity order.) <|cite_end|>.\\
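For clarity, these selection rules can be restated compactly; the following is our restatement (not taken verbatim from the cited works), written in terms of instantaneous per-hop SNRs, where $\gamma_{sr_k}$ and $\gamma_{r_kd}$ denote the SNRs of the first and second hops of candidate relay $r_k$, $k=1,\dots,K$:
\[
r_*^{\mathrm{DF}}=\arg\max_{k\in\{1,\dots,K\}}\ \min\left(\gamma_{sr_k},\gamma_{r_kd}\right),
\qquad
r_*^{\mathrm{AF}}=\arg\max_{k\in\{1,\dots,K\}}\ \frac{2\,\gamma_{sr_k}\,\gamma_{r_kd}}{\gamma_{sr_k}+\gamma_{r_kd}},
\]
whereas selecting on the relay-destination link alone corresponds to $r_*=\arg\max_{k}\ \gamma_{r_kd}$. The minimum reflects the fact that the weaker hop is the bottleneck of a regenerative (DF) link, while the harmonic mean is a commonly used proxy for the end-to-end SNR of an AF link.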
Previous works have largely focused on information theoretic aspects of OR and derived outage performance results of such systems. Some of these analyses are accurate only at high signal-to-noise ratio (SNR) <|cite_start|> (Reference: Cooperative diversity with opportunistic relaying: In this paper, we present single-selection-opportunistic-relaying with decode-and-forward (DaF) and amplify-and-forward (AaF) protocols under an aggregate power constraint. We show that opportunistic DaF relaying is equivalent to the outage bound of the optimal DaF strategy using all potential relays. We further show that opportunistic AaF relaying is outage-optimal with single-relay selection and significantly outperforms an AaF strategy with multiple-relay (MR) transmissions, in the presence of limited channel knowledge. These findings reveal that cooperative diversity benefits (under an aggregate power constraint) are useful even when cooperative relays choose not to transmit but rather choose to cooperatively listen; they act as passive relays and give priority to the transmission of a single opportunistic relay) <|cite_end|> <|cite_start|> (Reference: On the Design of Opportunistic Cooperative Transmission Strategies with Partial CSI Information: The aim of this paper is to study the impact of channel state information on the design of cooperative transmission protocols. This is motivated by the fact that the performance gain achieved by cooperative diversity comes at the price of the extra bandwidth resource consumption. Several opportunistic relaying strategies are developed to fully utilize the different types of a priori channel information. The information-theoretic measures such as outage probability and diversity-multiplexing tradeoff are developed for the proposed protocols. The analytical and numerical results demonstrate that the use of such a priori information increases the spectral efficiency of cooperative diversity, especially at low signal-to-noise ratio.) <|cite_end|> <|cite_start|> (Reference: Opportunistic cooperative diversity with feedback and cheap radios: Practical cooperative diversity protocols often rely on low-cost radios that treat multiple in-band signals as noise and thus require strictly orthogonal transmissions. We analyze the performance of a class of opportunistic relaying protocols that employ simple packet level feedback and strictly orthogonal transmissions. It is shown that the diversity-multiplexing tradeoff of the proposed protocols either matches or outperforms the multi-input-single-output (MISO), zero-feedback performance. These gains indicate that low complexity radios and feedback could be an appealing architecture for future user cooperation protocols.) <|cite_end|> <|cite_start|> (Reference: Asymptotic analysis of opportunistic relaying protocols: In this letter, we examine in detail a cooperative network with multiple relays. We investigate protocols that incorporate the opportunistic relaying technique, which selects the "best" relay among the M available relays. We evaluate the asymptotic outage performance of incremental amplify-and-forward (IAF) when it is extended to the opportunistic relaying scenario. Moreover, we propose two new protocols, namely opportunistic incremental selection AF and opportunistic joint incremental selection relaying, and derive the corresponding asymptotic outage probabilities. Finally, we compare the analytical asymptotic outage probabilities and the simulated ones.
We conclude that the OJISR protocol outperforms the other protocols.) <|cite_end|>. Particularly in <|cite_start|> (Reference: Cooperative diversity with opportunistic relaying: In this paper, we present single-selection-opportunistic-relaying with decode-and-forward (DaF) and amplify-and-forward (AaF) protocols under an aggregate power constraint. We show that opportunistic DaF relaying is equivalent to the outage bound of the optimal DaF strategy using all potential relays. We further show that opportunistic AaF relaying is outage-optimal with single-relay selection and significantly outperforms an AaF strategy with multiple-relay (MR) transmissions, in the presence of limited channel knowledge. These findings reveal that cooperative diversity benefits (under an aggregate power constraint) are useful even when cooperative relays choose not to transmit but rather choose to cooperatively listen; they act as passive relays and give priority to the transmission of a single opportunistic relay) <|cite_end|>, the end-to-end outage probability analysis of opportunistic relaying without direct link between source and destination nodes was presented. In addition, several works have considered the OR scheme under DF protocol in Rayleigh fading environment, where only the upper bound for the statistics of the best relay local SNR\footnote{The statistic refers to the probability density function (PDF) of the received SNR at the destination, called $\gamma_{r_*d}$, from the best relay $r_*$.} was obtained <|cite_start|> (Reference: Selective relaying in cooperative OFDM systems: two-hop random network: In this paper, we investigate two selective relaying schemes in cooperative OFDM systems. Selective OFDMA relaying, where the relay selection is performed in a per-subcarrier manner, and selective OFDM relaying, where one best relay among the L potential relays is selected to relay the entire OFDM block, are compared in a two-hop random network. The outage performance of equal bit allocation (EBA), where each subchannel has the same number of bits, and bit loading (BL), where bits are adaptively allocated to each subchannel, are analyzed and compared for these two approaches. The outage analysis clearly shows that a significant performance gain can be achieved by selective OFDMA relaying, whether EBA or BL is employed, compared with selective OFDM relaying. The performance gain remains the same for different relay locations. With EBA, the performance gain increases with an increase in L and N, the number of independent subchannels. For BL, the performance gain also increases with an increase in R, the average number of bits per subchannel, in addition to L and N. Centralized and decentralized implementation issues are also considered. For EBA, selective OFDMA relaying scheme is preferred because of its superior performance and simple decentralized implementation. For BL, selective OFDMA relaying scheme is a good choice for centralized systems and selective OFDM relaying is more suitable for decentralized systems at the expense of a loss in performance.) <|cite_end|> <|cite_start|> (Reference: On the Performance of Selection Relaying: Interest in selection relaying is growing. The recent developments in this area have largely focused on information theoretic analyses such as outage performance. Some of these analyses are accurate only at high SNR regimes. In this paper error rate analyses that are sufficiently accurate over a wide range of SNR regimes are provided. 
The motivations for this work are that practical systems operate at far lower SNR values than those supported by the high SNR analysis. To enable designers to make informed decisions regarding network design and deployment, it is imperative that system performance is evaluated with a reasonable degree of accuracy over practical SNR regimes. Simulations have been used to corroborate the analytical results, as close agreement between the two is observed.) <|cite_end|>. Moreover, performance analysis of single relay selection for DF protocols were proposed in <|cite_start|> (Reference: Performance analysis of single relay selection in Rayleigh fading: We provide closed-form expressions for the outage and bit error probability (BEP) of uncoded, threshold-based opportunistic relaying (OR) and selection cooperation (SC), at arbitrary signal to noise ratios (SNRs) and number of available relays, assuming decode-and-forward relays and Rayleigh fading channels. Numerical results demonstrate that SC performs slightly better in terms of outage probability; in terms of BEP, both systems may outperform one another, depending on the SNR threshold that determines the set of relays that participate in the forwarding process.) <|cite_end|> <|cite_start|> (Reference: On relay selection for decode-and-forward relaying: In this letter, we consider a multi-relay network operating in decode-and-forward mode. We propose a novel relay selection method with a low implementation complexity. Unlike the competing schemes, it requires neither error detection methods at relay nodes nor feedback information at the source. We derive a closed-form symbol error rate (SER) expression for multi-relay network under consideration and demonstrate that the proposed selection method is able to extract the full diversity. Extensive Monte Carlo simulations are also presented to confirm the derived SER expressions and to compare the performance of the proposed scheme with its competitors.) <|cite_end|> <|cite_start|> (Reference: Exact closed-form expressions for the outage probability and ergodic capacity of decode-and-forward opportunistic relaying: Exact statistics of the local signal-to-noise ratios (SNRs) of the best relay in decode-and-forward (DF) opportunistic relaying (ORe) are derived. It is observed that although the different links are assumed to suffer independent fadings, the best-relay local SNRs are dependent. Both joint and marginal statistics are determined for the general case of nonidentical SNR distributions, and a source-relay-symmetric (S-R-sym.), relay-destination-symmetric (R-D-sym.) case. Both general fading and Rayleigh fading cases are considered. Using the statistics derived, exact, closed-form expressions for the outage probability and ergodic capacity of DF ORe are calculated in the S-R-sym., RD-sym., Rayleigh fading case. The exact results for the outage probability show almost linearly increasing diversity order with the number of relays. The exact results for the ergodic capacity show a multiplexing gain almost equaling one half and a power gain increasing with the number of relays that exhibits diminishing returns.) <|cite_end|>. 
In <|cite_start|> (Reference: Performance analysis of single relay selection in Rayleigh fading: We provide closed-form expressions for the outage and bit error probability (BEP) of uncoded, threshold-based opportunistic relaying (OR) and selection cooperation (SC), at arbitrary signal to noise ratios (SNRs) and number of available relays, assuming decode-and-forward relays and Rayleigh fading channels. Numerical results demonstrate that SC performs slightly better in terms of outage probability; in terms of BEP, both systems may outperform one another, depending on the SNR threshold that determines the set of relays that participate in the forwarding process.) <|cite_end|>, Michalopoulos and Karagiannidis proposed closed-form expressions for the outage and bit error probability (BEP). However, the activated relay is selected from a decoding set, so that the input signal-to-noise ratio (SNR) is compared to a threshold before forwarding, and the diversity order was not derived explicitly. In <|cite_start|> (Reference: On relay selection for decode-and-forward relaying: In this letter, we consider a multi-relay network operating in decode-and-forward mode. We propose a novel relay selection method with a low implementation complexity. Unlike the competing schemes, it requires neither error detection methods at relay nodes nor feedback information at the source. We derive a closed-form symbol error rate (SER) expression for multi-relay network under consideration and demonstrate that the proposed selection method is able to extract the full diversity. Extensive Monte Carlo simulations are also presented to confirm the derived SER expressions and to compare the performance of the proposed scheme with its competitors.) <|cite_end|>, Fareed and Uysal considered a relay selection method in a DF multi-relay network where the selected relay cooperates only if the SNR of the source-destination (direct) link is less than the minimum of the channel gains of the first and the second hops. The authors proposed an approximated closed-form symbol error rate (SER) expression. Recently, Nikjah and Beaulieu in <|cite_start|> (Reference: Exact closed-form expressions for the outage probability and ergodic capacity of decode-and-forward opportunistic relaying: Exact statistics of the local signal-to-noise ratios (SNRs) of the best relay in decode-and-forward (DF) opportunistic relaying (ORe) are derived. It is observed that although the different links are assumed to suffer independent fadings, the best-relay local SNRs are dependent. Both joint and marginal statistics are determined for the general case of nonidentical SNR distributions, and a source-relay-symmetric (S-R-sym.), relay-destination-symmetric (R-D-sym.) case. Both general fading and Rayleigh fading cases are considered. Using the statistics derived, exact, closed-form expressions for the outage probability and ergodic capacity of DF ORe are calculated in the S-R-sym., RD-sym., Rayleigh fading case. The exact results for the outage probability show almost linearly increasing diversity order with the number of relays. The exact results for the ergodic capacity show a multiplexing gain almost equaling one half and a power gain increasing with the number of relays that exhibits diminishing returns.) <|cite_end|> offered the first exact performance analysis of opportunistic DF relaying. 
However, <|cite_start|> (Reference: Exact closed-form expressions for the outage probability and ergodic capacity of decode-and-forward opportunistic relaying: Exact statistics of the local signal-to-noise ratios (SNRs) of the best relay in decode-and-forward (DF) opportunistic relaying (ORe) are derived. It is observed that although the different links are assumed to suffer independent fadings, the best-relay local SNRs are dependent. Both joint and marginal statistics are determined for the general case of nonidentical SNR distributions, and a source-relay-symmetric (S-R-sym.), relay-destination-symmetric (R-D-sym.) case. Both general fading and Rayleigh fading cases are considered. Using the statistics derived, exact, closed-form expressions for the outage probability and ergodic capacity of DF ORe are calculated in the S-R-sym., RD-sym., Rayleigh fading case. The exact results for the outage probability show almost linearly increasing diversity order with the number of relays. The exact results for the ergodic capacity show a multiplexing gain almost equaling one half and a power gain increasing with the number of relays that exhibits diminishing returns.) <|cite_end|> focused on the outage probability and ergodic capacity performance metrics, and the end results were expressed in integral forms. Moreover, in , Chen~\textit{et al.} derived only an approximate symbol error probability (SEP) expression in integral form for opportunistic DF relaying.
\subsection{Contributions of this Paper}
\label{Contri}In this paper, we consider half-duplex DF-based cooperative two-hop communication in which opportunistic relaying is employed. The objective of this paper is not to revisit path selection, but to provide an accurate analysis that is valid over all SNR regimes. In particular, we determine exact closed-form expressions for the end-to-end bit error rate (BER) when the source may or may not be able to communicate directly with the destination due to shadowing. We also account for the important effect of possibly erroneously detected and forwarded data at the regenerative relay. Our analytical approach requires the probability density function (PDF) of the SNR received at and from the selected relay, denoted $\gamma_{sr_*}$ and $\gamma_{r_*d}$, respectively. To the best of our knowledge, such a performance analysis based on the exact statistics (in explicit form) of each hop has not been considered in the literature; using the newly derived exact statistics, we investigate the asymptotic error performance and find the diversity order of these systems.
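For concreteness, the diversity order referred to above can be understood through its standard textbook definition (stated here only for reference, not as a contribution of this paper),
\begin{equation}
d \;=\; -\lim_{\bar{\gamma}\rightarrow\infty} \frac{\log P_b(\bar{\gamma})}{\log \bar{\gamma}},
\end{equation}
where $P_b(\bar{\gamma})$ denotes the end-to-end BER at average SNR $\bar{\gamma}$; the asymptotic analysis then extracts this high-SNR slope from the derived closed-form BER expressions.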
\subsection{Organization of this Paper}
The remainder of this paper is organized as follows. In section~\ref{systmodel}, we introduce the system model and the statistics of each hop. In section~\ref{Perf}, the accurate closed form for the end-to-end BER is derived and the diversity order of each scheme is determined. Finally, the simulation results for symmetric and linear networks are depicted in section~\ref{simulations} while some concluding remarks are given in section~\ref{conclu}. <|paper_end|> | [
"<|reference_start|> Distributed Space-Time-Coded Protocols for Exploiting Cooperative Diversity in Wireless Networks: We develop and analyze space-time coded cooperative diversity protocols for combating multipath fading across multiple protocol layers in a wireless network. The protocols exploit spatial diversity available among a collection of distributed terminals that relay messages for one another in such a manner that the destination terminal can average the fading, even though it is unknown a priori which terminals will be involved. In particular, a source initiates transmission to its destination, and many relays potentially receive the transmission. Those terminals that can fully decode the transmission utilize a space-time code to cooperatively relay to the destination. We demonstrate that these protocols achieve full spatial diversity in the number of cooperating terminals, not just the number of decoding relays, and can be used effectively for higher spectral efficiencies than repetition-based schemes. We discuss issues related to space-time code design for these protocols, emphasizing codes that readily allow for appealing distributed versions. <|reference_end|>",
"<|reference_start|> A Simple Cooperative Diversity Method Based on Network Path Selection: Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme, that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this best relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M nodes is required, such as those proposed in [7]. The simplicity of the technique, allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability and efficiency in future 4G wireless systems. <|reference_end|>",
"<|reference_start|> Full length article: Wireless transmission using cooperation on demand: <|reference_end|>",
"<|reference_start|> Performance analysis of single relay selection in Rayleigh fading: We provide closed-form expressions for the outage and bit error probability (BEP) of uncoded, threshold-based opportunistic relaying (OR) and selection cooperation (SC), at arbitrary signal to noise ratios (SNRs) and number of available relays, assuming decode-and-forward relays and Rayleigh fading channels. Numerical results demonstrate that SC performs slightly better in terms of outage probability; in terms of BEP, both systems may outperform one another, depending on the SNR threshold that determines the set of relays that participate in the forwarding process. <|reference_end|>"
] | [
0,
2,
3,
15
] | {"<|multi_cite_1_1|>": "ss-1717874", "<|multi_cite_1_2|>": "ss-760607", "<|multi_cite_2_1|>": "arxiv-673442", "<|multi_cite_2_2|>": "ss-1030837", "<|multi_cite_2_3|>": "ss-990499", "<|multi_cite_2_4|>": "ss-1030838", "<|multi_cite_3_1|>": "ss-1030837", "<|multi_cite_3_2|>": "ss-1030451", "<|multi_cite_4_1|>": "ss-1716387", "<|multi_cite_4_2|>": "ss-1030839", "<|multi_cite_4_3|>": "ss-1716388", "<|multi_cite_4_4|>": "ss-1030840", "<|cite_5|>": "ss-1716387", "<|multi_cite_6_1|>": "ss-1015928", "<|multi_cite_6_2|>": "arxiv-4350", "<|multi_cite_7_1|>": "ss-1019100", "<|multi_cite_7_2|>": "ss-1024830", "<|multi_cite_7_3|>": "ss-2272286", "<|cite_8|>": "ss-1019100", "<|cite_9|>": "ss-1024830", "<|cite_10|>": "ss-2272286", "<|cite_11|>": "ss-2272286"} |
1906.03504 | <|paper_start|> Title: Convolutional Bipartite Attractor Networks
Abstract: Convolutional Bipartite Attractor Networks: In human perception and cognition, a fundamental operation that brains perform is interpretation: constructing coherent neural states from noisy, incomplete, and intrinsically ambiguous evidence. The problem of interpretation is well matched to an early and often overlooked architecture, the attractor network---a recurrent neural net that performs constraint satisfaction, imputation of missing features, and clean up of noisy data via energy minimization dynamics. We revisit attractor nets in light of modern deep learning methods and propose a convolutional bipartite architecture with a novel training loss, activation function, and connectivity constraints. We tackle larger problems than have been previously explored with attractor nets and demonstrate their potential for image completion and super-resolution. We argue that this architecture is better motivated than ever-deeper feedforward models and is a viable alternative to more costly sampling-based generative methods on a range of supervised and unsupervised tasks.
Introduction
\vspace{-.04in}
Under ordinary conditions, human visual perception is quick
and accurate. Studying circumstances that give rise to slow or
inaccurate perception can help reveal the underlying mechanisms
of visual information processing. Recent investigations of
occluded <|cite_start|> (Reference: Recurrent computations for visual pattern completion: Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology and computational models to test the hypothesis that pattern completion is implemented by recurrent computations and present three pieces of evidence that are consistent with this hypothesis. First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological responses along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared to whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. These results provide a strong argument of plausibility for the role of recurrent computations in making visual inferences from partial information.) <|cite_end|>
and empirically challenging scenes have led to the
conclusion that recurrent brain circuits can play a critical role in object recognition.
Further, recurrence can
improve the classification performance of deep nets <|cite_start|> (Reference: Recurrent computations for visual pattern completion: Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology and computational models to test the hypothesis that pattern completion is implemented by recurrent computations and present three pieces of evidence that are consistent with this hypothesis. First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological responses along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared to whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. These results provide a strong argument of plausibility for the role of recurrent computations in making visual inferences from partial information.) <|cite_end|> <|cite_start|> (Reference: Task-Driven Convolutional Recurrent Models of the Visual System: Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain's visual system. However, biological visual systems have two ubiquitous architectural features not shared with typical CNNs: local recurrence within cortical areas, and long-range feedback from downstream areas to upstream areas. Here we explored the role of recurrence in improving classification performance. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet task. In contrast, novel cells that incorporated two structural features, bypassing and gating, were able to boost task accuracy substantially. We extended these design principles in an automated search over thousands of model architectures, which identified novel local recurrent cells and long-range feedback connections useful for object recognition. Moreover, these task-optimized ConvRNNs matched the dynamics of neural activity in the primate visual system better than feedforward networks, suggesting a role for the brain's recurrent connections in performing difficult visual behaviors.) <|cite_end|>, specifically for the same
images with which humans and animals have the most difficulty.
Recurrent dynamics allow the brain to perform \emph{pattern completion}, constructing a coherent neural state from noisy, incomplete, and intrinsically ambiguous evidence.
This interpretive process is
well matched to \emph{attractor networks} (\emph{ANs}) <|cite_start|> (Reference: Neural networks and physical systems with emergent collective
computational abilities: Computational properties of use of biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.) <|cite_end|> <|cite_start|> (Reference: Neurons with graded response have collective computational properties like those of two-state neurons: A model for a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied. This deterministic system has collective properties in very close correspondence with the earlier stochastic model based on McCulloch - Pitts neurons. The content- addressable memory and other emergent collective properties of the original model also are present in the graded response model. The idea that such collective properties are used in biological systems is given added credence by the continued presence of such properties for more nearly biological "neurons." Collective analog electrical circuits of the kind described will certainly function. The collective states of the two models have a simple correspondence. The original model will continue to be useful for simulations, because its connection to graded response systems is established. Equations that include the effect of action potentials in the graded response system are also developed.) <|cite_end|> <|cite_start|> (Reference: Dense Associative Memory for Pattern Recognition: A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models that smoothly interpolates between two limiting cases can be constructed. One limit is referred to as the feature-matching mode of pattern recognition, and the other one as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistics, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions - the higher rectified polynomials which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set.) 
<|cite_end|> <|cite_start|> (Reference: Localist attractor networks: Attractor networks, which map an input space to a discrete output space, are useful for pattern completioncleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors, and they readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).) <|cite_end|>,
a class of dynamical neural networks that converge to fixed-point attractor states (Figure~\ref{fig:1}a). Given evidence in the form of a static input, an AN settles to an asymptotic state---an interpretation or completion---that is as consistent as possible with the evidence and with implicit knowledge embodied in the network connectivity.
We show examples from our model in Figure~\ref{fig:1}b.
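For orientation, the classical Hopfield formulation associates with each state $\mathbf{s}$ the energy
\begin{equation}
E(\mathbf{s}) \;=\; -\frac{1}{2}\,\mathbf{s}^{\top} W \mathbf{s} \;-\; \mathbf{b}^{\top}\mathbf{s},
\end{equation}
and updates units asynchronously via $s_i \leftarrow \mathrm{sign}\big(\sum_j w_{ij} s_j + b_i\big)$; with symmetric weights and no self-connections, each update can only lower $E$, so the state descends the energy landscape to a fixed point, as the contours in Figure~\ref{fig:1}a suggest. We note this standard formulation only as background; the architecture, activation function, and training loss we propose differ.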
\begin{figure}[b!]
\centering
\includegraphics[height=1.in]{figures/figure1_v2.pdf}
\caption{(a) Hypothetical activation flow dynamics of an attractor net over a 2D state space; the contours depict an energy landscape.
(b) top-to-bottom: original image, completion, and evidence. (c) Bipartite architecture with layer update order. (d) Convolutional architecture with average pooling.}
\label{fig:1}
\end{figure}
ANs have played a pivotal role in characterizing computation in the brain <|cite_start|> (Reference: Modeling brain function: the world of attractor neural networks, 1st Edition: One of a good overview all the output neurons. The fixed point attractors have resulted in order to the attractor furthermore. As well as memory classification and all the basic ideas. Introducing the form of strange attractors or licence agreement may be fixed point! The above with input produces and the techniques brought from one of cognitive processes. The study of cpgs is the, global dynamics as nearest neighbor classifiers. Attractor networks encode knowledge of the, network will be ergodic so. These synapses will be applicable exploring one interesting and neural networks other technology professionals.) <|cite_end|> <|cite_start|> (Reference: An interactive activation model of context effects in letter perception: I. An account of basic findings.: ) <|cite_end|>, not only
perception \citep[e.g.,][]{Sterzer2007}, but also language <|cite_start|> (Reference: The sentence wrap-up dogma: ) <|cite_end|> and awareness <|cite_start|> (Reference: Attractor networks: Artificial neural networks (ANNs), sometimes referred to as connectionist networks, are computational models based loosely on the neural architecture of the brain. Over the past twenty years, ANNs have proven to be a fruitful framework for modeling many aspects of cognition, including perception, attention, learning and memory, language, and executive control. A particular type of ANN, called an attractor network, is central to computational theories of consciousness, because attractor networks can be analyzed in terms of properties—such as temporal stability, and strength, quality, and discreteness of representation— that have been ascribed to conscious states. Some theories have gone so far as to posit that attractor nets are the computational substrate from which conscious states arise.) <|cite_end|>.
We revisit attractor nets in light of modern deep learning methods and propose a convolutional bipartite architecture
for pattern completion tasks with a novel training loss, activation function, and connectivity constraints.
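As a rough illustration of the style of computation involved---a minimal sketch only, in which \texttt{tanh} is merely a placeholder and the novel activation function, training loss, and connectivity constraints proposed in this paper are not reproduced---a convolutional bipartite net can be settled by alternating layer updates while clamping the observed evidence:
\begin{verbatim}
import torch
import torch.nn.functional as F

def settle(v_init, mask, W, b_h, b_v, n_steps=50):
    # v_init: evidence image (B, C, H, W); mask: 1 where a pixel is observed.
    # W: convolutional weights (K, C, k, k) shared by both directions, which
    # corresponds to symmetric (tied) connectivity between the two layers.
    pad = W.shape[-1] // 2
    v = v_init.clone()
    for _ in range(n_steps):
        h = torch.tanh(F.conv2d(v, W, bias=b_h, padding=pad))          # visible -> hidden
        v_new = torch.tanh(
            F.conv_transpose2d(h, W, bias=b_v, padding=pad))           # hidden -> visible
        v = mask * v_init + (1.0 - mask) * v_new                       # clamp known pixels
    return v
\end{verbatim}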
\vspace{-.07in} <|paper_end|> | [
"<|reference_start|> Recurrent computations for visual pattern completion: Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology and computational models to test the hypothesis that pattern completion is implemented by recurrent computations and present three pieces of evidence that are consistent with this hypothesis. First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological responses along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared to whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. These results provide a strong argument of plausibility for the role of recurrent computations in making visual inferences from partial information. <|reference_end|>",
"<|reference_start|> Recurrent computations for visual pattern completion: Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology and computational models to test the hypothesis that pattern completion is implemented by recurrent computations and present three pieces of evidence that are consistent with this hypothesis. First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological responses along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared to whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. These results provide a strong argument of plausibility for the role of recurrent computations in making visual inferences from partial information. <|reference_end|>",
"<|reference_start|> Neurons with graded response have collective computational properties like those of two-state neurons: A model for a large network of \"neurons\" with a graded response (or sigmoid input-output relation) is studied. This deterministic system has collective properties in very close correspondence with the earlier stochastic model based on McCulloch - Pitts neurons. The content- addressable memory and other emergent collective properties of the original model also are present in the graded response model. The idea that such collective properties are used in biological systems is given added credence by the continued presence of such properties for more nearly biological \"neurons.\" Collective analog electrical circuits of the kind described will certainly function. The collective states of the two models have a simple correspondence. The original model will continue to be useful for simulations, because its connection to graded response systems is established. Equations that include the effect of action potentials in the graded response system are also developed. <|reference_end|>",
"<|reference_start|> Dense Associative Memory for Pattern Recognition: A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models that smoothly interpolates between two limiting cases can be constructed. One limit is referred to as the feature-matching mode of pattern recognition, and the other one as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistics, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions - the higher rectified polynomials which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set. <|reference_end|>"
] | [
0,
1,
4,
5
] | {"<|cite_1|>": "arxiv-126209", "<|multi_cite_3_1|>": "arxiv-126209", "<|multi_cite_3_2|>": "arxiv-164258", "<|multi_cite_5_1|>": "ss-700561", "<|multi_cite_5_2|>": "ss-1538091", "<|multi_cite_5_3|>": "arxiv-99304", "<|multi_cite_5_4|>": "ss-1137128", "<|multi_cite_6_1|>": "ss-1383807", "<|multi_cite_6_2|>": "ss-1137759", "<|cite_7|>": "ss-1510290", "<|cite_8|>": "ss-1510291"} |
2401.14111 | <|paper_start|> Title: Image Synthesis with Graph Conditioning: CLIP-Guided Diffusion Models for Scene Graphs
Abstract: Image Synthesis with Graph Conditioning: CLIP-Guided Diffusion Models for Scene Graphs: Advancements in generative models have sparked significant interest in generating images while adhering to specific structural guidelines. Scene graph to image generation is one such task of generating images which are consistent with the given scene graph. However, the complexity of visual scenes poses a challenge in accurately aligning objects based on specified relations within the scene graph. Existing methods approach this task by first predicting a scene layout and generating images from these layouts using adversarial training. In this work, we introduce a novel approach to generate images from scene graphs which eliminates the need of predicting intermediate layouts. We leverage pre-trained text-to-image diffusion models and CLIP guidance to translate graph knowledge into images. Towards this, we first pre-train our graph encoder to align graph features with CLIP features of corresponding images using a GAN based training. Further, we fuse the graph features with CLIP embedding of object labels present in the given scene graph to create a graph consistent CLIP guided conditioning signal. In the conditioning input, object embeddings provide coarse structure of the image and graph features provide structural alignment based on relationships among objects. Finally, we fine tune a pre-trained diffusion model with the graph consistent conditioning signal with reconstruction and CLIP alignment loss. Elaborate experiments reveal that our method outperforms existing methods on standard benchmarks of COCO-stuff and Visual Genome dataset.
Introduction
A scene graph represents a visual scene as a graph where nodes correspond
to objects and edges represent relationships or interactions between these objects. Improved generative models now allow users to generate high quality images where they can control the style, structure or layout of the synthesised images. Such conditional image generation allow users to guide the generation using text <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|> <|cite_start|> (Reference: Zero-Shot Text-to-Image Generation: Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.) <|cite_end|>, segmentation mask <|cite_start|> (Reference: Semantic Image Synthesis with Spatially-Adaptive Normalization: We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to ``wash away'' semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. 
Finally, our model allows user control over both semantic and style. Code is available at https://github.com/NVlabs/SPADE .) <|cite_end|>, class labels <|cite_start|> (Reference: Diffusion Models Beat GANs on Image Synthesis: We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128$\times$128, 4.59 on ImageNet 256$\times$256, and 7.72 on ImageNet 512$\times$512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256$\times$256 and 3.85 on ImageNet 512$\times$512. We release our code at https://github.com/openai/guided-diffusion) <|cite_end|>, scene layout <|cite_start|> (Reference: High-Resolution Complex Scene Synthesis with Transformers: The use of coarse-grained layouts for controllable synthesis of complex scene images via deep generative models has recently gained popularity. However, results of current approaches still fall short of their promise of high-resolution synthesis. We hypothesize that this is mostly due to the highly engineered nature of these approaches which often rely on auxiliary losses and intermediate steps such as mask generators. In this note, we present an orthogonal approach to this task, where the generative model is based on pure likelihood training without additional objectives. To do so, we first optimize a powerful compression model with adversarial training which learns to reconstruct its inputs via a discrete latent bottleneck and thereby effectively strips the latent representation of high-frequency details such as texture. Subsequently, we train an autoregressive transformer model to learn the distribution of the discrete image representations conditioned on a tokenized version of the layouts. Our experiments show that the resulting system is able to synthesize high-quality images consistent with the given layouts. In particular, we improve the state-of-the-art FID score on COCO-Stuff and on Visual Genome by up to 19% and 53% and demonstrate the synthesis of images up to 512 x 512 px on COCO and Open Images.) <|cite_end|> <|cite_start|> (Reference: LayoutDiffusion: Controllable Diffusion Model for Layout-to-image Generation: Recently, diffusion models have achieved great success in image synthesis. However, when it comes to the layout-to-image generation where an image often has a complex scene of multiple objects, how to make strong control over both the global layout map and each detailed object remains a challenging task. In this paper, we propose a diffusion model named LayoutDiffusion that can obtain higher generation quality and greater controllability than the previous works. To overcome the difficult multimodal fusion of image and layout, we propose to construct a structural image patch with region information and transform the patched image into a special layout to fuse with the normal layout in a unified form. 
Moreover, Layout Fusion Module (LFM) and Object-aware Cross Attention (OaCA) are proposed to model the relationship among multiple objects and designed to be object-aware and position-sensitive, allowing for precisely controlling the spatial related information. Extensive experiments show that our LayoutDiffusion outperforms the previous SOTA methods on FID, CAS by relatively 46.35%, 26.70% on COCO-stuff and 44.29%, 41.82% on VG. Code is available at https://github.com/ZGCTroy/LayoutDiffusion.) <|cite_end|>, sketches, stroke paintings <|cite_start|> (Reference: The facile alkylation and iodination of imidazol(in)ium salts in the presence of cesium carbonate.: The alkylation or iodination of imidazol(in)ium salts takes place readily in the presence of Cs2CO3. The procedure is very easy to implement and provides facile and straightforward access to a wealth of C2-substituted azolium salts. Furthermore, a C2α alkylation is also feasible, which extends the chemistry of NHCs and weak bases to their NHO analogues.) <|cite_end|>, and other such conditioning signals. In particular, the use of text as a conditioning modality offers a versatile approach, allowing for diverse combinations of inputs, encompassing intricate and abstract concepts. However, leveraging text for conditioning is not without challenges. Natural language sentences tend to be lengthy and loosely structured, relying heavily on syntax for semantic interpretation. The inherent ambiguity in language, where different sentences may convey the same concept, poses a risk of instability during training. This becomes particularly apparent in scenarios where precise description constraints are crucial. In this context, relying solely on text representations for a specific scene may prove to be insufficient.
Motivated by promising results of conditional generation and limitation of text as a conditional signal, in this work we propose a novel method to generate images from scene graphs. First introduced by <|cite_start|> (Reference: Image Generation from Scene Graphs: To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.) <|cite_end|>, scene graph to image generation is a task of generating images using set of semantic object labels and underlying semantic relationships among these objects. Most of the existing works follow a two stage architecture where they first generate a scene layout and use GAN to synthesize realistic images from these scene layouts <|cite_start|> (Reference: Image Generation from Scene Graphs: To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.) <|cite_end|> <|cite_start|> (Reference: Specifying Object Attributes and Relations in Interactive Scene Generation: We introduce a method for the generation of images from an input scene graph. The method separates between a layout embedding and an appearance embedding. The dual embedding leads to generated images that better match the scene graph, have higher visual quality, and support more complex scene graphs. In addition, the embedding scheme supports multiple and diverse output images per scene graph, which can be further controlled by the user. 
We demonstrate two modes of per-object control: (i) importing elements from other images, and (ii) navigation in the object space, by selecting an appearance archetype. Our code is publicly available at https://www.github.com/ashual/scene_generation) <|cite_end|> <|cite_start|> (Reference: Scene Graph to Image Generation with Contextualized Object Layout Refinement: Generating images from scene graphs is a challenging task that attracted substantial interest recently. Prior works have approached this task by generating an intermediate layout description of the target image. However, the representation of each object in the layout was generated independently, which resulted in high overlap, low coverage, and an overall blurry layout. We propose a novel method that alleviates these issues by generating the entire layout description gradually to improve inter-object dependency. We empirically show on the COCO-STUFF dataset that our approach improves the quality of both the intermediate layout and the final image. Our approach improves the layout coverage by almost 20 points and drops object overlap to negligible amounts.) <|cite_end|>. Object nodes of the scene graph are mapped to bounding boxes in the layout, and the relationships are signified by the spatial structure of the layout. While these scene layouts can be effective in representing spatial relationships in the scenes, they fail to capture complex non-spatial relationships among objects. Relationships such as ``left of'', ``above'', ``surrounding'' are spatial relationships, whereas ``looking at'', ``holding'', ``drinking'' are examples of non-spatial relationships. The difficulty of translating scene graphs into accurate layouts, combined with the limited representational capability of these layouts, results in images that are inconsistent with the input scene graph.
To overcome the limitations of existing methodologies, we propose to learn an intermediate graph representation, eliminating the need to predict scene layouts. We use this graph representation as a conditioning signal to fine-tune a pre-trained text-to-image diffusion model to generate images conditioned on scene graphs. To harness the strong semantic understanding offered by diffusion models, we propose to predict a conditioning graph representation that aligns well with the inherent semantic knowledge of diffusion models. We use CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> guidance to generate such graph embeddings.
We first employ a GAN-based CLIP alignment module to train our graph encoder. This module encourages the graph encoder to generate graph embeddings that closely resemble the visual features of the corresponding images in the CLIP latent space. To construct an effective conditioning signal for the diffusion model, we fuse the output of the graph encoder with the semantic label embeddings of the objects present in the scene graph. This conditioning signal is designed to leverage the strong semantic prior of text-to-image diffusion models. We use it to fine-tune the diffusion model to generate images conditioned on the scene graph.
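As a rough sketch of how such a conditioning signal can be assembled (illustrative only, with hypothetical module names such as \texttt{graph\_encoder}, \texttt{clip\_text\_encoder}, and \texttt{fuse}; this is not our exact implementation), the cross-attention context for the diffusion model could be built as follows:
\begin{verbatim}
import torch
import torch.nn as nn

class GraphConditioner(nn.Module):
    # Builds the cross-attention context for a pre-trained text-to-image
    # diffusion U-Net from a scene graph (sketch with placeholder names).
    def __init__(self, graph_encoder, clip_text_encoder, dim=768, heads=8):
        super().__init__()
        self.graph_encoder = graph_encoder          # aligned to CLIP image features
        self.clip_text_encoder = clip_text_encoder  # frozen CLIP text tower
        self.fuse = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, scene_graph, label_tokens):
        g = self.graph_encoder(scene_graph)            # (B, N_nodes, dim) relational features
        t = self.clip_text_encoder(label_tokens)       # (B, N_obj, dim) coarse object semantics
        fused, _ = self.fuse(query=t, key=g, value=g)  # relations modulate each object token
        return torch.cat([fused, g], dim=1)            # conditioning sequence for the U-Net
\end{verbatim}
The resulting sequence plays the role that encoded text prompts play in a standard text-to-image diffusion model, so fine-tuning can reuse the same cross-attention pathway.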
We demonstrate the effectiveness of our method using established benchmarks like Visual Genome <|cite_start|> (Reference: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that "the person is riding a horse-drawn carriage". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.) <|cite_end|> and COCO-stuff <|cite_start|> (Reference: COCO-Stuff: Thing and Stuff Classes in Context: Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.) <|cite_end|>. Comparisons with current state-of-the-art methods reveal superior quantitative and qualitative results. We can summarise our contributions as follows:
(1) We propose to learn an effective graph representation, eliminating the need to predict intermediate layouts for image synthesis. We use this graph representation to construct a suitable conditioning signal for a text-to-image diffusion model. This conditioning signal is guided to leverage the semantic knowledge of the text-to-image diffusion model.
(2) We propose a training strategy that effectively employs the constructed conditioning signal to fine-tune the diffusion model.
Figure 1 shows example images generated by our model alongside the original reference images. The generated images follow the input scene graph. For example, objects and relationships specified in the input scene graph in Figure 1, such as ``Birds flying above Boat'' and ``wall surrounding stop sign'', are present in the generated images.
Related Work
\textbf{Diffusion as generative model.} The introduction of diffusion models by <|cite_start|> (Reference: Deep Unsupervised Learning using Nonequilibrium Thermodynamics: A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm.) <|cite_end|> marked a notable approach to image generation. These models operate by learning the reverse process of the forward diffusion, where the input is transformed into Gaussian noise. The denoising process is implemented using U-net <|cite_start|> (Reference: Attention U-Net: Learning Where to Look for the Pancreas: We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.) <|cite_end|> or transformer <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> based models. In order to reduce the computation and training complexity, <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|> <|cite_start|> (Reference: Score-based Generative Modeling in Latent Space: Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage. However, they are usually applied directly in data space and often require thousands of network evaluations for sampling. Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space, relying on the variational autoencoder framework. Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space, resulting in fewer network evaluations and faster sampling. To enable training LSGMs end-to-end in a scalable and stable manner, we (i) introduce a new score-matching objective suitable to the LSGM setting, (ii) propose a novel parameterization of the score function that allows SGM to focus on the mismatch of the target distribution with respect to a simple Normal one, and (iii) analytically derive multiple techniques for variance reduction of the training objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10, outperforming all existing generative results on this dataset. On CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while outperforming them in sampling time by two orders of magnitude. In modeling binary images, LSGM achieves state-of-the-art likelihood on the binarized OMNIGLOT dataset. Our project page and code can be found at https://nvlabs.github.io/LSGM .) 
<|cite_end|> introduced diffusion models which operate in latent space. <|cite_start|> (Reference: Diffusion Models Beat GANs on Image Synthesis: We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128$\times$128, 4.59 on ImageNet 256$\times$256, and 7.72 on ImageNet 512$\times$512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256$\times$256 and 3.85 on ImageNet 512$\times$512. We release our code at https://github.com/openai/guided-diffusion) <|cite_end|> proposed conditional generation by diffusion using classifier guidance. Recent advancements in these latent diffusion models have enabled users to produce diverse and realistic high quality images conditioned on various factors such as text <|cite_start|> (Reference: Zero-Shot Text-to-Image Generation: Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.) <|cite_end|>, artistic style, sketch, pose, and class labels <|cite_start|> (Reference: Structure and Content-Guided Video Synthesis with Diffusion Models: Text-guided generative diffusion models unlock powerful image creation and editing tools. While these have been extended to video generation, current approaches that edit the content of existing footage while retaining structure require expensive re-training for every input or rely on error-prone propagation of image edits across frames. In this work, we present a structure and content-guided video diffusion model that edits videos based on visual or textual descriptions of the desired output. Conflicts between user-provided content edits and structure representations occur due to insufficient disentanglement between the two aspects. As a solution, we show that training on monocular depth estimates with varying levels of detail provides control over structure and content fidelity. Our model is trained jointly on images and videos which also exposes explicit control of temporal consistency through a novel guidance method. Our experiments demonstrate a wide variety of successes; fine-grained control over output characteristics, customization based on a few reference images, and a strong user preference towards results by our model.) <|cite_end|>. 
Diffusion can also be used to generate text <|cite_start|> (Reference: Bipartite Graph Network with Adaptive Message Passing for Unbiased Scene Graph Generation: Scene graph generation is an important visual understanding task with a broad range of vision applications. Despite recent tremendous progress, it remains challenging due to the intrinsic long-tailed class distribution and large intra-class variation. To address these issues, we introduce a novel confidence-aware bipartite graph neural network with adaptive message propagation mechanism for unbiased scene graph generation. In addition, we propose an efficient bi-level data resampling strategy to alleviate the imbalanced data distribution problem in training our graph network. Our approach achieves superior or competitive performance over previous methods on several challenging datasets, including Visual Genome, Open Images V4/V6, demonstrating its effectiveness and generality.) <|cite_end|> <|cite_start|> (Reference: Text Generation with Diffusion Language Models: A Pre-training Approach with Continuous Paragraph Denoise: In this paper, we introduce a novel dIffusion language modEl pre-training framework for text generation, which we call GENIE. GENIE is a large-scale pretrained diffusion language model that consists of an encoder and a diffusion-based decoder, which can generate text by gradually transforming a random noise sequence into a coherent text sequence. To pre-train GENIE on a large-scale language corpus, we design a new continuous paragraph denoise objective, which encourages the diffusion-decoder to reconstruct a clean text paragraph from a corrupted version, while preserving the semantic and syntactic coherence. We evaluate GENIE on four downstream text generation benchmarks, namely XSum, CNN/DailyMail, Gigaword, and CommonGen. Our experimental results show that GENIE achieves comparable performance with the state-of-the-art autoregressive models on these benchmarks, and generates more diverse text samples. The code and models of GENIE are available at https://github.com/microsoft/ProphetNet/tree/master/GENIE.) <|cite_end|>, videos <|cite_start|> (Reference: Structure and Content-Guided Video Synthesis with Diffusion Models: Text-guided generative diffusion models unlock powerful image creation and editing tools. While these have been extended to video generation, current approaches that edit the content of existing footage while retaining structure require expensive re-training for every input or rely on error-prone propagation of image edits across frames. In this work, we present a structure and content-guided video diffusion model that edits videos based on visual or textual descriptions of the desired output. Conflicts between user-provided content edits and structure representations occur due to insufficient disentanglement between the two aspects. As a solution, we show that training on monocular depth estimates with varying levels of detail provides control over structure and content fidelity. Our model is trained jointly on images and videos which also exposes explicit control of temporal consistency through a novel guidance method. Our experiments demonstrate a wide variety of successes; fine-grained control over output characteristics, customization based on a few reference images, and a strong user preference towards results by our model.) 
<|cite_end|> <|cite_start|> (Reference: Video Probabilistic Diffusion Models in Projected Latent Space: Despite the remarkable progress in deep generative models, synthesizing high-resolution and temporally coherent videos still remains a challenge due to their high-dimensionality and complex temporal dynamics along with large spatial variations. Recent works on diffusion models have shown their potential to solve this challenge, yet they suffer from severe computation- and memory-inefficiency that limit the scalability. To handle this issue, we propose a novel generative model for videos, coined projected latent video diffusion models (PVDM), a probabilistic diffusion model which learns a video distribution in a low-dimensional latent space and thus can be efficiently trained with high-resolution videos under limited resources. Specifically, PVDM is composed of two components: (a) an autoencoder that projects a given video as 2D-shaped latent vectors that factorize the complex cubic structure of video pixels and (b) a diffusion model architecture specialized for our new factorized latent space and the training/sampling procedure to synthesize videos of arbitrary length with a single model. Experiments on popular video generation datasets demonstrate the superiority of PVDM compared with previous video synthesis methods; e.g., PVDM obtains the FVD score of 639.7 on the UCF-101 long video (128 frames) generation benchmark, which improves 1773.4 of the prior state-of-the-art.) <|cite_end|> and graphs <|cite_start|> (Reference: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations: Generating graph-structured data requires learning the underlying distribution of graphs. Yet, this is a challenging problem, and the previous graph generative methods either fail to capture the permutation-invariance property of graphs or cannot sufficiently model the complex dependency between nodes and edges, which is crucial for generating real-world graphs such as molecules. To overcome such limitations, we propose a novel score-based generative model for graphs with a continuous-time framework. Specifically, we propose a new graph diffusion process that models the joint distribution of the nodes and edges through a system of stochastic differential equations (SDEs). Then, we derive novel score matching objectives tailored for the proposed diffusion process to estimate the gradient of the joint log-density with respect to each component, and introduce a new solver for the system of SDEs to efficiently sample from the reverse diffusion process. We validate our graph generation method on diverse datasets, on which it either achieves significantly superior or competitive performance to the baselines. Further analysis shows that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule, demonstrating the effectiveness of the system of SDEs in modeling the node-edge relationships. Our code is available at https://github.com/harryjo97/GDSS.) <|cite_end|>. The conditioning is applied via a cross-attention mechanism between the outputs of individual layers of the denoising U-net and the given conditioning signals. Similar techniques are employed in text-to-image latent diffusion models. <|cite_start|> (Reference: Hierarchical Text-Conditional Image Generation with CLIP Latents: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. 
To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.) <|cite_end|> uses the CLIP latent of the input text to condition high-quality image generation. <|cite_start|> (Reference: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding: We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment. See https://imagen.research.google/ for an overview of the results.) <|cite_end|> uses text latents from large language models such as T5 to condition text-to-image diffusion models. While there has been notable progress in diffusion-based conditional image synthesis, there remains a clear gap in the exploration of image generation from graph-structured data.
In this work we explore the capabilities of text-to-image diffusion models in the task of image generation conditioned on scene graphs.
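For concreteness, the forward noising process and the conditional denoising objective that underlie the models discussed above are commonly written as follows (this is the standard formulation; the notation is illustrative and not taken from any single cited work):
\[ q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\big), \qquad \bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s), \]
\[ \mathcal{L} = \mathbb{E}_{x_0,\,c,\,t,\,\epsilon \sim \mathcal{N}(0,\mathbf{I})}\Big[\, \big\lVert \epsilon - \epsilon_\theta(x_t, t, c) \big\rVert_2^2 \,\Big], \]
where $\beta_s$ is the noise schedule, $c$ is the conditioning signal (e.g., text embeddings, or a scene-graph representation in our setting) injected through cross-attention, and $x_0$ may be a latent code produced by a pretrained autoencoder when the model operates in latent space.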
\textbf{Image generation from scene graphs.} A scene graph represents an image as a set of nodes and edges. Nodes represent the objects present in the image, and edges capture the relationships between them. Conventional scene-graph-to-image methods tackle this task with a two-stage architecture. First, a scene layout is predicted from the graph. A scene layout represents an image through the bounding boxes of the objects it contains. The layout is then translated into an image using convolutional neural network (CNN) based image synthesis models such as SPADE <|cite_start|> (Reference: Semantic Image Synthesis with Spatially-Adaptive Normalization: We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to ``wash away'' semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows user control over both semantic and style. Code is available at https://github.com/NVlabs/SPADE .) <|cite_end|>, OC-GAN <|cite_start|> (Reference: Object-Centric Image Generation from Layouts: Despite recent impressive results on single-object and single-domain image generation, the generation of complex scenes with multiple objects remains challenging. In this paper, we start with the idea that a model must be able to understand individual objects and relationships between objects in order to generate complex scenes well. Our layout-to-image-generation method, which we call Object-Centric Generative Adversarial Network (or OC-GAN), relies on a novel Scene-Graph Similarity Module (SGSM). The SGSM learns representations of the spatial relationships between objects in the scene, which lead to our model's improved layout-fidelity. We also propose changes to the conditioning mechanism of the generator that enhance its object instance-awareness. Apart from improving image quality, our contributions mitigate two failure modes in previous approaches: (1) spurious objects being generated without corresponding bounding boxes in the layout, and (2) overlapping bounding boxes in the layout leading to merged objects in images. Extensive quantitative evaluation and ablation studies demonstrate the impact of our contributions, with our model outperforming previous state-of-the-art approaches on both the COCO-Stuff and Visual Genome datasets. Finally, we address an important limitation of evaluation metrics used in previous works by introducing SceneFID -- an object-centric adaptation of the popular Fr{\'e}chet Inception Distance metric, that is better suited for multi-object images.) <|cite_end|>. This task was first introduced by <|cite_start|> (Reference: Image Generation from Scene Graphs: To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. 
These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.) <|cite_end|>. They employ a multi-layer graph convolutional network <|cite_start|> (Reference: Semi-Supervised Classification with Graph Convolutional Networks: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.) <|cite_end|> to obtain a graph representation. This representation is used to predict object bounding boxes, which are then used to generate images with a cascaded refinement network. Generation is guided by a GAN-based setup in which a discriminator encourages realistic images. Following <|cite_start|> (Reference: Image Generation from Scene Graphs: To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.) <|cite_end|>, subsequent works adopt this two-stage approach combined with GAN-based generation. <|cite_start|> (Reference: Specifying Object Attributes and Relations in Interactive Scene Generation: We introduce a method for the generation of images from an input scene graph. The method separates between a layout embedding and an appearance embedding. 
The dual embedding leads to generated images that better match the scene graph, have higher visual quality, and support more complex scene graphs. In addition, the embedding scheme supports multiple and diverse output images per scene graph, which can be further controlled by the user. We demonstrate two modes of per-object control: (i) importing elements from other images, and (ii) navigation in the object space, by selecting an appearance archetype. Our code is publicly available at https://www.github.com/ashual/scene_generation) <|cite_end|> offers a way to control the style of generated objects through a module that captures per-object style information. <|cite_start|> (Reference: Learning Canonical Representations for Scene Graph to Image Generation: Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images. Previous approaches showed that scenes with few entities can be controlled using scene graphs, but this approach struggles as the complexity of the graph (the number of objects and edges) increases. In this work, we show that one limitation of current methods is their inability to capture semantic equivalence in graphs. We present a novel model that addresses these issues by learning canonical graph representations from the data, resulting in improved image generation for complex visual scenes. Our model demonstrates improved empirical performance on large scene graphs, robustness to noise in the input scene graph, and generalization on semantically equivalent graphs. Finally, we show improved performance of the model on three different benchmarks: Visual Genome, COCO, and CLEVR.) <|cite_end|> canonicalize the scene graph representation before translating it into a scene layout. This enhances the graph representation by incorporating supplementary information about semantic equivalence. <|cite_start|> (Reference: Scene Graph to Image Generation with Contextualized Object Layout Refinement: Generating images from scene graphs is a challenging task that attracted substantial interest recently. Prior works have approached this task by generating an intermediate layout description of the target image. However, the representation of each object in the layout was generated independently, which resulted in high overlap, low coverage, and an overall blurry layout. We propose a novel method that alleviates these issues by generating the entire layout description gradually to improve inter-object dependency. We empirically show on the COCO-STUFF dataset that our approach improves the quality of both the intermediate layout and the final image. Our approach improves the layout coverage by almost 20 points and drops object overlap to negligible amounts.) <|cite_end|> introduces an overlap loss to reduce overlap between objects. <|cite_start|> (Reference: Transformer-based Image Generation from Scene Graphs: Graph-structured scene descriptions can be efficiently used in generative models to control the composition of the generated image. Previous approaches are based on the combination of graph convolutional networks and adversarial methods for layout prediction and image generation, respectively. 
In this work, we show how employing multi-head attention to encode the graph information, as well as using a transformer-based model in the latent space for image generation can improve the quality of the sampled data, without the need to employ adversarial models with the subsequent advantage in terms of training stability. The proposed approach, specifically, is entirely based on transformer architectures both for encoding scene graphs into intermediate object layouts and for decoding these layouts into images, passing through a lower dimensional space learned by a vector-quantized variational autoencoder. Our approach shows an improved image quality with respect to state-of-the-art methods as well as a higher degree of diversity among multiple generations from the same scene graph. We evaluate our approach on three public datasets: Visual Genome, COCO, and CLEVR. We achieve an Inception Score of 13.7 and 12.8, and an FID of 52.3 and 60.3, on COCO and Visual Genome, respectively. We perform ablation studies on our contributions to assess the impact of each component. Code is available at https://github.com/perceivelab/trf-sg2im) <|cite_end|> use transformers for image generation. They learn a layout representation using a graph transformer; an image transformer coupled with a VQ-VAE <|cite_start|> (Reference: Generating Diverse High-Fidelity Images with VQ-VAE-2: We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where the encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.) <|cite_end|> is then used to sample images from these layouts. <|cite_start|> (Reference: SceneGenie: Scene Graph Guided Diffusion Models for Image Synthesis: Text-conditioned image generation has made significant progress in recent years with generative adversarial networks and more recently, diffusion models. While diffusion models conditioned on text prompts have produced impressive and high-quality images, accurately representing complex text prompts such as the number of instances of a specific object remains challenging. To address this limitation, we propose a novel guidance approach for the sampling process in the diffusion model that leverages bounding box and segmentation map information at inference time without additional training data. Through a novel loss in the sampling process, our approach guides the model with semantic features from CLIP embeddings and enforces geometric constraints, leading to high-resolution images that accurately represent the scene. To obtain bounding box and segmentation map information, we structure the text prompt as a scene graph and enrich the nodes with CLIP embeddings. 
Our proposed model achieves state-of-the-art performance on two public benchmarks for image generation from scene graphs, surpassing both scene graph to image and text-based diffusion models in various metrics. Our results demonstrate the effectiveness of incorporating bounding box and segmentation map guidance in the diffusion model sampling process for more accurate text-to-image generation.) <|cite_end|> uses scene layouts and segmentation masks at diffusion sampling time to generate graph-aligned images. <|cite_start|> (Reference: Learning object consistency and interaction in image generation from scene graphs: This paper is concerned with synthesizing images conditioned on a scene graph (SG), a set of object nodes and their edges of interactive relations. We divide existing works into image-oriented and code-oriented methods. In our analysis, the image-oriented methods do not consider object interaction in spatial hidden feature. On the other hand, in empirical study, the code-oriented methods lose object consistency as their generated images miss certain objects in the input scene graph. To alleviate these two issues, we propose Learning Object Consistency and Interaction (LOCI). To preserve object consistency, we design a consistency module with a weighted augmentation strategy for objects easy to be ignored and a matching loss between scene graphs and image codes. To learn object interaction, we design an interaction module consisting of three kinds of message propagation between the input scene graph and the learned image code. Experiments on COCO-stuff and Visual Genome datasets show our proposed method alleviates the ignorance of objects and outperforms the state-of-the-art on visual fidelity of generated images and objects.) <|cite_end|> introduce a consistency module that prevents smaller objects from being neglected in the generated images.
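To make the conventional two-stage pipeline discussed above concrete, the sketch below outlines its first stage in PyTorch: a graph convolutional network embeds object and predicate nodes and regresses one bounding box per object. This is a schematic illustration only; the module names (GraphConvLayer, LayoutPredictor), dimensions, and message-passing details are our own simplifications and do not reproduce any specific cited implementation.
\begin{verbatim}
import torch
import torch.nn as nn


class GraphConvLayer(nn.Module):
    """One round of message passing over (subject, predicate, object) triples."""

    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, obj_vecs, pred_vecs, edges):
        s, o = edges[:, 0], edges[:, 1]            # subject / object node indices
        triples = torch.cat([obj_vecs[s], pred_vecs, obj_vecs[o]], dim=-1)
        messages = self.edge_mlp(triples)          # one message per edge
        agg = torch.zeros_like(obj_vecs)
        agg.index_add_(0, s, messages)             # aggregate messages at both
        agg.index_add_(0, o, messages)             # endpoints of every edge
        new_obj_vecs = self.node_mlp(torch.cat([obj_vecs, agg], dim=-1))
        return new_obj_vecs, messages              # messages act as updated predicates


class LayoutPredictor(nn.Module):
    """Stage 1: embed the scene graph and regress one bounding box per object node."""

    def __init__(self, num_obj_classes, num_pred_classes, dim=128, num_layers=3):
        super().__init__()
        self.obj_emb = nn.Embedding(num_obj_classes, dim)
        self.pred_emb = nn.Embedding(num_pred_classes, dim)
        self.gcn = nn.ModuleList([GraphConvLayer(dim) for _ in range(num_layers)])
        self.box_head = nn.Linear(dim, 4)          # (x, y, w, h), normalised to [0, 1]

    def forward(self, obj_ids, pred_ids, edges):
        obj_vecs, pred_vecs = self.obj_emb(obj_ids), self.pred_emb(pred_ids)
        for layer in self.gcn:
            obj_vecs, pred_vecs = layer(obj_vecs, pred_vecs, edges)
        boxes = torch.sigmoid(self.box_head(obj_vecs))
        return boxes, obj_vecs


# Stage 2 (not shown): the predicted boxes (and optional masks) are rasterised into
# a layout tensor that a CNN generator, e.g. a cascaded refinement network or a
# SPADE-style decoder, translates into an image under adversarial training.
\end{verbatim}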
Most of the existing works utilise a layout-based representation of graphs together with GAN-based image generation. In this work we propose a graph representation that aligns well with the semantic prior of diffusion models and use it as a conditioning signal for diffusion-based image generation. Notably, we eliminate the need for layout generation, converting the two-stage pipeline into single-stage generation. <|paper_end|> | [
"<|reference_start|> Deep Unsupervised Learning using Nonequilibrium Thermodynamics: A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm. <|reference_end|>",
"<|reference_start|> Attention U-Net: Learning Where to Look for the Pancreas: We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available. <|reference_end|>",
"<|reference_start|> High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion . <|reference_end|>",
"<|reference_start|> Generating Diverse High-Fidelity Images with VQ-VAE-2: We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where the encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity. <|reference_end|>"
] | [
14,
15,
17,
38
] | {"<|multi_cite_1_1|>": "arxiv-388766", "<|multi_cite_1_2|>": "arxiv-323257", "<|cite_2|>": "arxiv-195738", "<|cite_3|>": "arxiv-340336", "<|multi_cite_4_1|>": "arxiv-340819", "<|multi_cite_4_2|>": "arxiv-493245", "<|cite_5|>": "ss-717429", "<|cite_6|>": "arxiv-153836", "<|multi_cite_7_1|>": "arxiv-153836", "<|multi_cite_7_2|>": "arxiv-223237", "<|multi_cite_7_3|>": "arxiv-291560", "<|cite_8|>": "arxiv-323919", "<|cite_9|>": "arxiv-92776", "<|cite_10|>": "arxiv-112341", "<|cite_11|>": "arxiv-74487", "<|cite_12|>": "arxiv-154585", "<|cite_13|>": "arxiv-126595", "<|multi_cite_14_1|>": "arxiv-388766", "<|multi_cite_14_2|>": "arxiv-347450", "<|cite_15|>": "arxiv-340336", "<|cite_16|>": "arxiv-323257", "<|cite_17|>": "arxiv-479567", "<|multi_cite_18_1|>": "arxiv-331521", "<|multi_cite_18_2|>": "arxiv-471317", "<|multi_cite_19_1|>": "arxiv-479567", "<|multi_cite_19_2|>": "arxiv-481683", "<|cite_20|>": "arxiv-397116", "<|cite_21|>": "arxiv-412781", "<|cite_22|>": "arxiv-421636", "<|cite_23|>": "arxiv-195738", "<|cite_24|>": "arxiv-254012", "<|cite_25|>": "arxiv-153836", "<|cite_26|>": "arxiv-105493", "<|cite_27|>": "arxiv-153836", "<|cite_28|>": "arxiv-223237", "<|cite_29|>": "arxiv-239561", "<|cite_30|>": "arxiv-291560", "<|cite_31|>": "arxiv-487172", "<|cite_32|>": "arxiv-207475", "<|cite_33|>": "arxiv-500616", "<|cite_34|>": "ss-759977"} |
2310.20081 | <|paper_start|> Title: Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models
Abstract: Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models: Personalization, the ability to tailor a system to individual users, is an essential factor in user experience with natural language processing (NLP) systems. With the emergence of Large Language Models (LLMs), a key question is how to leverage these models to better personalize user experiences. To personalize a language model's output, a straightforward approach is to incorporate past user data into the language model prompt, but this approach can result in lengthy inputs exceeding limitations on input length and incurring latency and cost issues. Existing approaches tackle such challenges by selectively extracting relevant user data (i.e. selective retrieval) to construct a prompt for downstream tasks. However, retrieval-based methods are limited by potential information loss, lack of more profound user understanding, and cold-start challenges. To overcome these limitations, we propose a novel summary-augmented approach by extending retrieval-augmented personalization with task-aware user summaries generated by LLMs. The summaries can be generated and stored offline, enabling real-world systems with runtime constraints like voice assistants to leverage the power of LLMs. Experiments show that our method, using 75% less retrieved user data, is on par with or outperforms retrieval augmentation on most tasks in the LaMP personalization benchmark. We demonstrate that combining offline summarization via LLMs with runtime retrieval enables better personalization performance on a range of tasks under practical constraints.
Introduction
As virtual assistants and other natural language processing (NLP) systems become increasingly integrated into our daily lives, personalization has become an essential factor in user experience. Tailoring virtual assistant interactions and NLP model outputs to individual users' preferences, styles, needs, and contexts is key to improving the performance of these systems and making them more natural and conversational.
Traditional personalization methods, such as collaborative filtering <|cite_start|> (Reference: Collaborative {Filtering: Social voting is a promising new feature in online social networks. It has distinctive challenges and opportunities for suggestion. In this paper, we increase a set of matrix factorization (MF) a nearest-neighbor (NN)-based recommended systems (RSs) that explore user social network and group association information for social voting recommendation. During experiments with actual social voting traces, we express that social network and group association information can drastically progress the popularity-based voting advice, and social network in order dominates group association sequence in NN-based approaches. We as well observe that social and group information is much more precious to cold users than to heavy users. In our experiments, simple meta path based nearest neighbor models outperform computation concentrated on matrix factorization models in hot voting recommendation, while user's preferences non-hot votings can be better mined b factorization models. We further put forward a hybrid RS, bagging distinct single approaches to get the best top-k hit rate.) <|cite_end|>, deep neural networks <|cite_start|> (Reference: Deep {Neural: Sequence-to-Sequence is a powerful paradigm of formulating machine learning problems. Broadly, as long as we can formulate a problem as a mapping from a sequence of inputs to a sequence of outputs, we can use sequence-to-sequence models to solve it. For example, in machine translation, we can formulate the problem as a mapping from a sequence of words in one language to a sequence of words in another language. While some RNN architectures we previously covered possess the capability to maintain a memory of the previous inputs/outputs, to compute output, the memory states need to encompass information of many previous states, which can be difficult especially when performing tasks with long-term dependencies.) <|cite_end|>, deep interest network <|cite_start|> (Reference: Deep Interest Network for Click-Through Rate Prediction: Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. 
Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.) <|cite_end|> and their variations <|cite_start|> (Reference: Search-based User Interest Modeling with Lifelong Sequential Behavior Data for Click-Through Rate Prediction: Rich user behavior data has been proven to be of great value for click-through rate prediction tasks, especially in industrial applications such as recommender systems and online advertising. Both industry and academy have paid much attention to this topic and propose different approaches to modeling with long sequential user behavior data. Among them, memory network based model MIMN proposed by Alibaba, achieves SOTA with the co-design of both learning algorithm and serving system. MIMN is the first industrial solution that can model sequential user behavior data with length scaling up to 1000. However, MIMN fails to precisely capture user interests given a specific candidate item when the length of user behavior sequence increases further, say, by 10 times or more. This challenge exists widely in previously proposed approaches. In this paper, we tackle this problem by designing a new modeling paradigm, which we name as Search-based Interest Model (SIM). SIM extracts user interests with two cascaded search units: (i) General Search Unit (GSU) acts as a general search from the raw and arbitrary long sequential behavior data, with query information from candidate item, and gets a Sub user Behavior Sequence (SBS) which is relevant to candidate item; (ii) Exact Search Unit (ESU) models the precise relationship between candidate item and SBS. This cascaded search paradigm enables SIM with a better ability to model lifelong sequential behavior data in both scalability and accuracy. Apart from the learning algorithm, we also introduce our hands-on experience on how to implement SIM in large scale industrial systems. Since 2019, SIM has been deployed in the display advertising system in Alibaba, bringing 7.1% CTR and 4.4% RPM lift, which is significant to the business. Serving the main traffic in our real system now, SIM models sequential user behavior data with maximum length reaching up to 54000, pushing SOTA to 54x.) <|cite_end|> <|cite_start|> (Reference: End-to-{{End: Driven by the aggressive scaling of modern IC technologies, network-on-chip (NoC) becomes increasingly susceptible to various noise sources. In this paper, adaptive error correction code injection scheme is represented to achieve both high reliability and low latency in various temperature condition. Simulation results show that the latency of proposed scheme is 50∼57% of the 2G4L code without(with smaller) reliability degradation.) <|cite_end|>, have enhanced user experiences in recommendation systems. These methods leverage historical user behavior data to make personalized recommendations, offering a practical and effective solution for various domains. Despite their success, these methods still struggle with the cold-start problem, where new users lack sufficient behavior history, leading to sub-optimal recommendations. The cold-start problem highlights the need for alternative approaches.
Large Language Models (LLMs) represent a promising avenue for advancing personalization techniques. LLMs have demonstrated remarkable capabilities in understanding context and generating coherent text <|cite_start|> (Reference: Language Models: A language modeling overview, highlighting basic concepts, intuitive explanations, technical achievements, and fundamental challenges.) <|cite_end|>. By incorporating knowledge about users, LLMs can potentially capture subtle user preferences and thereby enhance personalization, but how to convey the full spectrum of a user's preferences to the model remains a challenge. To personalize a language model's output, a straightforward approach is to incorporate user data into the language model prompt. However, incorporating a comprehensive view of user preferences, based on long-term historical user data, into the prompt may exceed the input length limitations of language models and result in considerable increases in inference cost. Further, language model performance tends to degrade with lengthy contexts <|cite_start|> (Reference: How Many Demonstrations Do You Need for In-context Learning?: Large language models (LLMs) are capable to perform complex reasoning by in-context learning (ICL) when provided with a few input-output demonstrations (demos) and more powerful when intermediate reasoning steps ("chain of thoughts (CoT)") of the demos are given. Is it necessary to use multi-demo in ICL? In this paper, we study ICL using fewer demos for each test query on the tasks in~\cite{wei2022chain}. Surprisingly, we do not observe significant degradation when using only one randomly chosen demo. To study this phenomenon, for each test query, we categorize demos into "correct demos" leading to the correct answer, and "wrong demos" resulting in wrong answers. Our analysis reveals an inherent bias in those widely studied datasets: most demos are correct for a majority of test queries, which explains the good performance of using one random demo. Moreover, ICL (with and w/o CoT) using only one correct demo significantly outperforms all-demo ICL adopted by most previous works, indicating the weakness of LLMs in finding correct demo(s) for input queries, which is difficult to evaluate on the biased datasets. Furthermore, we observe a counterintuitive behavior of ICL using multi-demo, i.e., its accuracy degrades(improves) when given more correct(wrong) demos. This implies that ICL can be easily misguided by interference among demos and their spurious correlations. Our analyses highlight several fundamental challenges that need to be addressed in LLMs training, ICL, and benchmark design.) <|cite_end|>.
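As a minimal illustration of this length problem (a hypothetical sketch: the prompt template, the 4096-token budget, and the whitespace token proxy are assumptions for illustration, not part of any cited system):
\begin{verbatim}
MAX_PROMPT_TOKENS = 4096  # assumed context budget of the downstream model


def build_naive_prompt(task_input, user_history):
    """Prepend the entire user history to the task input."""
    history_block = "\n".join("- " + item for item in user_history)
    return "User history:\n" + history_block + "\n\nTask: " + task_input


def approx_token_count(text):
    # Crude whitespace proxy; a real system would use the model's own tokenizer.
    return len(text.split())


prompt = build_naive_prompt("Generate a title for the user's new article ...",
                            user_history=["past article 1 ...", "past article 2 ..."])
if approx_token_count(prompt) > MAX_PROMPT_TOKENS:
    print("Full-history prompt exceeds the context budget; "
          "truncation, selective retrieval, or summarization is needed.")
\end{verbatim}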
To address these concerns, a personalized retrieval augmentation framework was proposed <|cite_start|> (Reference: LaMP: When Large Language Models Meet Personalization: This paper highlights the importance of personalization in large language models and introduces the LaMP benchmark -- a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation framework with diverse language tasks and multiple entries for each user profile. It consists of seven personalized tasks, spanning three text classification and four text generation tasks. We additionally propose two retrieval augmentation approaches that retrieve personal items from each user profile for personalizing language model outputs. To this aim, we study various retrieval models, including term matching, semantic matching, and time-aware methods. Extensive experiments on LaMP for zero-shot and fine-tuned language models demonstrate the efficacy of the proposed retrieval augmentation approach and highlight the impact of personalization in various natural language tasks.) <|cite_end|>. This framework selectively extracts relevant user data to construct prompts for downstream language models. Recent work has also shown promise in combining retrieval approaches with LLMs to improve performance in recommender systems <|cite_start|> (Reference: PALR: Personalization Aware LLMs for Recommendation: Large language models (LLMs) have recently received significant attention for their exceptional capabilities. Despite extensive efforts in developing general-purpose LLMs that can be utilized in various natural language processing (NLP) tasks, there has been less research exploring their potential in recommender systems. In this paper, we propose a novel framework, named PALR, which aiming to combine user history behaviors (such as clicks, purchases, ratings, etc.) with LLMs to generate user preferred items. Specifically, we first use user/item interactions as guidance for candidate retrieval. Then we adopt a LLM-based ranking model to generate recommended items. Unlike existing approaches that typically adopt general-purpose LLMs for zero/few-shot recommendation testing or training on small-sized language models (with less than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities and leverage rich item side parametric knowledge, we fine-tune a 7 billion parameters LLM for the ranking purpose. This model takes retrieval candidates in natural language format as input, with instruction which explicitly asking to select results from input candidates during inference. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks.) <|cite_end|> <|cite_start|> (Reference: Rethinking Personalized Ranking at Pinterest: An End-to-End Approach: In this work, we present our journey to revolutionize the personalized recommendation engine through end-to-end learning from raw user actions. We encode user's long-term interest in Pinner- Former, a user embedding optimized for long-term future actions via a new dense all-action loss, and capture user's short-term intention by directly learning from the real-time action sequences. We conducted both offline and online experiments to validate the performance of the new model architecture, and also address the challenge of serving such a complex model using mixed CPU/GPU setup in production. 
The proposed system has been deployed in production at Pinterest and has delivered significant online gains across organic and Ads applications.) <|cite_end|> <|cite_start|> (Reference: GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation: Recent advancements in Natural Language Processing (NLP) have led to the development of NLP-based recommender systems that have shown superior performance. However, current models commonly treat items as mere IDs and adopt discriminative modeling, resulting in limitations of (1) fully leveraging the content information of items and the language modeling capabilities of NLP models; (2) interpreting user interests to improve relevance and diversity; and (3) adapting practical circumstances such as growing item inventories. To address these limitations, we present GPT4Rec, a novel and flexible generative framework inspired by search engines. It first generates hypothetical "search queries" given item titles in a user's history, and then retrieves items for recommendation by searching these queries. The framework overcomes previous limitations by learning both user and item embeddings in the language space. To well-capture user interests with different aspects and granularity for improving relevance and diversity, we propose a multi-query generation technique with beam search. The generated queries naturally serve as interpretable representations of user interests and can be searched to recommend cold-start items. With GPT-2 language model and BM25 search engine, our framework outperforms state-of-the-art methods by $75.7\%$ and $22.2\%$ in Recall@K on two public datasets. Experiments further revealed that multi-query generation with beam search improves both the diversity of retrieved items and the coverage of a user's multi-interests. The adaptiveness and interpretability of generated queries are discussed with qualitative case studies.) <|cite_end|>, as well as general NLP tasks <|cite_start|> (Reference: Personalization and Relevance in NLG: Despite the recent advances in language modeling techniques, personalization remains a challenge for many NLP tasks. In this talk, we will explore personalization through several different lens to understand how we can make progress on this front, and emphasize why human-centered approach is a crucial part of the solution. First, I will challenge the ground-truth assumption in the context of user or situation sensitive language tasks. In other words, I will argue that the same question might be addressed differently by a system, depending on the user or the situation they are currently facing. Then we’ll discuss what are our user needs, and how can we design these tasks to produce useful and relevant responses, but also what potential harms we should be aware of working on personalization [5]. Next, we will look into personalization in the augmentative and alternative communication (AAC) world. Specifically, how through an icon-based language, individuals with compromised language abilities (that may arise due to Traumatic Brain injury (TBI) or Cerebral Palsy (CP)), we can accommodate their needs and what are the challenges in developing icon-based language models [3]. Third, in the process of personalization, models are expected to accommodate and adapt to the specific language and jargon spoken by the user. What are remaining challenges for deep learning architectures in the process of adapting user data or new domains [2, 4]. 
Finally, I will share work-in-progress where through Wizard-of-Oz (WoZ) experiments [1] we identify and learn useful actions of social conversational systems in classroom setting.) <|cite_end|> <|cite_start|> (Reference: Returning the N to NLP: Towards Contextually Personalized Classification Models: Most NLP models today treat language as universal, even though socio- and psycholingustic research shows that the communicated message is influenced by the characteristics of the speaker as well as the target audience. This paper surveys the landscape of personalization in natural language processing and related fields, and offers a path forward to mitigate the decades of deviation of the NLP tools from sociolingustic findings, allowing to flexibly process the “natural” language of each user rather than enforcing a uniform NLP treatment. It outlines a possible direction to incorporate these aspects into neural NLP models by means of socially contextual personalization, and proposes to shift the focus of our evaluation strategies accordingly.) <|cite_end|> <|cite_start|> (Reference: Pchatbot: A Large-Scale Dataset for Personalized Chatbot: Natural language dialogue systems raise great attention recently. As many dialogue models are data-driven, high-quality datasets are essential to these systems. In this paper, we introduce Pchatbot, a large-scale dialogue dataset that contains two subsets collected from Weibo and Judicial forums respectively. To adapt the raw dataset to dialogue systems, we elaborately normalize the raw dataset via processes such as anonymization, deduplication, segmentation, and filtering. The scale of Pchatbot is significantly larger than existing Chinese datasets, which might benefit the data-driven models. Besides, current dialogue datasets for personalized chatbot usually contain several persona sentences or attributes. Different from existing datasets, Pchatbot provides anonymized user IDs and timestamps for both posts and responses. This enables the development of personalized dialogue models that directly learn implicit user personality from the user's dialogue history. Our preliminary experimental study benchmarks several state-of-the-art dialogue models to provide a comparison for future work. The dataset can be publicly accessed at Github.) <|cite_end|> <|cite_start|> (Reference: AI and Personalization: This paper reviews the recent developments at the intersection of personalization and AI in marketing and related fields. We provide a formal definition of personalized policy and review the methodological approaches available for personalization. We discuss scalability, generalizability, and counterfactual validity issues and briefly touch upon advanced methods for online/interactive/dynamic settings. We then summarize the three evaluation approaches for static policies – the Direct method, the Inverse Propensity Score estimator, and the Doubly Robust method. Next, we present a summary of the evaluation approaches for special cases such as continuous actions and dynamic settings. We then summarize the findings on the returns to personalization across various domains, including content recommendation, advertising, and promotions. Next, we discuss the work on the intersection between personalization and welfare. We focus on four of these welfare notions that have been studied in the literature: (1) search costs, (2) privacy, (3) fairness, and (4) polarization. We conclude with a discussion of the remaining challenges and some directions for future research.) 
<|cite_end|> <|cite_start|> (Reference: Personalized Response Generation via Generative Split Memory Network: Despite the impressive successes of generation and dialogue systems, how to endow a text generation system with particular personality traits to deliver more personalized responses remains under-investigated. In this work, we look at how to generate personalized responses for questions on Reddit by utilizing personalized user profiles and posting histories. Specifically, we release an open-domain single-turn dialog dataset made up of 1.5M conversation pairs together with 300k profiles of users and related comments. We then propose a memory network to generate personalized responses in dialogue that utilizes a novel mechanism of splitting memories: one for user profile meta attributes and the other for user-generated information like comment histories. Experimental results show the quantitative and qualitative improvements of our simple split memory network model over the state-of-the-art response generation baselines.) <|cite_end|>.
However, retrieval-based methods are prone to information loss, lack the ability to comprehend user data at a more abstract level, and may suffer from the cold-start problem.
Our research aims to address the aforementioned limitations of both traditional personalization methods and retrieval-based methods with LLMs by proposing the hybrid approach shown in Figure \ref{fig:main}. By integrating retrieval techniques with LLM-generated summaries of user data, we intend to create a more robust personalized system.
To prevent information loss, the user summary offers contextual information at a higher level of abstraction for the downstream task.
To comprehend user data at a deeper level, summary generation is task-aware: the task description is incorporated into the prompt used to generate the summary. For example, for a personalized paraphrase text generation task, the summary model is instructed to pay attention to the user's writing style in addition to the semantic content.
This hybrid approach can also alleviate the cold-start problem and provide personalized outputs in data-sparse scenarios, since summaries for new users can be generated from user data available in other applications or from the user's self-description.
The summaries in our approach can be generated offline and stored, adding negligible runtime latency and enabling systems with strict runtime constraints to bring the power of LLMs to real-world online applications, such as voice assistant scenarios.
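The following minimal sketch (illustrative only, not the implementation used in this work; the summarizer, retriever, and task-model interfaces are hypothetical placeholders) shows how a task-aware user summary generated offline can be combined at runtime with retrieved profile items through a prompt construction function such as $\phi_p$:
\begin{verbatim}
# Illustrative sketch (hypothetical interfaces, not this paper's code).
def build_user_summary(summarize_llm, user_items, task_description):
    # Offline, task-aware summarization: the task description is included so
    # the summary captures task-relevant aspects, e.g. the user's writing
    # style for a personalized paraphrase-generation task.
    prompt = (
        "Task: " + task_description + "\n"
        "Summarize the following user history, covering both its content "
        "and the user's writing style:\n" + "\n".join(user_items)
    )
    return summarize_llm(prompt)   # generated offline and stored

def phi_p(x, retrieved_items, user_summary):
    # Runtime prompt construction: combine the task input x, the retrieved
    # profile items, and the offline user summary.
    return (
        "User summary: " + user_summary + "\n"
        "Relevant user history:\n" + "\n".join(retrieved_items) + "\n"
        "Input: " + x + "\nOutput:"
    )

def personalized_output(task_llm, retriever, x, user_profile, user_summary, k):
    retrieved = retriever(query=x, items=user_profile, top_k=k)
    return task_llm(phi_p(x, retrieved, user_summary))
\end{verbatim}
Because the summary is computed once and cached, only retrieval and the downstream model call occur at request time.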
We demonstrate our method of integrating summarization and retrieval on a publicly available Language Model Personalization (LaMP) benchmark <|cite_start|> (Reference: LaMP: When Large Language Models Meet Personalization: This paper highlights the importance of personalization in large language models and introduces the LaMP benchmark -- a novel benchmark for training and evaluating language models for producing personalized outputs. LaMP offers a comprehensive evaluation framework with diverse language tasks and multiple entries for each user profile. It consists of seven personalized tasks, spanning three text classification and four text generation tasks. We additionally propose two retrieval augmentation approaches that retrieve personal items from each user profile for personalizing language model outputs. To this aim, we study various retrieval models, including term matching, semantic matching, and time-aware methods. Extensive experiments on LaMP for zero-shot and fine-tuned language models demonstrate the efficacy of the proposed retrieval augmentation approach and highlight the impact of personalization in various natural language tasks.) <|cite_end|>, including both text classification and generation tasks across a variety of domains.
Experiments show that our method achieves performance comparable to or better than retrieval augmentation on most tasks.
With our method, the retrieval component can use 75\% less retrieved user data without sacrificing performance on five out of six tasks, and achieves superior performance on two tasks.
In summary, our main contributions are as follows. First, we propose augmenting traditional retrieval-based personalization methods with LLM summarization of user data to address the limitations of existing methods: potential information loss, the inability to understand user data at a high level, and the cold-start challenge. Our method enables powerful LLMs to provide comprehensive information about users with no additional runtime latency. Second, we implemented the proposed approach and conducted experiments on LaMP, a language model personalization benchmark with six public tasks. Given the promise shown in our experimental results, we envision a personalized system that better caters to individual user preferences, especially for new users, by integrating LLM summarization and retrieval.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{images/fig1.png}
\caption{Personalization is achieved by combining runtime-retrieved samples with an offline-generated user summary. Given a textual
input $x$ that describes a task in natural language, the goal is to generate a personalized output $y$ for users. The retrieval model identifies the most relevant items from user data, and the retrieved items along with the offline user summary and $x$ form the basis for creating a prompt. This prompt is constructed using a prompt construction function $\phi_p$.
}
\label{fig:main}
\end{figure} <|paper_end|> | [
"<|reference_start|> End-to-{{End: Driven by the aggressive scaling of modern IC technologies, network-on-chip (NoC) becomes increasingly susceptible to various noise sources. In this paper, adaptive error correction code injection scheme is represented to achieve both high reliability and low latency in various temperature condition. Simulation results show that the latency of proposed scheme is 50∼57% of the 2G4L code without(with smaller) reliability degradation. <|reference_end|>",
"<|reference_start|> PALR: Personalization Aware LLMs for Recommendation: Large language models (LLMs) have recently received significant attention for their exceptional capabilities. Despite extensive efforts in developing general-purpose LLMs that can be utilized in various natural language processing (NLP) tasks, there has been less research exploring their potential in recommender systems. In this paper, we propose a novel framework, named PALR, which aiming to combine user history behaviors (such as clicks, purchases, ratings, etc.) with LLMs to generate user preferred items. Specifically, we first use user/item interactions as guidance for candidate retrieval. Then we adopt a LLM-based ranking model to generate recommended items. Unlike existing approaches that typically adopt general-purpose LLMs for zero/few-shot recommendation testing or training on small-sized language models (with less than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities and leverage rich item side parametric knowledge, we fine-tune a 7 billion parameters LLM for the ranking purpose. This model takes retrieval candidates in natural language format as input, with instruction which explicitly asking to select results from input candidates during inference. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks. <|reference_end|>",
"<|reference_start|> Rethinking Personalized Ranking at Pinterest: An End-to-End Approach: In this work, we present our journey to revolutionize the personalized recommendation engine through end-to-end learning from raw user actions. We encode user's long-term interest in Pinner- Former, a user embedding optimized for long-term future actions via a new dense all-action loss, and capture user's short-term intention by directly learning from the real-time action sequences. We conducted both offline and online experiments to validate the performance of the new model architecture, and also address the challenge of serving such a complex model using mixed CPU/GPU setup in production. The proposed system has been deployed in production at Pinterest and has delivered significant online gains across organic and Ads applications. <|reference_end|>",
"<|reference_start|> Personalized Response Generation via Generative Split Memory Network: Despite the impressive successes of generation and dialogue systems, how to endow a text generation system with particular personality traits to deliver more personalized responses remains under-investigated. In this work, we look at how to generate personalized responses for questions on Reddit by utilizing personalized user profiles and posting histories. Specifically, we release an open-domain single-turn dialog dataset made up of 1.5M conversation pairs together with 300k profiles of users and related comments. We then propose a memory network to generate personalized responses in dialogue that utilizes a novel mechanism of splitting memories: one for user profile meta attributes and the other for user-generated information like comment histories. Experimental results show the quantitative and qualitative improvements of our simple split memory network model over the state-of-the-art response generation baselines. <|reference_end|>"
] | [
4,
8,
9,
15
] | {"<|cite_1|>": "ss-811494", "<|cite_2|>": "ss-2188615", "<|cite_3|>": "ss-811495", "<|multi_cite_4_1|>": "ss-811496", "<|multi_cite_4_2|>": "ss-1364032", "<|cite_5|>": "ss-868166", "<|cite_6|>": "arxiv-488837", "<|cite_7|>": "arxiv-499083", "<|multi_cite_8_1|>": "arxiv-504457", "<|multi_cite_8_2|>": "arxiv-446974", "<|multi_cite_8_3|>": "arxiv-495495", "<|multi_cite_9_1|>": "ss-811497", "<|multi_cite_9_2|>": "ss-1253935", "<|multi_cite_9_3|>": "arxiv-292438", "<|multi_cite_9_4|>": "ss-2479869", "<|multi_cite_9_5|>": "ss-2318530", "<|cite_10|>": "arxiv-499083"} |
2403.15306 | <|paper_start|> Title: HortiBot: An Adaptive Multi-Arm System for Robotic Horticulture of Sweet Peppers
Abstract: HortiBot: An Adaptive Multi-Arm System for Robotic Horticulture of Sweet Peppers: Horticultural tasks such as pruning and selective harvesting are labor intensive and horticultural staff are hard to find. Automating these tasks is challenging due to the semi-structured greenhouse workspaces, changing environmental conditions such as lighting, dense plant growth with many occlusions, and the need for gentle manipulation of non-rigid plant organs. In this work, we present the three-armed system HortiBot, with two arms for manipulation and a third arm as an articulated head for active perception using stereo cameras. Its perception system detects not only peppers, but also peduncles and stems in real time, and performs online data association to build a world model of pepper plants. Collision-aware online trajectory generation allows all three arms to safely track their respective targets for observation, grasping, and cutting. We integrated perception and manipulation to perform selective harvesting of peppers and evaluated the system in lab experiments. Using active perception coupled with end-effector force torque sensing for compliant manipulation, HortiBot achieves high success rates in our indoor pepper plant mock-up.
Introduction
\label{sec:intro}
Horticultural tasks such as pruning, thinning, pollination, and selective harvesting are labor-intensive and need to be carried out several times a season <|cite_start|> (Reference: {Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead: This review article analyzes state‐of‐the‐art and future perspectives for harvesting robots in high‐value crops. The objectives were to characterize the crop environment relevant for robotic harvesting, to perform a literature review on the state‐of‐the‐art of harvesting robots using quantitative measures, and to reflect on the crop environment and literature review to formulate challenges and directions for future research and development. Harvesting robots were reviewed regarding the crop harvested in a production environment, performance indicators, design process techniques used, hardware design decisions, and algorithm characteristics. On average, localization success was 85%, detachment success was 75%, harvest success was 66%, fruit damage was 5%, peduncle damage was 45%, and cycle time was 33 s. A kiwi harvesting robot achieved the shortest cycle time of 1 s. Moreover, the performance of harvesting robots did not improve in the past three decades, and none of these 50 robots was commercialized. Four future challenges with R&D directions were identified to realize a positive trend in performance and to successfully implement harvesting robots in practice: (1) simplifying the task, (2) enhancing the robot, (3) defining requirements and measuring performance, and (4) considering additional requirements for successful implementation. This review article may provide new directions for future automation projects in high‐value crops.) <|cite_end|>.
In contrast to the mechanization of large-scale grain and cereal farms, the automation of precision horticulture requires robots.
Robotic manipulation in horticulture presents several challenges due to semi-structured greenhouse workspaces, variations in environmental conditions such as lighting, complex and irregular plant structures, varying plant organ sizes and shapes, dense plant growth with many occlusions and obstacles, and the need for gentle manipulation of non-rigid plant organs <|cite_start|> (Reference: Selective Harvesting Robotics: Current Research, Trends, and Future Directions: ) <|cite_end|>.
While there is an extensive body of work focusing on fruit detection and localization, research on the full robotic harvesting pipeline is limited <|cite_start|> (Reference: Intelligent robots for fruit harvesting: recent developments and future challenges: ) <|cite_end|>.
Most selective harvesting systems use specialized hardware for manipulators and end-effectors <|cite_start|> (Reference: {Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead: This review article analyzes state‐of‐the‐art and future perspectives for harvesting robots in high‐value crops. The objectives were to characterize the crop environment relevant for robotic harvesting, to perform a literature review on the state‐of‐the‐art of harvesting robots using quantitative measures, and to reflect on the crop environment and literature review to formulate challenges and directions for future research and development. Harvesting robots were reviewed regarding the crop harvested in a production environment, performance indicators, design process techniques used, hardware design decisions, and algorithm characteristics. On average, localization success was 85%, detachment success was 75%, harvest success was 66%, fruit damage was 5%, peduncle damage was 45%, and cycle time was 33 s. A kiwi harvesting robot achieved the shortest cycle time of 1 s. Moreover, the performance of harvesting robots did not improve in the past three decades, and none of these 50 robots was commercialized. Four future challenges with R&D directions were identified to realize a positive trend in performance and to successfully implement harvesting robots in practice: (1) simplifying the task, (2) enhancing the robot, (3) defining requirements and measuring performance, and (4) considering additional requirements for successful implementation. This review article may provide new directions for future automation projects in high‐value crops.) <|cite_end|>.
In a recent review, Rajendran~\etal <|cite_start|> (Reference: Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control: This paper provides an overview of the current state-of-the-art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control. The paper also discusses the potential benefits of integrating AI and soft robots and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.) <|cite_end|> suggest equipping selective harvesting robots with cooperative active and interactive perception for improved fruit detection and force sensing-enabled two-arm manipulation capabilities---to match humans in handling complex fruit clusters.
With humanoids having potential to become general-purpose autonomous workers adapting to different tasks <|cite_start|> (Reference: Advancements in humanoid robots: A comprehensive review and future prospects: This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure, control and decision-making, and perception and interaction, a holistic overview of the current state of humanoid robot research is presented. Furthermore, emerging challenges in the field are identified, emphasizing the necessity for a deeper understanding of biological motion mechanisms, improved structural design, enhanced material applications, advanced drive and control methods, and efficient energy utilization. The integration of bionics, brain-inspired intelligence, mechanics, and control is underscored as a promising direction for the development of advanced humanoid robotic systems. This paper serves as an invaluable resource, offering insightful guidance to researchers in the field, while contributing to the ongoing evolution and potential of humanoid robots across diverse domains.) <|cite_end|>, we aim to close the research gap in horticulture manipulation by proposing a non-specialized solution.
HortiBot is a three-arm system for active perception and dual-arm manipulation in horticulture.
The highly flexible robot is built from off-the-shelf components for multiple horticultural tasks.
Unlike most other works that focus on only vision, control, or motion planning, we present a fully integrated system. Our contributions include:
\begin{itemize}[leftmargin=3ex]
\item work space analysis and design of a three-arm system with stereo cameras and force-torque sensors,
\item visual perception of sweet pepper plants combining fruit instance mapping with a novel peduncle detection approach and stem detection,
\item online active perception during manipulation for refining of targeted pepper and peduncle localization,
\item dual-arm manipulation using parameterized motion primitives and collision-aware online trajectory generation, and
\item a thorough evaluation of the selective harvesting capabilities in lab experiments using real sweet peppers.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.97\columnwidth,clip,trim=200px 0px 00px 200px]{images/greenhouse_alpha.png}
\captionsetup{width=0.99\columnwidth, justification=justified}
\caption{HortiBot: A three-arm system with active perception and dual-arm manipulation for robotic horticulture. The right arm is used for grasping, the left arm performs cutting, and the central arm moves stereo cameras for mapping and online observation.}
\label{fig:cover}
\end{figure}
Related Work
\label{sec:related}
With advancements in robotics and deep learning methods, different aspects of horticulture have been automated using robotic systems such as pollination <|cite_start|> (Reference: Design of a lightweight robotic arm for kiwifruit pollination: ) <|cite_end|> and dormant pruning <|cite_start|> (Reference: Semiautonomous Precision Pruning of Upright Fruiting Offshoot Orchard Systems: An Integrated Approach: Dormant pruning is an important orchard activity for maintaining tree health and producing high-quality fruit. Due to decreasing worker availability, pruning is a prime candidate for robotics. However, pruning also represents a uniquely difficult problem, requiring robust systems for perception, pruning point determination, and manipulation that must operate under variable lighting conditions and in complex, highly unstructured environments. In this article, we introduce a system for pruning modern planar orchard architectures with simple pruning rules that combines various subsystems from our previous work on perception and manipulation. The integrated system demonstrates the ability to autonomously detect and cut pruning targets with minimal control of the environment, laying the groundwork for a fully autonomous system in the future. We validate the performance of our system through field trials in a sweet cherry orchard, ultimately achieving a cutting success rate of 58% across 10 trees. Though not fully robust and requiring improvements in throughput, our system is the first to operate on fruit trees and represents a useful base platform to be improved in the future.) <|cite_end|>.
Of the many tasks in the horticultural industry, selective harvesting is the one most often addressed by robotic solutions <|cite_start|> (Reference: Intelligent robots for fruit harvesting: recent developments and future challenges: ) <|cite_end|>.
The typical phases of selective harvesting are fruit detection and localization, end-effector motion planning, fruit attachment to the end-effector, fruit detachment from the plant, and transport to a storage container.
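Read as a pipeline, these phases execute sequentially and each one is a potential failure point, which is how surveys typically report per-stage success rates. A minimal sketch of such a cycle (illustrative only; the phase methods are hypothetical placeholders, not an actual harvester API) could look as follows:
\begin{verbatim}
# Illustrative sketch only; the phase methods are hypothetical placeholders.
PHASES = [
    "detect_and_localize_fruit",
    "plan_end_effector_motion",
    "attach_fruit_to_end_effector",
    "detach_fruit_from_plant",
    "transport_to_container",
]

def harvest_one_fruit(robot, scene):
    """Run the phases in order; a failure in any phase aborts the cycle."""
    for phase in PHASES:
        if not getattr(robot, phase)(scene):
            return {"success": False, "failed_phase": phase}
    return {"success": True, "failed_phase": None}
\end{verbatim}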
The surveys compiled over the years <|cite_start|> (Reference: {Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead: This review article analyzes state‐of‐the‐art and future perspectives for harvesting robots in high‐value crops. The objectives were to characterize the crop environment relevant for robotic harvesting, to perform a literature review on the state‐of‐the‐art of harvesting robots using quantitative measures, and to reflect on the crop environment and literature review to formulate challenges and directions for future research and development. Harvesting robots were reviewed regarding the crop harvested in a production environment, performance indicators, design process techniques used, hardware design decisions, and algorithm characteristics. On average, localization success was 85%, detachment success was 75%, harvest success was 66%, fruit damage was 5%, peduncle damage was 45%, and cycle time was 33 s. A kiwi harvesting robot achieved the shortest cycle time of 1 s. Moreover, the performance of harvesting robots did not improve in the past three decades, and none of these 50 robots was commercialized. Four future challenges with R&D directions were identified to realize a positive trend in performance and to successfully implement harvesting robots in practice: (1) simplifying the task, (2) enhancing the robot, (3) defining requirements and measuring performance, and (4) considering additional requirements for successful implementation. This review article may provide new directions for future automation projects in high‐value crops.) <|cite_end|> <|cite_start|> (Reference: Selective Harvesting Robotics: Current Research, Trends, and Future Directions: ) <|cite_end|> <|cite_start|> (Reference: Intelligent robots for fruit harvesting: recent developments and future challenges: ) <|cite_end|> <|cite_start|> (Reference: Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control: This paper provides an overview of the current state-of-the-art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control. The paper also discusses the potential benefits of integrating AI and soft robots and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.) <|cite_end|> show that while substantial progress has been made in fruit detection and robotic hardware customization, the harvesting systems are still not ready for commercialization due to low success rates and high cycle times.
Whereas most attempts at autonomous harvesting have focused on citrus fruits or apples due to sparse foliage and easier localization, there have been only three reported attempts on the development of a full pipeline for sweet pepper harvesting: CROPS <|cite_start|> (Reference: {Performance evaluation of a harvesting robot for sweet pepper: This paper evaluates a robot developed for autonomous harvesting of sweet peppers in a commercial greenhouse. Objectives were to assess robot performance under unmodified and simplified crop conditions, using two types of end effectors (Fin Ray; Lip type), and to evaluate the performance contribution of stem‐dependent determination of the grasp pose. We describe and discuss the performance of hardware and software components developed for fruit harvesting in a complex environment that includes lighting variation, occlusions, and densely spaced obstacles. After simplifying the crop, harvest success significantly improved from 6% to 26% (Fin Ray) and from 2% to 33% (Lip type). We observed a decrease in stem damage and an increase in grasp success after enabling stem‐dependent determination of the grasp pose. Generally, the robot had difficulty in successfully picking sweet peppers and we discuss possible causes. The robot's novel capability of perceiving the stem of a plant may serve as useful functionality for future robots.) <|cite_end|>, Harvey <|cite_start|> (Reference: Performance improvements of a sweet pepper harvesting robot in protected cropping environments: Using robots to harvest sweet peppers in protected cropping environments has remained unsolved despite considerable effort by the research community over several decades. In this paper, we present the robotic harvester, Harvey, designed for sweet peppers in protected cropping environments that achieved a 76.5% success rate on 68 fruit (within a modified scenario) which improves upon our prior work which achieved 58% on 24 fruit and related sweet pepper harvesting work which achieved 33% on 39 fruit (for their best tool in a modified scenario). This improvement was primarily achieved through the introduction of a novel peduncle segmentation system using an efficient deep convolutional neural network, in conjunction with three‐dimensional postfiltering to detect the critical cutting location. We benchmark the peduncle segmentation against prior art demonstrating an improvement in performance with a F1 score of 0.564 compared to 0.302. The robotic harvester uses a perception pipeline to detect a target sweet pepper and an appropriate grasp and cutting pose used to determine the trajectory of a multimodal harvesting tool to grasp the sweet pepper and cut it from the plant. A novel decoupling mechanism enables the gripping and cutting operations to be performed independently. We perform an in‐depth analysis of the full robotic harvesting system to highlight bottlenecks and failure points that future work could address.) <|cite_end|>, and SWEEPER <|cite_start|> (Reference: Development of a sweet pepper harvesting robot: This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six degrees of freedom industrial arm equipped with a specially designed end effector, RGB‐D camera, high‐end computer with graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. 
All is mounted on a cart that autonomously drives on pipe rails and concrete floor in the end‐user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4‐week long testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have proven that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% for the best fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.) <|cite_end|>.
Sweet peppers are among the most difficult crops to autonomously harvest due to variation in shape and size, and severe occlusions by leaves leading to failures in both pepper and peduncle localization <|cite_start|> (Reference: {Performance evaluation of a harvesting robot for sweet pepper: This paper evaluates a robot developed for autonomous harvesting of sweet peppers in a commercial greenhouse. Objectives were to assess robot performance under unmodified and simplified crop conditions, using two types of end effectors (Fin Ray; Lip type), and to evaluate the performance contribution of stem‐dependent determination of the grasp pose. We describe and discuss the performance of hardware and software components developed for fruit harvesting in a complex environment that includes lighting variation, occlusions, and densely spaced obstacles. After simplifying the crop, harvest success significantly improved from 6% to 26% (Fin Ray) and from 2% to 33% (Lip type). We observed a decrease in stem damage and an increase in grasp success after enabling stem‐dependent determination of the grasp pose. Generally, the robot had difficulty in successfully picking sweet peppers and we discuss possible causes. The robot's novel capability of perceiving the stem of a plant may serve as useful functionality for future robots.) <|cite_end|>.
In CROPS <|cite_start|> (Reference: {Performance evaluation of a harvesting robot for sweet pepper: This paper evaluates a robot developed for autonomous harvesting of sweet peppers in a commercial greenhouse. Objectives were to assess robot performance under unmodified and simplified crop conditions, using two types of end effectors (Fin Ray; Lip type), and to evaluate the performance contribution of stem‐dependent determination of the grasp pose. We describe and discuss the performance of hardware and software components developed for fruit harvesting in a complex environment that includes lighting variation, occlusions, and densely spaced obstacles. After simplifying the crop, harvest success significantly improved from 6% to 26% (Fin Ray) and from 2% to 33% (Lip type). We observed a decrease in stem damage and an increase in grasp success after enabling stem‐dependent determination of the grasp pose. Generally, the robot had difficulty in successfully picking sweet peppers and we discuss possible causes. The robot's novel capability of perceiving the stem of a plant may serve as useful functionality for future robots.) <|cite_end|>, the focus of the research was on end-effector design, with color-based pepper detection and time of flight measurement for 3D localization.
Bac\etal <|cite_start|> (Reference: {Performance evaluation of a harvesting robot for sweet pepper: This paper evaluates a robot developed for autonomous harvesting of sweet peppers in a commercial greenhouse. Objectives were to assess robot performance under unmodified and simplified crop conditions, using two types of end effectors (Fin Ray; Lip type), and to evaluate the performance contribution of stem‐dependent determination of the grasp pose. We describe and discuss the performance of hardware and software components developed for fruit harvesting in a complex environment that includes lighting variation, occlusions, and densely spaced obstacles. After simplifying the crop, harvest success significantly improved from 6% to 26% (Fin Ray) and from 2% to 33% (Lip type). We observed a decrease in stem damage and an increase in grasp success after enabling stem‐dependent determination of the grasp pose. Generally, the robot had difficulty in successfully picking sweet peppers and we discuss possible causes. The robot's novel capability of perceiving the stem of a plant may serve as useful functionality for future robots.) <|cite_end|> also developed a stem-dependent grasp pose calculation.
However, neither sweet pepper pose estimation nor peduncle localization was the focus of this work, which led to low success rates and high cycle times.
In Harvey <|cite_start|> (Reference: Performance improvements of a sweet pepper harvesting robot in protected cropping environments: Using robots to harvest sweet peppers in protected cropping environments has remained unsolved despite considerable effort by the research community over several decades. In this paper, we present the robotic harvester, Harvey, designed for sweet peppers in protected cropping environments that achieved a 76.5% success rate on 68 fruit (within a modified scenario) which improves upon our prior work which achieved 58% on 24 fruit and related sweet pepper harvesting work which achieved 33% on 39 fruit (for their best tool in a modified scenario). This improvement was primarily achieved through the introduction of a novel peduncle segmentation system using an efficient deep convolutional neural network, in conjunction with three‐dimensional postfiltering to detect the critical cutting location. We benchmark the peduncle segmentation against prior art demonstrating an improvement in performance with a F1 score of 0.564 compared to 0.302. The robotic harvester uses a perception pipeline to detect a target sweet pepper and an appropriate grasp and cutting pose used to determine the trajectory of a multimodal harvesting tool to grasp the sweet pepper and cut it from the plant. A novel decoupling mechanism enables the gripping and cutting operations to be performed independently. We perform an in‐depth analysis of the full robotic harvesting system to highlight bottlenecks and failure points that future work could address.) <|cite_end|>, a sweet pepper pose estimation and grasping algorithm <|cite_start|> (Reference: Sweet pepper pose detection and grasping for automated crop harvesting: This paper presents a method for estimating the 6DOF pose of sweet-pepper (capsicum) crops for autonomous harvesting via a robotic manipulator. The method uses the Kinect Fusion algorithm to robustly fuse RGB-D data from an eye-in-hand camera combined with a colour segmentation and clustering step to extract an accurate representation of the crop. The 6DOF pose of the sweet peppers is then estimated via a nonlinear least squares optimisation by fitting a superellipsoid to the segmented sweet pepper. The performance of the method is demonstrated on a real 6DOF manipulator with a custom gripper. The method is shown to estimate the 6DOF pose successfully enabling the manipulator to grasp sweet peppers for a range of different orientations. The results obtained improve largely on the performance of grasping when compared to a naive approach, which does not estimate the orientation of the crop.) <|cite_end|> together with MiniInception <|cite_start|> (Reference: Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics: We propose a novel approach for training deep convolutional neural networks (DCNNs) that allows us to tradeoff complexity and accuracy to learn lightweight models suitable for robotic platforms such as AgBot II (which performs automated weed management). Our approach consists of three stages, the first is to adapt a pre-trained model to the task at hand. This provides state-of-the-art performance but at the cost of high computational complexity resulting in a low frame rate of just 0.12 frames per second (fps). Second, we use the adapted model and employ model compression techniques to learn a lightweight DCNN that is less accurate but has two orders of magnitude fewer parameters. 
Third, $K$ lightweight models are combined as a mixture model to further enhance the performance of the lightweight models. Applied to the challenging task of weed segmentation, we improve the accuracy from 85.9%, using a traditional approach, to 93.9% by adapting a complicated pre-trained DCNN with 25M parameters (Inception-v3). The downside to this adapted model, Adapted-IV3, is that it can only process 0.12 fps. To make this approach fast while still retaining accuracy, we learn lightweight DCNNs which when combined can achieve accuracy greater than 90% while using considerably fewer parameters capable of processing between 1.07 and 1.83 fps, up to an order of magnitude faster and up to an order of magnitude fewer parameters.) <|cite_end|>, a mixture of lightweight CNN approach for peduncle segmentation, was deployed to improve the harvesting performance.
However, peduncle localization accuracy remained limited, with an F1-score of 0.502, which led to detachment failures.
Furthermore, Harvey used a customized end-effector with a suction cup and did not address motion planning for crop damage avoidance or active perception.
Arad\etal <|cite_start|> (Reference: Development of a sweet pepper harvesting robot: This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six degrees of freedom industrial arm equipped with a specially designed end effector, RGB‐D camera, high‐end computer with graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All is mounted on a cart that autonomously drives on pipe rails and concrete floor in the end‐user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4‐week long testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have proven that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% for the best fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.) <|cite_end|> focused on finding the best fit crop conditions and on testing\,\&\,validation of SWEEPER in a commercial glasshouse.
Semantic segmentation-based fruit and stem detection were deployed on a 6-DoF industrial robot arm with a customized end-effector, which caught the fruit after harvesting.
Due to the lack of peduncle localization, cutting failures were reported.
To the best of our knowledge, HortiBot is the first attempt at selective harvesting in general, and sweet peppers in particular, that focuses on all the aspects of harvesting: fruit detection and peduncle localization, active perception, environment-aware motion planning and force sensing-enabled adaptive manipulation.
HortiBot is a general-purpose system that can also be used for other horticulture operations such as leaf pruning and pollination. <|paper_end|> | [
"<|reference_start|> {Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead: This review article analyzes state‐of‐the‐art and future perspectives for harvesting robots in high‐value crops. The objectives were to characterize the crop environment relevant for robotic harvesting, to perform a literature review on the state‐of‐the‐art of harvesting robots using quantitative measures, and to reflect on the crop environment and literature review to formulate challenges and directions for future research and development. Harvesting robots were reviewed regarding the crop harvested in a production environment, performance indicators, design process techniques used, hardware design decisions, and algorithm characteristics. On average, localization success was 85%, detachment success was 75%, harvest success was 66%, fruit damage was 5%, peduncle damage was 45%, and cycle time was 33 s. A kiwi harvesting robot achieved the shortest cycle time of 1 s. Moreover, the performance of harvesting robots did not improve in the past three decades, and none of these 50 robots was commercialized. Four future challenges with R&D directions were identified to realize a positive trend in performance and to successfully implement harvesting robots in practice: (1) simplifying the task, (2) enhancing the robot, (3) defining requirements and measuring performance, and (4) considering additional requirements for successful implementation. This review article may provide new directions for future automation projects in high‐value crops. <|reference_end|>",
"<|reference_start|> Selective Harvesting Robotics: Current Research, Trends, and Future Directions: <|reference_end|>",
"<|reference_start|> {Performance evaluation of a harvesting robot for sweet pepper: This paper evaluates a robot developed for autonomous harvesting of sweet peppers in a commercial greenhouse. Objectives were to assess robot performance under unmodified and simplified crop conditions, using two types of end effectors (Fin Ray; Lip type), and to evaluate the performance contribution of stem‐dependent determination of the grasp pose. We describe and discuss the performance of hardware and software components developed for fruit harvesting in a complex environment that includes lighting variation, occlusions, and densely spaced obstacles. After simplifying the crop, harvest success significantly improved from 6% to 26% (Fin Ray) and from 2% to 33% (Lip type). We observed a decrease in stem damage and an increase in grasp success after enabling stem‐dependent determination of the grasp pose. Generally, the robot had difficulty in successfully picking sweet peppers and we discuss possible causes. The robot's novel capability of perceiving the stem of a plant may serve as useful functionality for future robots. <|reference_end|>",
"<|reference_start|> Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics: We propose a novel approach for training deep convolutional neural networks (DCNNs) that allows us to tradeoff complexity and accuracy to learn lightweight models suitable for robotic platforms such as AgBot II (which performs automated weed management). Our approach consists of three stages, the first is to adapt a pre-trained model to the task at hand. This provides state-of-the-art performance but at the cost of high computational complexity resulting in a low frame rate of just 0.12 frames per second (fps). Second, we use the adapted model and employ model compression techniques to learn a lightweight DCNN that is less accurate but has two orders of magnitude fewer parameters. Third, $K$ lightweight models are combined as a mixture model to further enhance the performance of the lightweight models. Applied to the challenging task of weed segmentation, we improve the accuracy from 85.9%, using a traditional approach, to 93.9% by adapting a complicated pre-trained DCNN with 25M parameters (Inception-v3). The downside to this adapted model, Adapted-IV3, is that it can only process 0.12 fps. To make this approach fast while still retaining accuracy, we learn lightweight DCNNs which when combined can achieve accuracy greater than 90% while using considerably fewer parameters capable of processing between 1.07 and 1.83 fps, up to an order of magnitude faster and up to an order of magnitude fewer parameters. <|reference_end|>"
] | [
9,
10,
17,
21
] | {"<|cite_1|>": "ss-1013716", "<|cite_2|>": "ss-686769", "<|cite_3|>": "ss-915413", "<|cite_4|>": "ss-1013716", "<|cite_5|>": "arxiv-498210", "<|cite_6|>": "ss-876927", "<|cite_7|>": "ss-880556", "<|cite_8|>": "ss-1245856", "<|cite_9|>": "ss-915413", "<|multi_cite_10_1|>": "ss-1013716", "<|multi_cite_10_2|>": "ss-686769", "<|multi_cite_10_3|>": "ss-915413", "<|multi_cite_10_4|>": "arxiv-498210", "<|cite_11|>": "ss-730293", "<|cite_12|>": "ss-1961425", "<|cite_13|>": "ss-1961426", "<|cite_14|>": "ss-730293", "<|cite_15|>": "ss-730293", "<|cite_16|>": "ss-730293", "<|cite_17|>": "ss-1961425", "<|cite_18|>": "ss-932073", "<|cite_19|>": "ss-1574122", "<|cite_20|>": "ss-1961426"} |
1909.03567 | <|paper_start|> Title: What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring
Abstract: What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring: Although systematic biases in decision-making are widely documented, the ways in which they emerge from different sources is less understood. We present a controlled experimental platform to study gender bias in hiring by decoupling the effect of world distribution (the gender breakdown of candidates in a specific profession) from bias in human decision-making. We explore the effectiveness of \textit{representation criteria}, fixed proportional display of candidates, as an intervention strategy for mitigation of gender bias by conducting experiments measuring human decision-makers' rankings for who they would recommend as potential hires. Experiments across professions with varying gender proportions show that balancing gender representation in candidate slates can correct biases for some professions where the world distribution is skewed, although doing so has no impact on other professions where human persistent preferences are at play. We show that the gender of the decision-maker, complexity of the decision-making task and over- and under-representation of genders in the candidate slate can all impact the final decision. By decoupling sources of bias, we can better isolate strategies for bias mitigation in human-in-the-loop systems.
Introduction
\noindent Machine learning can aid decision-making and is used in recommendation systems that play increasingly prevalent roles in the world. We now deploy systems to help hire candidates, determine who to police more <|cite_start|> (Reference: Fairness and accountability design needs for algorithmic support in high stakes public sector decision-making: Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions-like taxation, justice, and child protection-are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning-absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the 'street-level bureaucrats' on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.) <|cite_end|>, and assess the likelihood of an individual to recidivate on a crime <|cite_start|> (Reference: False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks": PROPUBLICA RECENTLY RELEASED a much-heralded investigative report claim ing that a risk assessment tool (known as the COMPAS) used in criminal justice is biased against black defendants.12 The report heavily implied that such bias is inherent in all actuarial risk assessment instruments (ARAIs). We think ProPublica’s report was based on faulty statistics and data analysis, and that the report failed to show that the COMPAS itself is racially biased, let alone that other risk instruments are biased. Not only do ProPublica’s results contradict several com prehensive existing studies concluding that actuarial risk can be predicted free of racial) <|cite_end|>. Because these systems are trained on real world data, they often produce biased decision outcomes in a manner that is discriminatory against underrepresented groups. Systems have been found to unfairly discriminate against defendants of color in assessing bail <|cite_start|> (Reference: False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks": PROPUBLICA RECENTLY RELEASED a much-heralded investigative report claim ing that a risk assessment tool (known as the COMPAS) used in criminal justice is biased against black defendants.12 The report heavily implied that such bias is inherent in all actuarial risk assessment instruments (ARAIs). We think ProPublica’s report was based on faulty statistics and data analysis, and that the report failed to show that the COMPAS itself is racially biased, let alone that other risk instruments are biased. 
Not only do ProPublica’s results contradict several com prehensive existing studies concluding that actuarial risk can be predicted free of racial) <|cite_end|>, incorrectly classify minority groups in facial recognition tasks <|cite_start|> (Reference: {Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products: Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits, as scholarship on the impact of algorithmic audits on increasing algorithmic fairness and transparency in commercial systems is nascent. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender and skin type performance disparities in commercial facial analysis models. This paper 1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, 2) presents new performance metrics from targeted companies IBM, Microsoft and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, 3) provides performance results on PPB by non-target companies Amazon and Kairos and, 4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new API versions. All targets reduced accuracy disparities between males and females and darker and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup, that underwent a 17.7% - 30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72% to 8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively.) <|cite_end|>, and engage in wage theft for honest workers <|cite_start|> (Reference: Taking a hit: Designing around rejection, mistrust, risk, and workers' experiences in amazon mechanical turk: Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers' perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. Unfair rejections can result from poorly-designed tasks, unclear instructions, technical errors, and malicious Requesters. Because the AMT policy and platform provide little recourse to Turkers, they adopt strategies to minimize risk: avoiding new and known bad Requesters, sharing information with other Turkers, and choosing low-risk tasks. Through a series of ideas inspired by these findings-including notifying Turkers and Requesters of a broken task, returning rejected work to Turkers for repair, and providing collective dispute resolution mechanisms-we argue that making reducing risk and building trust a first-class design goal can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.) <|cite_end|>. 
While much of the algorithmic fairness literature has focused on understanding bias from algorithms in isolation <|cite_start|> (Reference: Group fairness under composition: We examine the composition properties of a large class of statistical group fairness definitions. One corollary of our mostly negative results is that the group intersection techniques proposed by [Kearns, Neel, Roth and Wu 2017] and [Hebert-Johnson, Kim, Reingold and Rothblum 2017] may degrade under composition. We show several cases where group fairness definitions give mis-leading signals under composition and conclude that additional context is needed to evaluate group fairness under composition.) <|cite_end|>, little is known about how human decisions, when interacting with these systems, are impacted <|cite_start|> (Reference: Disparate interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments: Despite vigorous debates about the technical characteristics of risk assessments being deployed in the U.S. criminal justice system, remarkably little research has studied how these tools affect actual decision-making processes. After all, risk assessments do not make definitive decisions---they inform judges, who are the final arbiters. It is therefore essential that considerations of risk assessments be informed by rigorous studies of how judges actually interpret and use them. This paper takes a first step toward such research on human interactions with risk assessments through a controlled experimental study on Amazon Mechanical Turk. We found several behaviors that call into question the supposed efficacy and fairness of risk assessments: our study participants 1) underperformed the risk assessment even when presented with its predictions, 2) could not effectively evaluate the accuracy of their own or the risk assessment's predictions, and 3) exhibited behaviors fraught with "disparate interactions," whereby the use of risk assessments led to higher risk predictions about black defendants and lower risk predictions about white defendants. These results suggest the need for a new "algorithm-in-the-loop" framework that places machine learning decision-making aids into the sociotechnical context of improving human decisions rather than the technical context of generating the best prediction in the abstract. If risk assessments are to be used at all, they must be grounded in rigorous evaluations of their real-world impacts instead of in their theoretical potential.) <|cite_end|>.
In this paper, we study algorithmic decision-making in hiring. While there exists a long history of studying hiring discrimination in fields like economics and social psychology, there has been little work done on the interplay between these demonstrated biases and the influence of algorithmic systems, especially across a wide variety of professions and study participants. Here, we conduct a large-scale experiment studying \textit{screening} or \textit{recommendation systems} that choose a person (or set of people) from a pool of candidates <|cite_start|> (Reference: Discrimination in the Age of Algorithms: The law forbids discrimination. But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.) <|cite_end|>. Figure 1 shows the process flow for these hybrid systems, also known as human-in-the-loop or algorithm-in-the-loop systems. Data is gathered from the existing hiring pools in the world, often in larger quantities than a human can feasibly assess, and fed into an algorithm, which then screens for a candidate slate. The human decision-maker utilizes this filtered list to produce the final decision on whether or not a candidate should be recommended for hire. Biased gender hiring recommendations can stem from many different sources, including (1) the world distribution (the gender breakdown of candidates in a given profession), (2) algorithmic bias in what is displayed to the human, and (3) human decision-making itself.
\begin{figure}[t]
\centering
\includegraphics[width=.95\columnwidth]{Figure1v3.png}
\caption{A high-level schematic of a hybrid hiring system. A biased decision can be influenced by world, algorithmic, and human bias. Representation criteria are an intervention deployed when the candidate slate is generated.}
\end{figure}
\noindent {\bfseries Approach}
This study seeks to understand the impact of different sources of bias on hiring, specifically what properties of the candidate slate impact human hiring decisions. We define a biased decision to mean any difference in the system outcome such that one gender is favored over another in a manner that does not correspond to the gender distribution of the candidate slate as fed into the system. For example, imagine that two systems, A and B, both receive the same input distribution (50-50 M-F candidates). If System A produces outcomes that result in a 50-50 distribution of M-F candidates recommended for hire but System B does not, then System B would be a biased system. We utilize the same reasoning for humans in assessing output decisions with respect to the input distribution, which may come from the world (in traditional hiring practices) or from an algorithm (in hybrid systems). This view ascribes no notion of fairness or justice to the decision outcome.
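Under this definition, bias can be quantified by comparing the gender make-up of the recommended candidates with the gender make-up of the slate that was shown. The following is a minimal, hypothetical illustration of such a measure (the helper and its inputs are ours, not the exact statistic used in the analysis):
\begin{verbatim}
from collections import Counter

def selection_bias(slate_genders, chosen_genders):
    """Difference between the share of female candidates among those
    recommended and their share in the displayed slate; 0.0 means the
    outcome mirrors the input distribution (illustrative only)."""
    share_in = Counter(slate_genders)["F"] / len(slate_genders)
    share_out = Counter(chosen_genders)["F"] / len(chosen_genders)
    return share_out - share_in

# A balanced slate of 8 from which 3 women and 1 man are recommended:
selection_bias(["F"] * 4 + ["M"] * 4, ["F", "F", "F", "M"])  # -> 0.25
\end{verbatim}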
We conduct experiments using Amazon Mechanical Turk to study how participants recommend candidates for different professions given candidate slates where we control factors such as education and experience, and artificially assign distributions of female and male candidates (see Figure 2). We generate profiles where we replace the names and pronouns displayed in the text and hold all other factors constant. Using representation criteria (i.e., fixed proportional display of gender distribution on the candidate slate), we randomly assign gender and ordering of profiles on each individual task to study how hiring decisions vary across different professions and gender distributions (an illustrative sketch of this slate construction is given after the questions below). This design affords us the following advantage: for any candidate slate, we can ask ``how would the decision outcome be different if the particular gender of the candidates were changed?''. We ask participants to rank, out of 8 total, their top 4 candidates to recommend to a friend. We vary the proportion of M-F candidates in each profession and observe the impact of representation criteria on hiring outcomes. By treating the specific gender distributions displayed to a human decision-maker as fixed controls, we can attribute any observed disparity in hiring outcomes to bias in human decision-making, rather than to algorithmic or world distribution biases. We compare these to baseline outcomes of the study conducted on the current world distribution of each profession and an AI model trained on word embeddings, both of which represent systems that do not take representation criteria into account. We ask the following questions:
\begin{enumerate}
\item Does balancing the gender distribution in candidate slates mitigate bias? How does this effect vary across different professions?
\item In professions where this intervention is not enough, does over-representation help?
\item How do personal features of the decision-maker, such as gender, impact the outcome?
\end{enumerate}
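As a concrete illustration of the slate construction described above, each task draws a fixed number of female and male candidates according to the chosen representation criterion, assigns gender-specific names and pronouns to otherwise identical profiles, and randomises the display order. The code below is a hypothetical sketch (profile fields and name lists are made up for illustration):
\begin{verbatim}
import random

def build_slate(profiles, n_female, n_male, female_names, male_names):
    """Assign genders to otherwise identical profiles at a fixed F:M ratio
    and randomise the display order (illustrative sketch)."""
    assert len(profiles) == n_female + n_male
    genders = ["F"] * n_female + ["M"] * n_male
    random.shuffle(genders)  # random assignment of gender to profiles
    slate = []
    for profile, gender in zip(profiles, genders):
        name = random.choice(female_names if gender == "F" else male_names)
        pronouns = ("she", "her") if gender == "F" else ("he", "him")
        slate.append({"text": profile, "name": name,
                      "pronouns": pronouns, "gender": gender})
    random.shuffle(slate)    # random ordering on the page
    return slate

# e.g. a balanced slate of 8 candidates (4 female, 4 male):
# slate = build_slate(profiles, 4, 4, ["Maria", "Aisha"], ["James", "Wei"])
\end{verbatim}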
\begin{figure}[t]
\centering
\includegraphics[width=.95\columnwidth]{intervention_flow.png}
\caption{Our experimental design, highlighting the different candidate slates that can be displayed to the decision-maker. Representation criteria represent the intervention tested and world baselines highlight examples of existing processes.}
\end{figure}
\noindent {\bfseries Takeaways}
Our results suggest three key takeaways regarding the relationship between world distribution bias and human decision-making. First, when genders are balanced in the candidate slate, gender bias in many professions with skewed world distributions can be mitigated. However, there are some professions in which this intervention is not enough to completely correct bias. Second, for professions like nannies and OBGYNs, no adjustment of representation criteria, even to the point of extreme over-representation, can fully correct for biased outcomes. This suggests that there are {\em persistent preferences} at play regarding which genders people prefer for specific jobs, independent of how candidates are displayed. Third, even within the same profession, personal features of the decision-maker, such as gender, impact both the direction and strength of decision bias. As we seek to understand how algorithms can be deployed safely and \textit{fairly} in the real world, we must also study how bias from algorithms impacts decision-making by understanding the effect of different sources of bias on decision outcomes.
"<|reference_start|> False Positives, False Negatives, and False Analyses: A Rejoinder to \"Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks\": PROPUBLICA RECENTLY RELEASED a much-heralded investigative report claim ing that a risk assessment tool (known as the COMPAS) used in criminal justice is biased against black defendants.12 The report heavily implied that such bias is inherent in all actuarial risk assessment instruments (ARAIs). We think ProPublica’s report was based on faulty statistics and data analysis, and that the report failed to show that the COMPAS itself is racially biased, let alone that other risk instruments are biased. Not only do ProPublica’s results contradict several com prehensive existing studies concluding that actuarial risk can be predicted free of racial <|reference_end|>",
"<|reference_start|> False Positives, False Negatives, and False Analyses: A Rejoinder to \"Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks\": PROPUBLICA RECENTLY RELEASED a much-heralded investigative report claim ing that a risk assessment tool (known as the COMPAS) used in criminal justice is biased against black defendants.12 The report heavily implied that such bias is inherent in all actuarial risk assessment instruments (ARAIs). We think ProPublica’s report was based on faulty statistics and data analysis, and that the report failed to show that the COMPAS itself is racially biased, let alone that other risk instruments are biased. Not only do ProPublica’s results contradict several com prehensive existing studies concluding that actuarial risk can be predicted free of racial <|reference_end|>",
"<|reference_start|> {Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products: Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits, as scholarship on the impact of algorithmic audits on increasing algorithmic fairness and transparency in commercial systems is nascent. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender and skin type performance disparities in commercial facial analysis models. This paper 1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, 2) presents new performance metrics from targeted companies IBM, Microsoft and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, 3) provides performance results on PPB by non-target companies Amazon and Kairos and, 4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new API versions. All targets reduced accuracy disparities between males and females and darker and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup, that underwent a 17.7% - 30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72% to 8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively. <|reference_end|>",
"<|reference_start|> Discrimination in the Age of Algorithms: The law forbids discrimination. But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity. <|reference_end|>"
] | [
1,
2,
3,
7
] | {"<|cite_2|>": "ss-1458071", "<|cite_3|>": "ss-687307", "<|cite_4|>": "ss-687307", "<|cite_5|>": "ss-772601", "<|cite_6|>": "ss-1080112", "<|cite_7|>": "ss-1260976", "<|cite_8|>": "ss-947437", "<|cite_9|>": "arxiv-191029"} |
2008.10418 | <|paper_start|> Title: INSIDE: Steering Spatial Attention with Non-Imaging Information in CNNs
Abstract: INSIDE: Steering Spatial Attention with Non-Imaging Information in CNNs: We consider the problem of integrating non-imaging information into segmentation networks to improve performance. Conditioning layers such as FiLM provide the means to selectively amplify or suppress the contribution of different feature maps in a linear fashion. However, spatial dependency is difficult to learn within a convolutional paradigm. In this paper, we propose a mechanism to allow for spatial localisation conditioned on non-imaging information, using a feature-wise attention mechanism comprising a differentiable parametrised function (e.g. Gaussian), prior to applying the feature-wise modulation. We name our method INstance modulation with SpatIal DEpendency (INSIDE). The conditioning information might comprise any factors that relate to spatial or spatio-temporal information such as lesion location, size, and cardiac cycle phase. Our method can be trained end-to-end and does not require additional supervision. We evaluate the method on two datasets: a new CLEVR-Seg dataset where we segment objects based on location, and the ACDC dataset conditioned on cardiac phase and slice location within the volume. Code and the CLEVR-Seg dataset are available at https://github.com/jacenkow/inside.
Introduction
Acquisition of medical images often involves capturing non-imaging information
such as image and patient metadata, which are a source of valuable information
yet are frequently disregarded in automatic segmentation and classification. To
be useful, this information should correlate with the task, for example body mass
index (BMI) with ventricular volume <|cite_start|> (Reference: Automated cardiovascular magnetic resonance image analysis with fully convolutional networks: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance on par with human experts in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images.) <|cite_end|>, or symptom
laterality with stroke lesion laterality <|cite_start|> (Reference: Conjugate eye deviation in acute intracerebral hemorrhage: stroke acute management with urgent risk-factor assessment and improvement--ICH (SAMURAI-ICH) study: Background and Purpose— Conjugate eye deviation (CED) occurs frequently in patients with acute stroke. The purpose of this study was to elucidate the factors that correlate with CED as well as the relationship between CED and outcomes in patients with acute intracerebral hemorrhage. Methods— A total of 211 patients with acute supratentorial intracerebral hemorrhage were recruited in a multicenter, prospective study. CED was assessed with a National Institutes of Health Stroke Scale “best gaze” subscore of ≥1. Hematoma location and volume were assessed on CT. Results— Forty-five percent of the patients had CED. On multivariable analysis, right-sided lesion (OR, 2.36; 95% CI, 1.18–4.93), hematoma volume (OR, 1.07; 95% CI, 1.04–1.10 per 1 mL), and baseline Glasgow Coma Scale score (OR, 0.66; 95% CI, 0.53–0.80 per 1 point) were independently associated with CED. After adjusting for sex, age, intraventricular extension of the hematoma, baseline Glasgow Coma Scale score, and hematoma volume, the presence of CED both on admission and 72 hours later was an independent predictor of death or dependency at 3 months poststroke (OR, 5.77; 95% CI, 2.27–16.94). The optimal cutoff volume of hematoma related to CED was ≥13.5 mL for patients with putaminal hemorrhage (sensitivity, 76%; specificity, 72%) and ≥7.7 mL for patients with thalamic hemorrhage (sensitivity, 82%; specificity, 83%). Conclusions— The persistence of CED was a significant predictor of death or dependency after acute supratentorial intracerebral hemorrhage even after adjusting for initial severity and hematoma volume. CED can be evoked by a relatively smaller thalamic hematoma than a putaminal hematoma.) <|cite_end|>, and these
correlations can be exploited to improve the quality of the structure
segmentation. Nevertheless, combining both imaging and non-imaging information in
the medical domain remains challenging, with dedicated workshops devoted to
this problem.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/method.pdf}
\caption{Visualisation of the method. Given a feature map $F_c$ and conditioning
vector $\tilde{z}$, the method first applies spatial attention ($a$) to $F_c$,
followed by the scale ($\gamma$) and shift ($\beta$) factors.
The attention matrix ($a$) is the product of two Gaussian vectors
($a_h, a_w$). Therefore, for a single feature map, the auxiliary network
predicts six parameters ($\gamma$, $\beta$, $\mu_h$, $\sigma_h$, $\mu_w$,
$\sigma_w$). We denote Hadamard product with $\odot$ symbol.}
\label{fig:method}
\end{figure}
Conditioning layers have become the dominant method to tackle this challenge,
finding application in image synthesis <|cite_start|> (Reference: Large Scale GAN Training for High Fidelity Natural Image Synthesis: Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.) <|cite_end|>, style
transfer <|cite_start|> (Reference: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization: Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.) <|cite_end|> and visual question answering
(VQA) <|cite_start|> (Reference: FiLM: Visual Reasoning with a General Conditioning Layer: We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.) <|cite_end|>. In this setup, the network is conditioned on
non-imaging information via a learned set of scalar weights which affinely
transform feature maps to selectively amplify or suppress each feature, thus
controlling its contribution to the final prediction. However, this method has
limited capability to adjust channels spatially, and is less suited to
conditioning on information relating to spatial or spatio-temporal prior
knowledge. Consider a problem where we expect to produce a segmentation only on
one side of the image (left or right) indicated by the laterality of the
patient's symptoms. To accomplish this task, the network would require to learn
how to encode relative spatial relationships and split them into channels. We
show that spatial conditioning can be challenging and propose a method to
overcome this limitation.
We present a new conditioning layer which uses non-imaging information to steer
spatial attention before applying the affine transformation. We choose a
Gaussian for the attention mechanism due to its parameter-efficiency, allowing
us to learn a separate attention per channel. However, other differentiable
functions can also be used. We first test our method on a simulated dataset, our
extension of the CLEVR\footnote{Diagnostic Dataset for Compositional Language
and Elementary Visual Reasoning} dataset <|cite_start|> (Reference: CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning: When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.) <|cite_end|>, where we
segment objects based on their location within the image space. To prove the
method is applicable in a clinical setting, we use the ACDC\footnote{Automated
Cardiac Diagnosis Challenge (ACDC), MICCAI Challenge
2017} dataset <|cite_start|> (Reference: Deep learning techniques for automatic {MRI} cardiac multi-structures segmentation and diagnosis: is the problem solved?: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipments CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean value of 0.97 correlation score for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac CMRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.) <|cite_end|> with the task to segment anatomical
structures from cardiac cine-MR images. We perform 2D segmentation, and provide
slice position and cardiac cycle phase as the non-imaging information to our
method.
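To make the computation concrete, the following is a minimal, illustrative PyTorch-style sketch of the proposed layer (cf. Fig.~\ref{fig:method}). It is not the released implementation; in particular, the hidden size of the auxiliary network, the sigmoid used to keep the Gaussian centres inside the feature map, and the softplus used to keep the widths positive are our own assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class INSIDE(nn.Module):
    """Sketch: per-channel Gaussian spatial attention followed by
    FiLM-style modulation, all predicted from a conditioning vector."""
    def __init__(self, channels, cond_dim, hidden=64):
        super().__init__()
        # Six parameters per feature map: gamma, beta, mu_h, sigma_h, mu_w, sigma_w.
        self.mlp = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 6 * channels))

    def forward(self, f, z):
        # f: (B, C, H, W) feature maps, z: (B, cond_dim) non-imaging input.
        b, c, h, w = f.shape
        p = self.mlp(z).view(b, c, 6)
        gamma, beta = p[..., 0], p[..., 1]
        mu_h = torch.sigmoid(p[..., 2]) * h          # centre inside the map
        mu_w = torch.sigmoid(p[..., 3]) * w
        sd_h = F.softplus(p[..., 4]) + 1e-3          # positive widths
        sd_w = F.softplus(p[..., 5]) + 1e-3
        ys = torch.arange(h, device=f.device, dtype=f.dtype)
        xs = torch.arange(w, device=f.device, dtype=f.dtype)
        a_h = torch.exp(-0.5 * ((ys - mu_h[..., None]) / sd_h[..., None]) ** 2)
        a_w = torch.exp(-0.5 * ((xs - mu_w[..., None]) / sd_w[..., None]) ** 2)
        a = a_h[..., :, None] * a_w[..., None, :]    # (B, C, H, W) attention
        return gamma[..., None, None] * (a * f) + beta[..., None, None]
\end{verbatim}
As in the figure, the auxiliary network outputs six numbers per feature map, so the attention is learned separately for every channel.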
\textbf{Contributions}: \textbf{(1)} we propose a new
conditioning layer capable of handling spatial and spatio-temporal dependency
given a conditioning variable; \textbf{(2)} we extend the CLEVR dataset for
segmentation tasks and several conditioning scenarios, such as shape-,
colour-, or size-based conditioning in the segmentation space;
\textbf{(3)} we evaluate different conditioning layers for the task of
segmentation on the CLEVR-Seg and ACDC datasets.
Related Work
\label{sec:related}
An early work on adapting batch normalisation for conditioning was in style
transfer. The conditional instance normalisation
layer <|cite_start|> (Reference: A Learned Representation For Artistic Style: The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.) <|cite_end|> (Eq.~\ref{eq:instance}) applied a pair of scale
($\gamma_s$) and shift ($\beta_s$) vectors from the style-dependent parameter
matrices, where each pair corresponded to a single style $s$ such as Claude
Monet or Edvard Munch. This allowed several styles to be learned using a single
network and proved that affine transformations were sufficient for the task.
However, the method is restricted to the discrete set of styles seen during
training. In Adaptive Instance Normalisation (AdaIN) <|cite_start|> (Reference: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization: Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.) <|cite_end|>,
the authors proposed to instead use a network to predict the style-dependent
vectors (as in hypernetworks), allowing parameters to be predicted for arbitrary
new styles at inference time.
\begin{equation}
z = \gamma_s \frac{x - \mu_x}{\sigma_x} + \beta_s
\label{eq:instance}
\end{equation}
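For reference, Eq.~\ref{eq:instance} corresponds to the following minimal sketch, in which the style index selects one learned $(\gamma_s, \beta_s)$ pair per channel; replacing the table lookup with a network that predicts the two vectors gives the AdaIN-style variant discussed above. The PyTorch form below is our own illustration and is not taken from the cited works.
\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalInstanceNorm(nn.Module):
    """Eq. (1): z = gamma_s * (x - mu_x) / sigma_x + beta_s (sketch)."""
    def __init__(self, num_styles, channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_styles, channels))
        self.beta = nn.Parameter(torch.zeros(num_styles, channels))

    def forward(self, x, style_idx):
        # x: (B, C, H, W), style_idx: (B,) integer style labels.
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + self.eps
        g = self.gamma[style_idx][..., None, None]   # (B, C, 1, 1)
        b = self.beta[style_idx][..., None, None]
        return g * (x - mu) / sigma + b
\end{verbatim}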
AdaIN has been applied outside of the style transfer domain, for instance to
image synthesis using face landmarks where the method is used to inpaint the
landmark with face texture <|cite_start|> (Reference: Few-Shot Adversarial Learning of Realistic Neural Talking Head Models: Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.) <|cite_end|>, and to conditional object
segmentation given its coordinates <|cite_start|> (Reference: AdaptIS: Adaptive Instance Selection Network: We present Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point $(x, y)$, it generates a mask for the object located at $(x, y)$. The network adapts to the input point with a help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks, therefore it accurately segments objects of complex shape or severely occluded ones. AdaptIS can be easily combined with standard semantic segmentation pipeline to perform panoptic segmentation. To illustrate the idea, we perform experiments on a challenging toy problem with difficult occlusions. Then we extensively evaluate the method on panoptic segmentation benchmarks. We obtain state-of-the-art results on Cityscapes and Mapillary even without pretraining on COCO, and show competitive results on a challenging COCO dataset. The source code of the method and the trained models are available at https://github.com/saic-vul/adaptis.) <|cite_end|>. A similar method
to AdaIN was applied to visual question answering (VQA); the authors used the
feature-wise linear modulation layer (FiLM) <|cite_start|> (Reference: FiLM: Visual Reasoning with a General Conditioning Layer: We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.) <|cite_end|> to condition
the network on questions. FiLM is identical to AdaIN but omits the instance
\emph{normalisation} step ($\mu_x, \sigma_x$ in Eq.~\ref{eq:instance}), which
the authors found to be unnecessary. FiLM has found application in medical image
analysis for disentangled representation
learning <|cite_start|> (Reference: Disentangled Representation Learning in Cardiac Image Analysis: Typically, a medical image offers spatial information on the anatomy (and pathology) modulated by imaging specific characteristics. Many imaging modalities including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) can be interpreted in this way. We can venture further and consider that a medical image naturally factors into some spatial factors depicting anatomy and factors that denote the imaging characteristics. Here, we explicitly learn this decomposed (disentangled) representation of imaging data, focusing in particular on cardiac images. We propose Spatial Decomposition Network (SDNet), which factorises 2D medical images into spatial anatomical factors and non-spatial modality factors. We demonstrate that this high-level representation is ideally suited for several medical image analysis tasks, such as semi-supervised segmentation, multi-task segmentation and regression, and image-to-image synthesis. Specifically, we show that our model can match the performance of fully supervised segmentation models, using only a fraction of the labelled images. Critically, we show that our factorised representation also benefits from supervision obtained either when we use auxiliary tasks to train the model in a multi-task setting (e.g. regressing to known cardiac indices), or when aggregating multimodal data from different sources (e.g. pooling together MRI and CT data). To explore the properties of the learned factorisation, we perform latent-space arithmetic and show that we can synthesise CT from MR and vice versa, by swapping the modality factors. We also demonstrate that the factor holding image specific information can be used to predict the input modality with high accuracy. Code will be made available at https://github.com/agis85/anatomy_modality_decomposition.) <|cite_end|> and for
segmentation <|cite_start|> (Reference: {Conditioning Convolutional Segmentation Architectures with Non-Imaging Data: We compare two conditioning mechanisms based on concatenation and feature-wise modulation to integrate non-imaging information into convolutional neural networks for segmentation of anatomical structures. As a proof-of-concept we provide the distribution of class labels obtained from ground truth masks to ensure strong correlation between the conditioning data and the segmentation maps. We evaluate the methods on the ACDC dataset, and show that conditioning with non-imaging data improves performance of the segmentation networks. We observed conditioning the U-Net architectures was challenging, where no method gave significant improvement. However, the same architecture without skip connections outperforms the baseline with feature-wise modulation, and the relative performance increases as the training size decreases.) <|cite_end|>.
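FiLM itself keeps only the affine part of Eq.~\ref{eq:instance}: the normalisation is dropped and the per-channel parameters are predicted from the conditioning input by an auxiliary network. A minimal, illustrative sketch (our own, with an arbitrary linear predictor):
\begin{verbatim}
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation (sketch): no instance normalisation."""
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.mlp = nn.Linear(cond_dim, 2 * channels)

    def forward(self, f, z):
        # f: (B, C, H, W), z: (B, cond_dim).
        gamma, beta = self.mlp(z).chunk(2, dim=1)          # each (B, C)
        return gamma[..., None, None] * f + beta[..., None, None]
\end{verbatim}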
A drawback of both AdaIN and FiLM is that they manipulate whole feature maps in
an affine fashion, so the modulation cannot vary spatially. To
overcome this limitation, SPADE <|cite_start|> (Reference: Semantic Image Synthesis with Spatially-Adaptive Normalization: We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to ``wash away'' semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows user control over both semantic and style. Code is available at https://github.com/NVlabs/SPADE .) <|cite_end|> was proposed, where a
segmentation mask is used as a conditioning input in the task of image
synthesis, leading to both feature-wise and class-wise scale and shift
parameters at each layer. This method is not suitable if the non-imaging
information cannot be conveniently expressed in image space.
The closest method to ours is <|cite_start|> (Reference: Guide Me: Interacting with Deep Networks: Interaction and collaboration between humans and intelligent machines has become increasingly important as machine learning methods move into real-world applications that involve end users. While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from text descriptions, less focus has been placed on the use of language to guide or improve the performance of a learned visual processing algorithm. In this paper, we explore methods to flexibly guide a trained convolutional neural network through user input to improve its performance during inference. We do so by inserting a layer that acts as a spatio-semantic guide into the network. This guide is trained to modify the network's activations, either directly via an energy minimization scheme or indirectly through a recurrent model that translates human language queries to interaction weights. Learning the verbal interaction is fully automatic and does not require manual text annotations. We evaluate the method on two datasets, showing that guiding a pre-trained network can improve performance, and provide extensive insights into the interaction between the guide and the CNN.) <|cite_end|>. The authors proposed to
extend FiLM with spatial attention, creating a Guiding Block layer, in which the
spatial attention is defined as two vectors $\alpha \in \mathbb{R}^H$ and $\beta
\in \mathbb{R}^W$ which are replicated over the $H$ and $W$ axes and added to
the global scale factor ($\gamma_c^{(s)}$) as shown in Eq.~\ref{eg:guideme} (the
authors call the shifting factor as $\gamma_c^{(b)}$). This spatial conditioning
is expensive as there are an additional $H + W$ parameters to learn; perhaps for
this reason, a single attention mechanism is learned for each layer and applied
across all feature maps.
\begin{equation}
F'_{h, w, c} = (1 + \alpha_h + \beta_w + \gamma_c^{(s)}) F_{h, w, c} + \gamma_c^{(b)}
\label{eg:guideme}
\end{equation}
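Read literally, Eq.~\ref{eg:guideme} amounts to the computation below, with one attention value per row and per column shared across all channels of the layer. The sketch is our own illustration; it assumes $\alpha$, $\beta$ and the channel-wise factors are all predicted from the guiding input by a small network, and that the spatial size is fixed at construction time.
\begin{verbatim}
import torch.nn as nn

class GuidingBlock(nn.Module):
    """Sketch of Eq. (2): row/column biases plus channel-wise scale/shift."""
    def __init__(self, channels, height, width, cond_dim):
        super().__init__()
        self.c, self.h, self.w = channels, height, width
        self.mlp = nn.Linear(cond_dim, height + width + 2 * channels)

    def forward(self, f, z):
        # f: (B, C, H, W), z: (B, cond_dim).
        p = self.mlp(z)
        alpha = p[:, :self.h]                              # (B, H)
        beta = p[:, self.h:self.h + self.w]                # (B, W)
        gs = p[:, self.h + self.w:self.h + self.w + self.c]
        gb = p[:, self.h + self.w + self.c:]
        scale = (1 + alpha[:, None, :, None] + beta[:, None, None, :]
                 + gs[..., None, None])                    # (B, C, H, W)
        return scale * f + gb[..., None, None]
\end{verbatim}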
\noindent
In our work, we utilise a learned attention mechanism for each feature map. Our
mechanism is similar to <|cite_start|> (Reference: Hierarchical Attentive Recurrent Tracking: Class-agnostic object tracking is particularly difficult in cluttered environments as target specific discriminative models cannot be learned a priori. Inspired by how the human visual cortex employs spatial attention and separate "where" and "what" processing pathways to actively suppress irrelevant visual features, this work develops a hierarchical attentive recurrent model for single object tracking in videos. The first layer of attention discards the majority of background by selecting a region containing the object of interest, while the subsequent layers tune in on visual features particular to the tracked object. This framework is fully differentiable and can be trained in a purely data driven fashion by gradient methods. To improve training convergence, we augment the loss function with terms for a number of auxiliary tasks relevant for tracking. Evaluation of the proposed model is performed on two datasets: pedestrian tracking on the KTH activity recognition dataset and the more difficult KITTI object tracking dataset.) <|cite_end|>, where the product of
two Gaussian matrices parametrised by mean ($\mu$), standard deviation
($\sigma$) and stride ($\gamma$) between consecutive Gaussians (one Gaussian per
row, one matrix per axis) is constructed. However, the relation between standard
deviations and strides is estimated before the training and kept fixed. Our
method applies a single Gaussian vector per axis (no stride) and we train the
whole method end-to-end. Further, the parameters
in <|cite_start|> (Reference: Hierarchical Attentive Recurrent Tracking: Class-agnostic object tracking is particularly difficult in cluttered environments as target specific discriminative models cannot be learned a priori. Inspired by how the human visual cortex employs spatial attention and separate "where" and "what" processing pathways to actively suppress irrelevant visual features, this work develops a hierarchical attentive recurrent model for single object tracking in videos. The first layer of attention discards the majority of background by selecting a region containing the object of interest, while the subsequent layers tune in on visual features particular to the tracked object. This framework is fully differentiable and can be trained in a purely data driven fashion by gradient methods. To improve training convergence, we augment the loss function with terms for a number of auxiliary tasks relevant for tracking. Evaluation of the proposed model is performed on two datasets: pedestrian tracking on the KTH activity recognition dataset and the more difficult KITTI object tracking dataset.) <|cite_end|> are estimated using consecutive input images
whilst we use an auxiliary conditioning input, and we combine our attention with a FiLM layer. <|paper_end|> | [
"<|reference_start|> Automated cardiovascular magnetic resonance image analysis with fully convolutional networks: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance on par with human experts in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. <|reference_end|>",
"<|reference_start|> Few-Shot Adversarial Learning of Realistic Neural Talking Head Models: Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings. <|reference_end|>",
"<|reference_start|> AdaptIS: Adaptive Instance Selection Network: We present Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point $(x, y)$, it generates a mask for the object located at $(x, y)$. The network adapts to the input point with a help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks, therefore it accurately segments objects of complex shape or severely occluded ones. AdaptIS can be easily combined with standard semantic segmentation pipeline to perform panoptic segmentation. To illustrate the idea, we perform experiments on a challenging toy problem with difficult occlusions. Then we extensively evaluate the method on panoptic segmentation benchmarks. We obtain state-of-the-art results on Cityscapes and Mapillary even without pretraining on COCO, and show competitive results on a challenging COCO dataset. The source code of the method and the trained models are available at https://github.com/saic-vul/adaptis. <|reference_end|>",
"<|reference_start|> FiLM: Visual Reasoning with a General Conditioning Layer: We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot. <|reference_end|>"
] | [
0,
9,
10,
11
] | {"<|cite_1|>": "arxiv-138159", "<|cite_2|>": "ss-1426198", "<|cite_4|>": "arxiv-174408", "<|cite_5|>": "arxiv-119552", "<|cite_6|>": "arxiv-135428", "<|cite_7|>": "arxiv-113025", "<|cite_8|>": "ss-1036160", "<|cite_9|>": "arxiv-108534", "<|cite_10|>": "arxiv-119552", "<|cite_11|>": "arxiv-205006", "<|cite_12|>": "arxiv-224169", "<|cite_13|>": "arxiv-135428", "<|cite_14|>": "arxiv-196391", "<|cite_15|>": "ss-1426199", "<|cite_16|>": "arxiv-195738", "<|cite_17|>": "arxiv-153368", "<|cite_18|>": "arxiv-127885", "<|cite_19|>": "arxiv-127885"} |
1105.0165 | <|paper_start|> Title: Quantum counter automata
Abstract: Quantum counter automata: The question of whether quantum real-time one-counter automata (rtQ1CAs) can outperform their probabilistic counterparts has been open for more than a decade. We provide an affirmative answer to this question, by demonstrating a non-context-free language that can be recognized with perfect soundness by a rtQ1CA. This is the first demonstration of the superiority of a quantum model to the corresponding classical one in the real-time case with an error bound less than 1. We also introduce a generalization of the rtQ1CA, the quantum one-way one-counter automaton (1Q1CA), and show that they too are superior to the corresponding family of probabilistic machines. For this purpose, we provide general definitions of these models that reflect the modern approach to the definition of quantum finite automata, and point out some problems with previous results. We identify several remaining open problems.
Introduction
\label{section:Introduction}
Although a complete understanding of the relationship between the polynomial time complexity classes corresponding to classical and quantum computers seems to be a distant goal, a restricted version of this question for constant-memory machines has already been answered: Linear-time quantum finite automata (QFAs) that are allowed to pause for some steps on a symbol in a single left-to-right scan of the input string can solve some problems for which probabilistic machines, even with two-way access to their input, require exponential time <|cite_start|> (Reference: Undecidability on quantum finite automata: Our model in this paper is a 1.5.way quantum finite automaton which can move its head 0 or +1 position but not -1 position. It is shown that the most fundamental decision question, the emptiness problem, is not solvable for this model. Note that the emptiness problem is solvable far push-down automata and even for one-way nondeterministic stack automata.) <|cite_end|> <|cite_start|> (Reference: A time complexity gap for two-way probabilistic finite-state automata: It is shown that if a two-way probabilistic finite-state automaton (2pfa) M recognizes a nonregular language L with error probability bounded below $\frac{1}{2}$, then there is a positive constant b (depending on M) such that, for infinitely many inputs x, the expected running time of M on input x must exceed $2^{n^{b}}$ where n is the length of x. This complements a result of Freivalds showing that 2pfa’s can recognize certain nonregular languages in exponential expected time. It also establishes a time complexity gap for 2pfa’s, since any regular language can be recognized by some 2pfa in linear time. Other results give roughly exponential upper and lower bounds on the worst-case increase in the number of states when converting a polynomial-time 2pfa to an equivalent two-way nondeterministic finite-state automaton or to an equivalent one-way deterministic finite-state automaton.) <|cite_end|>. Interestingly, when these automata are further restricted to perform real-time access to the input, i.e. forbidden to pause, the probabilistic and quantum models have identical language recognition power <|cite_start|> (Reference: Extending Stochastic and Quantum Functions
: ) <|cite_end|> <|cite_start|> (Reference: One-way topological automata and the tantalizing effects of their topological features: We cast new light on the existing models of 1-way deterministic topological automata by introducing a new, convenient model, in which, as each input symbol is read, an interior system of an automaton, known as a configuration, continues to evolve in a topological space by applying continuous transition operators one by one. The acceptance and rejection of a given input are determined by observing the interior system after the input is completely processed. Such automata naturally generalize 1-way finite automata of various types, including deterministic, probabilistic, quantum, and pushdown automata. We examine the strengths and weaknesses of the power of this new automata model when recognizing formal languages. We investigate tantalizing effects of various topological features of our topological automata by analyzing their behaviors when different kinds of topological spaces and continuous maps, which are used respectively as configuration spaces and transition operators, are provided to the automata.) <|cite_end|> <|cite_start|> (Reference: Unbounded-error quantum computation with small space bounds: We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $ s $ satisfying $ s(n)=o(\log \log n) $. For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n)=o(\log n) $. We also give a characterization for the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity.) <|cite_end|>. To our knowledge, no quantum automaton model has yet been shown to outperform its probabilistic counterpart in terms of language recognition with one-sided error bound less than 1 in the real-time mode, which corresponds to the smallest possible nontrivial time bound. In this paper, we give the first demonstration of such a superiority in the case of the real-time one-counter automaton model.
One-counter automata can simply be thought of as finite automata enhanced by the addition of a single integer counter of unlimited capacity. Instructions in the programming language of these machines can increment or decrement this counter, and test its value for being zero, in addition to the standard state transition actions of finite automata.
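To make the machine model concrete, here is a small illustrative sketch, written for this discussion rather than taken from the paper, of the classical deterministic real-time variant: a one-counter automaton recognizing $\{a^nb^n \mid n \ge 1\}$. The transition function sees the current state, the next input symbol, and whether the counter is zero, and it consumes exactly one symbol per step. The state names, helper functions, and example language are chosen purely for illustration.
\begin{verbatim}
# Illustrative sketch of a deterministic real-time one-counter automaton
# for {a^n b^n : n >= 1}; names and the example language are hypothetical.
def delta(state, symbol, counter_is_zero):
    # Returns (next_state, counter_update) with counter_update in {-1, 0, +1}.
    if state == 'A' and symbol == 'a':
        return 'A', +1                      # count the a's
    if state in ('A', 'B') and symbol == 'b' and not counter_is_zero:
        return 'B', -1                      # match each b against a counted a
    return 'R', 0                           # reject sink

def accepts(word):
    state, counter = 'A', 0
    for symbol in word:                     # real-time: one transition per symbol
        state, update = delta(state, symbol, counter == 0)
        counter += update
    return state == 'B' and counter == 0    # accept iff every a was matched

assert accepts("aaabbb") and not accepts("aabbb") and not accepts("ba")
\end{verbatim}
The quantum models discussed below replace such a deterministic transition function with superpositions of moves; this sketch does not attempt to capture that.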
The study of quantum real-time one-counter automata
was initiated by Kravtsev <|cite_start|> (Reference: Quantum Finite One-Counter Automata: ) <|cite_end|>, who based his definition on a popular QFA model of the time, introduced by Kondacs and Watrous <|cite_start|> (Reference: On the power of quantum finite state automata: In this paper, we introduce 1-way and 2-way quantum finite state automata (1qfa's and 2qfa's), which are the quantum analogues of deterministic, nondeterministic and probabilistic 1-way and 2-way finite state automata. We prove the following facts regarding 2qfa's. 1. For any /spl epsiv/>0, there is a 2qfa M which recognizes the non-regular language L={a/sup m/b/sup m/|m/spl ges/1} with (one-sided) error bounded by E, and which halts in linear time. Specifically, M accepts any string in L with probability 1 and rejects any string not in L with probability at least 1-/spl epsiv/. 2. For every regular language L, there is a reversible (and hence quantum) 2-way finite state automaton which recognizes L and which runs in linear time. In fact, it is possible to define 2qfar's which recognize the non-context-free language {a/sup m/b/sup m/c/sup m/|m/spl ges/1}, based on the same technique used for 1. Consequently, the class of languages recognized by linear time, bounded error 2qfa's properly includes the regular languages. Since it is known that 2-way deterministic, nondeterministic and polynomial expected time, bounded error probabilistic finite automata can recognize only regular languages, it follows that 2qfa's are strictly more powerful than these "classical" models. In the case of 1-way automata, the situation is reversed. We prove that the class of languages recognizable by bounded error 1qfa's is properly contained in the class of regular languages.) <|cite_end|>. Kravtsev provided some example machines recognizing certain languages, all of which were later shown by Yamasaki \textit{et al.} <|cite_start|> (Reference: One-way probabilistic reversible and quantum one-counter automata: ) <|cite_end|> to be also recognizable by classical probabilistic reversible real-time automata.
It is now accepted <|cite_start|> (Reference: Quantum automata with open time evolution: In this paper, a model for finite automaton with an open quantum evolution is introduced, and its basic properties are studied. It is shown that the (fuzzy) languages accepted by open evolution quantum automata obey various closure properties. More importantly, it is shown that major other models of finite automata, including probabilistic, measure once quantum, measure many quantum, and Latvian quantum automata can be simulated by the open quantum evolution automata without increasing the number of the states.) <|cite_end|> <|cite_start|> (Reference: Unbounded-error quantum computation with small space bounds: We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $ s $ satisfying $ s(n)=o(\log \log n) $. For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n)=o(\log n) $. We also give a characterization for the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity.) <|cite_end|> that the Kondacs-Watrous QFA model has been defined in an unnecessarily restricted way, and does not utilize the full flexibility provided by quantum physics. As a result, these QFAs are strictly less powerful than even classical deterministic finite automata <|cite_start|> (Reference: On the power of quantum finite state automata: In this paper, we introduce 1-way and 2-way quantum finite state automata (1qfa's and 2qfa's), which are the quantum analogues of deterministic, nondeterministic and probabilistic 1-way and 2-way finite state automata. We prove the following facts regarding 2qfa's. 1. For any /spl epsiv/>0, there is a 2qfa M which recognizes the non-regular language L={a/sup m/b/sup m/|m/spl ges/1} with (one-sided) error bounded by E, and which halts in linear time. Specifically, M accepts any string in L with probability 1 and rejects any string not in L with probability at least 1-/spl epsiv/. 2. For every regular language L, there is a reversible (and hence quantum) 2-way finite state automaton which recognizes L and which runs in linear time. In fact, it is possible to define 2qfar's which recognize the non-context-free language {a/sup m/b/sup m/c/sup m/|m/spl ges/1}, based on the same technique used for 1. Consequently, the class of languages recognized by linear time, bounded error 2qfa's properly includes the regular languages. Since it is known that 2-way deterministic, nondeterministic and polynomial expected time, bounded error probabilistic finite automata can recognize only regular languages, it follows that 2qfa's are strictly more powerful than these "classical" models. In the case of 1-way automata, the situation is reversed. 
We prove that the class of languages recognizable by bounded error 1qfa's is properly contained in the class of regular languages.) <|cite_end|>, and this weakness also affected the Kravtsev model of quantum counter automata, with Yamasaki \textit{et al.} even demonstrating <|cite_start|> (Reference: One-way probabilistic reversible and quantum one-counter automata: ) <|cite_end|> a regular language which they could not recognize with bounded error. On the other hand, Bonner \textit{et al.} <|cite_start|> (Reference: Quantum versus Probabilistic One-Way Finite Automata with Counter: ) <|cite_end|> claimed to demonstrate a Kravtsev machine recognizing a particular language that could not be recognized by any probabilistic real-time one-counter automaton.
We should also note that Yamasaki \textit{et al.} <|cite_start|> (Reference: Quantum versus deterministic counter automata: ) <|cite_end|> studied quantum one-counter automata with two-way access to their input, and showed that the class of languages they recognize with bounded error contains some languages that are unrecognizable by deterministic two-way one-counter automata that are restricted to perform a fixed number of counter reversals. The fairer question of whether these two-way quantum machines can outperform their \textit{probabilistic} counterparts, which are also allowed to make bounded error, remains open.
The modern approach to the definition of quantum computer models <|cite_start|> (Reference: On the complexity of simulating space-bounded quantum computations: ) <|cite_end|> <|cite_start|> (Reference: Unbounded-error quantum computation with small space bounds: We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $ s $ satisfying $ s(n)=o(\log \log n) $. For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n)=o(\log n) $. We also give a characterization for the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity.) <|cite_end|> has as an easy consequence that any quantum machine can simulate its probabilistic counterpart efficiently, and the real question is whether the quantum version can outperform the classical one or not. In this paper,
we first provide a general definition of the quantum real-time one-counter
automaton (rtQ1CA) that reflects this modern approach. In an earlier version of this manuscript, we pointed out that the above-mentioned result of Bonner \textit{et al.} about the relationship of the quantum and classical real-time one-counter automaton models is flawed, and identified the related question as still being open. Here, we provide our own proof about a different language where the quantum model is indeed superior to its probabilistic counterpart. We then define a new model, the quantum one-way one-counter automaton, and prove a stronger result about the comparative powers of quantum and classical machines of this type. It turns out that the ability to pause the read head on the tape for some steps can be used to perform error reduction in quantum one-way machines. We also make some observations about the relationship of two-way quantum counter automata with some seemingly more restricted models. <|paper_end|> | [
"<|reference_start|> Undecidability on quantum finite automata: Our model in this paper is a 1.5.way quantum finite automaton which can move its head 0 or +1 position but not -1 position. It is shown that the most fundamental decision question, the emptiness problem, is not solvable for this model. Note that the emptiness problem is solvable far push-down automata and even for one-way nondeterministic stack automata. <|reference_end|>",
"<|reference_start|> A time complexity gap for two-way probabilistic finite-state automata: It is shown that if a two-way probabilistic finite-state automaton (2pfa) M recognizes a nonregular language L with error probability bounded below $\\frac{1}{2}$, then there is a positive constant b (depending on M) such that, for infinitely many inputs x, the expected running time of M on input x must exceed $2^{n^{b}}$ where n is the length of x. This complements a result of Freivalds showing that 2pfa’s can recognize certain nonregular languages in exponential expected time. It also establishes a time complexity gap for 2pfa’s, since any regular language can be recognized by some 2pfa in linear time. Other results give roughly exponential upper and lower bounds on the worst-case increase in the number of states when converting a polynomial-time 2pfa to an equivalent two-way nondeterministic finite-state automaton or to an equivalent one-way deterministic finite-state automaton. <|reference_end|>",
"<|reference_start|> One-way probabilistic reversible and quantum one-counter automata: <|reference_end|>",
"<|reference_start|> Unbounded-error quantum computation with small space bounds: We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $ s $ satisfying $ s(n)=o(\\log \\log n) $. For \"one-way\" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n)=o(\\log n) $. We also give a characterization for the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity. <|reference_end|>"
] | [
0,
1,
11,
15
] | {"<|multi_cite_1_1|>": "ss-1931515", "<|multi_cite_1_2|>": "ss-2500945", "<|multi_cite_2_1|>": "ss-1908647", "<|multi_cite_2_2|>": "ss-1217458", "<|multi_cite_2_3|>": "arxiv-14996", "<|cite_3|>": "ss-2004159", "<|cite_4|>": "ss-723391", "<|cite_5|>": "ss-2004161", "<|multi_cite_6_1|>": "ss-1224916", "<|multi_cite_6_2|>": "arxiv-14996", "<|cite_7|>": "ss-723391", "<|cite_8|>": "ss-2004161", "<|cite_9|>": "ss-2004158", "<|cite_10|>": "ss-1688956", "<|multi_cite_11_1|>": "ss-1385852", "<|multi_cite_11_2|>": "arxiv-14996"} |
1601.07036 | <|paper_start|> Title: Coded Packet Transport for Optical Packet/Burst Switched Networks
Abstract: Coded Packet Transport for Optical Packet/Burst Switched Networks: This paper presents the Coded Packet Transport (CPT) scheme, a novel transport mechanism for Optical Packet/Burst Switched (OPS/OBS) networks. The CPT scheme exploits the combined benefits of source coding by erasure codes and path diversity to provide efficient means for recovering from packet loss due to contentions and path failures, and to provide non-cryptographic secrecy. In the CPT scheme, erasure coding is employed at the OPS/OBS ingress node to form coded packets, which are transmitted on disjoint paths from the ingress node to an egress node in the network. The CPT scheme allows for a unified view of Quality of Service (QoS) in OPS/OBS networks by linking the interactions between survivability, performance and secrecy. We provide analytical models that illustrate how QoS aspects of CPT are affected by the number of disjoint paths, packet overhead and processing delay.
Introduction
Optical Packet/Burst Switching (OPS/OBS) is a promising architecture for the future core network, enabling all-optical packet transport combined with statistical multiplexing for increased link utilization <|cite_start|> (Reference: Approaches to optical internet packet switching: Wavelength-division multiplexing is currently being deployed in telecommunications networks in order to satisfy the increased demand for capacity brought about by the explosion in Internet use. The most widely accepted network evolution prediction is via an extension of these initial predominantly point-to-point deployments, with limited system functionalities, into highly interconnected networks supporting circuit-switched paths. While current applications of WDM focus on relatively static usage of individual wavelength channels, optical switching technologies enable fast dynamic allocation of WDM channels. The challenge involves combining the advantages of these relatively coarse-grained WDM techniques with emerging optical switching capabilities to yield a high-throughput optical platform directly underpinning next-generation networks. One alternative longer-term strategy for network evolution employs optical packet switching, providing greater flexibility, functionality, and granularity. This article reviews progress on the definition of optical packet switching and routing networks capable of providing end-to-end optical paths and/or connectionless transport. To date the approaches proposed predominantly use fixed-duration optical packets with lower-bit-rate headers to facilitate processing at the network-node interfaces. Thus, the major advances toward the goal of developing an extensive optical packet-switched layer employing fixed-length packets are summarized, but initial concepts on the support of variable-length IP-like optical packets are also introduced. Particular strategies implementing the crucial optical buffering function at the switching nodes are described, motivated by the network functionalities required within the optical packet layer.) <|cite_end|>. By avoiding electronic processing of packets, OPS/OBS achieves significant energy savings compared to existing opaque packet switched architectures <|cite_start|> (Reference: Optical packet-switched wdm networks -- a cost and energy perspective: A collection of slides from the author's conference presentation on "Optical packet-switched WDM networks: a cost and energy perspective" is given.) <|cite_end|>. The increasing number of mission-critical services such as e-banking, e-voting and emergency services puts a high demand on the Quality of Service (QoS) of the future Internet, including OPS/OBS networks. Specifically, the OPS/OBS network has to provide a low packet loss rate (performance) <|cite_start|> (Reference: A comparison study between slotted and unslotted all-optical packet-switched network with priority-based routing: We present a comparison between a slotted and unslotted all-optical packet-switched network with priority-based routing. Packet loss rate below 0.01 with transmitter load less than 0.3 in the unslotted network is achieved by using buffering, limited wavelength conversion and deflection. OCIS codes: (060.4250) Networks.)
<|cite_end|> <|cite_start|> (Reference: Traffic modelling of asynchronous bufferless optical packet switched networks: ) <|cite_end|> <|cite_start|> (Reference: Traffic models for slotted optical packet switched networks: ) <|cite_end|> <|cite_start|> (Reference: A unified study of contention-resolution schemes in optical packet-switched networks: This paper presents a comprehensive study of contention-resolution schemes in a multiwavelength optical packet-switched network. This investigation aims to provide a unified study of a network of optical routers, which include contention resolution in wavelength, time, and space dimensions. Specifically, we show: 1) how to accommodate all three dimensions of contention resolution in an integrated optical router; 2) how the performance of the three dimensions compare with one another; and 3) how various combinational schemes can be designed and how they perform. With the representative architectures and network topologies studied in this paper, the simulation experiment results capture the characteristics of different contention-resolution schemes, and they quantify the upper-bound average offered transmitter load for these schemes. The combinational contention resolution schemes are shown to effectively resolve packet contention and achieve good network performance under light to intermediate load.) <|cite_end|>, protection against node and link failures (survivability) <|cite_start|> (Reference: A 1+1 protection architecture for optical burst switched networks: High-capacity optical backbone networks protect their premium customers' information flows by routing two copies of the customer's data over disjoint paths. This scheme, known as 1+1 protection, provides extremely rapid recovery from network failures. We propose an architecture by which 1+1 protection can be extended to optical burst switched (OBS) networks. This architecture is designed by modifying the diversity routing architecture that was originally proposed for nonoptical packet networks and recently applied to networks employing the generalized multiprotocol label switched (GMPLS) architecture. We extend the architecture developed for just-in-time OBS signaling to support 1+1 protection. We also examine design issues that are raised by a difference in the propagation delays of the two disjoint paths across the OBS network. We show that a sufficiently large difference in the propagation delays can cause performance degradations that may result in an unsatisfactory quality-of-service on the protected connection. We examine the impact of this delay mismatch on restoration performance, probability of burst loss, and jitter. Through analysis and simulations, it is discussed how these negative effects can be eliminated.) <|cite_end|> <|cite_start|> (Reference: Combined study on survivability and performance in optical packet switched networks: Survivability and performance constitute crucial quality of service (QoS) issues in future optical packet switched (OPS) networks. I take an integrated view on survivability and performance in OPS networks by presenting the extended shared packet redundancy scheme (ESPRS). The ESPRS combines shared packet redundancy with 1+1 path protection functionality. 
I focus on the performance of the ESPRS in different failure situations and show how the packet loss rate is influenced by the number of node- and link-disjoint paths between a node pair, the loss probability on the paths, number of failures, the relative amount of redundancy, and the size of the packet set. An analytical model of the ESPRS is provided.) <|cite_end|>, as well as being able to withstand targeted eavesdropping attacks from individuals and organizations (secrecy) <|cite_start|> (Reference: Network coding, Fundamentals and Applications: Thank you very much for downloading network coding fundamentals and applications. As you may know, people have search hundreds times for their chosen readings like this network coding fundamentals and applications, but end up in malicious downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they cope with some malicious bugs inside their laptop. network coding fundamentals and applications is available in our book collection an online access to it is set as public so you can download it instantly. Our digital library hosts in multiple countries, allowing you to get the most less latency time to download any of our books like this one. Merely said, the network coding fundamentals and applications is universally compatible with any devices to read.) <|cite_end|>.
Existing approaches to satisfy these strict QoS demands in OPS/OBS rely on the provision of several independent QoS schemes, e.g., wavelength conversion to reduce packet loss from contentions and 1+1 path protection to provide survivability. However, since these schemes are deployed in the same physical and logical infrastructure, they will interact and provide mutual benefits. Examples of this include how the extra redundancy introduced for providing 1+1 path protection may be used to combat packet loss in OPS networks in failure-free operations, as studied in <|cite_start|> (Reference: Combined study on survivability and performance in optical packet switched networks: Survivability and performance constitute crucial quality of service (QoS) issues in future optical packet switched (OPS) networks. I take an integrated view on survivability and performance in OPS networks by presenting the extended shared packet redundancy scheme (ESPRS). The ESPRS combines shared packet redundancy with 1+1 path protection functionality. I focus on the performance of the ESPRS in different failure situations and show how the packet loss rate is influenced by the number of node- and link-disjoint paths between a node pair, the loss probability on the paths, number of failures, the relative amount of redundancy, and the size of the packet set. An analytical model of the ESPRS is provided.) <|cite_end|>. In particular, security threats in all-optical networks have recently received research attention <|cite_start|> (Reference: Secure optical burst switching: Framework and research directions: Optical burst switching has been positioned as a viable means of implementing optical communication efficiently. This article identifies potential threats to security in OBS networks. To alleviate the security threats in OBS networks, a secure Optical Burst Switching (S-OBS) framework is proposed. The S-OBS framework provides two levels of security measures: authentication of burst headers and confidentiality of data bursts. Candidate solutions in each category are examined, and research directions are presented.) <|cite_end|> <|cite_start|> (Reference: Attack detection methods for all-optical networks: This paper focuses on theoretical methods for detecting intentional attacks upon the infrastructure of an all-optical network. Applications of existing methods used in traditional networks, as well as discussion of a new method for detecting attacks are presented. Advantages and limitations of both classes of methods are considered.) <|cite_end|>. One crucial security threat is eavesdropping of data in the network, which has traditionally been countered using encryption. However, the high capacities of OPS/OBS networks greater than 100 Gb/s make data encryption in OPS/OBS not feasible as the current computational resources do not match the required encryption processing demands. Hence, there is a need for a low complexity scheme that provides a certain level of secure data transport without encrypting the data. Our goal is to show how erasure coding and path diversity can be used to mutually provide loss recovery from contentions, survivability and a secrecy of data.
The major contribution of this paper is the novel Coded Packet Transport (CPT) scheme for OPS/OBS networks. This scheme is able to recover lost data due to contentions and node/link failures, while at the same time providing secrecy. We use the term secrecy as defined in <|cite_start|> (Reference: Network coding, Fundamentals and Applications: Thank you very much for downloading network coding fundamentals and applications. As you may know, people have search hundreds times for their chosen readings like this network coding fundamentals and applications, but end up in malicious downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they cope with some malicious bugs inside their laptop. network coding fundamentals and applications is available in our book collection an online access to it is set as public so you can download it instantly. Our digital library hosts in multiple countries, allowing you to get the most less latency time to download any of our books like this one. Merely said, the network coding fundamentals and applications is universally compatible with any devices to read.) <|cite_end|>, where the goal is protection from a passive adversary that is not able to reconstruct the whole packet/burst set by eavesdropping on a single path. The CPT scheme is based on Forward Error Correction (FEC) codes used as erasure codes and provides non-cryptographic secrecy. The CPT scheme is applicable to both OPS and OBS networks, and for the remainder of the paper we use the term packet to also refer to a burst in OBS networks, without the loss of generality. At an OPS/OBS ingress node, a set of data packets is encoded into a set of coded packets by utilizing non-systematic erasure codes <|cite_start|> (Reference: Shared packet loss recovery for internet telephony: We consider an Internet telephony system in which the service provider operates a telephone gateway in each servicing city to serve the general public. We propose a shared packet loss recovery scheme for this system. Using this scheme, each gateway uses erasure code to add redundant packets such that its outgoing voice streams share these redundant packets for packet loss recovery. This scheme has two advantages: (1) it has a small probability of packet loss because multiple voice streams can share the redundant packets for effective packet loss recovery, and (2) it involves a small recovery delay because it uses the packets from multiple voice streams for packet loss recovery.) <|cite_end|> <|cite_start|> (Reference: Network layer packet redundancy in optical packet switched networks: A crucial issue in optical packet switched (OPS) networks is packet losses at the network layer caused by contentions. This paper presents the network layer packet redundancy scheme (NLPRS), which is a novel approach to reduce the end-to-end data packet loss rate in OPS networks. By introducing redundancy packets in the OPS network, the NLPRS enables a possible reconstruction of data packets that are lost due to contentions. An analytical model of the NLPRS based on reduced load Erlang fix-point analysis is presented. Simulations of an OPS ring network show that the NLPRS is in particular efficient in small networks operating at low system loads. Results also show how the arrival process, packet length distribution, network size and redundancy packet scheduling mechanism influence the NLPRS performance.) 
<|cite_end|> <|cite_start|> (Reference: Forward redundancy: a loss recovery mechanism for optical burst-switched networks: Optical burst switching is one of the most promising new optical transport paradigms for efficiently transporting data over an all-optical network. In this paper, we discuss forward redundancy as a candidate for loss recovery in an optical burst-switched network. We develop a simulation model to investigate the proposed forward redundancy loss recovery mechanism and to compare the performance of our proposed mechanism with the existing retransmission-based backward loss recovery mechanism. Our results show that the proposed forward redundancy mechanism significantly reduces packet loss as compared to a retransmission-based backward loss recovery mechanism, without the need for large ingress electronic buffers or high retransmission delays) <|cite_end|>. These coded packets are transmitted to an egress node in the OPS/OBS network on multiple disjoint paths. At an OPS/OBS egress node, reconstruction of packets lost due to contentions and node/link failures is enabled by the added redundancy. Sending different subsets of packets over disjoint paths between the ingress and the egress node also enables an end-to-end secrecy property against a passive adversary. To the best of our knowledge, this work constitutes a first step for providing a unified view on QoS in OPS/OBS networks, focusing on the interactions between survivability, performance and secrecy.
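As a rough illustration of the idea, and not of the paper's actual construction, the following sketch uses a toy systematic single-parity erasure code: $k$ data packets plus one XOR parity packet are sent on $k+1$ disjoint paths, any single lost packet can be reconstructed at the egress node, and an eavesdropper on one path sees only one packet of the set. The CPT scheme itself relies on non-systematic FEC codes, which this simplified example does not capture; all function names are hypothetical.
\begin{verbatim}
# Toy systematic (k+1, k) single-parity erasure code over disjoint paths.
from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length byte strings together.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_packets):
    # k data packets + 1 parity packet; each is sent on its own disjoint path.
    return data_packets + [xor_blocks(data_packets)]

def decode(received):
    # 'received': path index -> packet, or None for a lost packet (at most one loss).
    k = len(received) - 1
    lost = [i for i, p in received.items() if p is None]
    if lost:
        received = dict(received)
        received[lost[0]] = xor_blocks([p for p in received.values() if p is not None])
    return [received[i] for i in range(k)]

data = [b"pkt0", b"pkt1", b"pkt2"]
coded = encode(data)                                    # 4 coded packets, 4 disjoint paths
assert decode({0: coded[0], 1: None, 2: coded[2], 3: coded[3]}) == data   # path 1 failed
\end{verbatim}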
The rest of this paper is organized as follows: Section \ref{Related} discusses related works. In Section \ref{CPT} we present the CPT scheme. Section \ref{Constraints} presents the analytical model. The parameter settings based on the analytical model are presented in Section \ref{Setting}. Finally, Section \ref{Conclusion} concludes the paper. <|paper_end|> | [
"<|reference_start|> Traffic modelling of asynchronous bufferless optical packet switched networks: <|reference_end|>",
"<|reference_start|> Traffic models for slotted optical packet switched networks: <|reference_end|>",
"<|reference_start|> Network coding, Fundamentals and Applications: Thank you very much for downloading network coding fundamentals and applications. As you may know, people have search hundreds times for their chosen readings like this network coding fundamentals and applications, but end up in malicious downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they cope with some malicious bugs inside their laptop. network coding fundamentals and applications is available in our book collection an online access to it is set as public so you can download it instantly. Our digital library hosts in multiple countries, allowing you to get the most less latency time to download any of our books like this one. Merely said, the network coding fundamentals and applications is universally compatible with any devices to read. <|reference_end|>",
"<|reference_start|> Combined study on survivability and performance in optical packet switched networks: Survivability and performance constitute crucial quality of service (QoS) issues in future optical packet switched (OPS) networks. I take an integrated view on survivability and performance in OPS networks by presenting the extended shared packet redundancy scheme (ESPRS). The ESPRS combines shared packet redundancy with 1+1 path protection functionality. I focus on the performance of the ESPRS in different failure situations and show how the packet loss rate is influenced by the number of node- and link-disjoint paths between a node pair, the loss probability on the paths, number of failures, the relative amount of redundancy, and the size of the packet set. An analytical model of the ESPRS is provided. <|reference_end|>"
] | [
3,
4,
8,
9
] | {"<|cite_1|>": "ss-798663", "<|cite_2|>": "ss-798664", "<|multi_cite_3_1|>": "ss-798665", "<|multi_cite_3_2|>": "ss-798666", "<|multi_cite_3_3|>": "ss-798667", "<|multi_cite_3_4|>": "ss-798668", "<|multi_cite_4_1|>": "ss-798669", "<|multi_cite_4_2|>": "ss-798670", "<|cite_5|>": "ss-798671", "<|cite_6|>": "ss-798670", "<|multi_cite_7_1|>": "ss-798672", "<|multi_cite_7_2|>": "ss-798673", "<|cite_8|>": "ss-798671", "<|multi_cite_9_1|>": "ss-798674", "<|multi_cite_9_2|>": "ss-798675", "<|multi_cite_9_3|>": "ss-798676"} |
2209.01308 | <|paper_start|> Title: Multimodal and Crossmodal AI for Smart Data Analysis
Abstract: Multimodal and Crossmodal AI for Smart Data Analysis: Multimodal and crossmodal AI techniques have recently attracted the attention of research communities. The former aims to collect disjoint and heterogeneous data whose complementary information compensates for what each modality lacks, enabling more robust prediction. The latter aims to utilize one modality to predict another by discovering the common attention shared between them. Although both approaches share the same goal of generating smart data from collected raw data, the former demands more modalities while the latter aims to decrease the variety of modalities. This paper first discusses the role of multimodal and crossmodal AI in smart data analysis in general. Then, we introduce the multimodal and crossmodal AI framework (MMCRAI) to balance the abovementioned approaches and make it easy to scale to different domains. This framework is integrated into xDataPF (the cross-data platform https://www.xdata.nict.jp/). We also introduce and discuss various applications built on this framework and xDataPF.
Introduction
\label{INTRO}
We struggle daily with processing large amounts of (un)intentionally collected raw data (e.g., statistics, numbers, texts, images, audio) to gain insights into our world. Smart data, in contrast, is the type of data we would rather have than raw data containing redundant or even useless information. Smart data results from the analysis and interpretation of raw data, making it possible to draw value from it effectively. Hence, we need intelligent layers embedded in data collectors and storage to produce such smart data for further downstream applications. The process that turns a set of raw data into smart data can be considered smart data analytics. Many algorithms, products, and techniques use the prefix "smart" to express that they provide smart data, such as smart IoT, smart dashcams, and smart clouds.
Human beings perceive the surrounding world by sensing it from different perspectives (e.g., sight, hearing, smell, touch, taste). Hence, devices made by human beings tend to record/capture data of the surrounding world in the same way human beings do. Each type of data recorded/generated by a particular device/method represents how something happens or is experienced, and that representation can be regarded as a modality. A research problem or dataset that includes multiple such modalities is considered multimodal, and AI techniques that deal with multimodal data are called multimodal AI.
The advantage of multimodal data is that a joint representation space can compensate for the lack of information in each disjoint modality and strengthen robust prediction from highly correlated modalities. Hence, we can build models that process and correlate data from multiple modalities.
Many surveys have been done to understand the use of multimodal AI for smart data analysis. In <|cite_start|> (Reference: Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.) <|cite_end|>, the authors list the challenges of multimodal machine learning (e.g., representation, translation, alignment, fusion, co-learning), data types (e.g., texts, videos, images, audio) and applications (e.g., speech recognition and synthesis, event detection, emotion and affect, media description, multimedia retrieval). In <|cite_start|> (Reference: A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets: ) <|cite_end|>, the authors focus on a particular domain, computer vision, and introduce advances, trends, applications, and datasets of multimodal AI. In this survey, the authors discuss the general architecture of multimodal deep learning, where a modality-specific feature extractor first processes each modality to create a modality representation. These representations are then fused into one joint representation space, which is projected onto a single similarity measure. Several deep learning models are considered in this survey, including ANN, CNN, RCNN, LSTM, etc.
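The fusion pattern described above can be sketched in a few lines of PyTorch; the two modalities, layer sizes, and class count below are illustrative assumptions rather than the architecture of any cited survey.
\begin{verbatim}
# Minimal sketch: per-modality encoders, concatenation into a joint space, a head on top.
import torch
import torch.nn as nn

class LateFusionModel(nn.Module):
    def __init__(self, image_dim=512, text_dim=300, joint_dim=128, n_classes=10):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                                           nn.Linear(256, joint_dim))
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, 256), nn.ReLU(),
                                          nn.Linear(256, joint_dim))
        self.head = nn.Linear(2 * joint_dim, n_classes)    # fused joint space -> classes

    def forward(self, image_feat, text_feat):
        z_img = self.image_encoder(image_feat)             # modality representation
        z_txt = self.text_encoder(text_feat)               # modality representation
        joint = torch.cat([z_img, z_txt], dim=-1)          # joint representation space
        return self.head(joint)

model = LateFusionModel()
logits = model(torch.randn(4, 512), torch.randn(4, 300))   # batch of 4 paired samples
print(logits.shape)                                        # torch.Size([4, 10])
\end{verbatim}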
In <|cite_start|> (Reference: Bidirectional joint representation learning with symmetrical deep neural networks for multimodal and crossmodal applications: Common approaches to problems involving multiple modalities (classification, retrieval, hyperlinking, etc.) are early fusion of the initial modalities and crossmodal translation from one modality to the other. Recently, deep neural networks, especially deep autoencoders, have proven promising both for crossmodal translation and for early fusion via multimodal embedding. In this work, we propose a flexible crossmodal deep neural network architecture for multimodal and crossmodal representation. By tying the weights of two deep neural networks, symmetry is enforced in central hidden layers thus yielding a multimodal representation space common to the two original representation spaces. The proposed architecture is evaluated in multimodal query expansion and multimodal retrieval tasks within the context of video hyperlinking. Our method demonstrates improved crossmodal translation capabilities and produces a multimodal embedding that significantly outperforms multimodal embeddings obtained by deep autoencoders, resulting in an absolute increase of 14.14 in precision at 10 on a video hyperlinking task.) <|cite_end|>, the authors address crossmodal learning, which deals with the need to map from one modality to another and back, as well as to represent both in a joint representation space. This direction is similar to the human learning process, which composes a global perspective from multiple distinct senses and resources. For example, text-image matching, text-video crossmodal retrieval, emotion recognition, and image captioning are the most popular crossmodal applications, where people can use one modality to query another <|cite_start|> (Reference: A Comprehensive Survey on Cross-modal Retrieval: In recent years, cross-modal retrieval has drawn much attention due to the rapid growth of multimodal data. It takes one type of data as the query to retrieve relevant data of another type. For example, a user can use a text to retrieve relevant pictures or videos. Since the query and its retrieved results can be of different modalities, how to measure the content similarity between different modalities of data remains a challenge. Various methods have been proposed to deal with such a problem. In this paper, we first review a number of representative methods for cross-modal retrieval and classify them into two main groups: 1) real-valued representation learning, and 2) binary representation learning. Real-valued representation learning methods aim to learn real-valued common representations for different modalities of data. To speed up the cross-modal retrieval, a number of binary representation learning methods are proposed to map different modalities of data into a common Hamming space. Then, we introduce several multimodal datasets in the community, and show the experimental results on two commonly used multimodal datasets. The comparison reveals the characteristic of different kinds of cross-modal retrieval methods, which is expected to benefit both practical applications and future research. Finally, we discuss open problems and future research directions.) <|cite_end|> <|cite_start|> (Reference: CRET: Cross-Modal Retrieval Transformer for Efficient Text-Video Retrieval: Given a text query, the text-to-video retrieval task aims to find the relevant videos in the database.
Recently, model-based (MDB) methods have demonstrated superior accuracy than embedding-based (EDB) methods due to their excellent capacity of modeling local video/text correspondences, especially when equipped with large-scale pre-training schemes like ClipBERT. Generally speaking, MDB methods take a text-video pair as input and harness deep models to predict the mutual similarity, while EDB methods first utilize modality-specific encoders to extract embeddings for text and video, then evaluate the distance based on the extracted embeddings. Notably, MDB methods cannot produce explicit representations for text and video, instead, they have to exhaustively pair the query with every database item to predict their mutual similarities in the inference stage, which results in significant inefficiency in practical applications. In this work, we propose a novel EDB method CRET (Cross-modal REtrieval Transformer), which not only demonstrates promising efficiency in retrieval tasks, but also achieves better accuracy than existing MDB methods. The credits are mainly attributed to our proposed Cross-modal Correspondence Modeling (CCM) module and Gaussian Estimation of Embedding Space (GEES) loss. Specifically, the CCM module is composed by transformer decoders and a set of decoder centers. With the help of the learned decoder centers, the text/video embeddings can be efficiently aligned, without suffering from pairwise model-based inference. Moreover, to balance the information loss and computational overhead when sampling frames from a given video, we present a novel GEES loss, which implicitly conducts dense sampling in the video embedding space, without suffering from heavy computational cost. Extensive experiments show that without pre-training on extra datasets, our proposed CRET outperforms the state-of-the-art MDB methods that were pre-trained on additional datasets, meanwhile still shows promising efficiency in retrieval tasks.) <|cite_end|> <|cite_start|> (Reference: Self-Supervised learning with cross-modal transformers for emotion recognition: Emotion recognition is a challenging task due to limited availability of in-the-wild labeled datasets. Self-supervised learning has shown improvements on tasks with limited labeled datasets in domains like speech and natural language. Models such as BERT learn to incorporate context in word embeddings, which translates to improved performance in downstream tasks like question answering. In this work, we extend self-supervised training to multi-modal applications. We learn multi-modal representations using a transformer trained on the masked language modeling task with audio, visual and text features. This model is fine-tuned on the downstream task of emotion recognition. Our results on the CMU-MOSEI dataset show that this pre-training technique can improve the emotion recognition performance by up to 3% compared to the baseline.) <|cite_end|>. The main difference between multimodal and crossmodal learning is that crossmodal learning requires shared characteristics across modalities to compensate for missing information, enabling data of one modality to retrieve, query, or predict data of another modality. Unfortunately, this research direction still falls short of expectations, and there remains a large gap among research teams and domains <|cite_start|> (Reference: Cross-Modal Learning: Adaptivity, Prediction and Interaction: ) <|cite_end|>.
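The crossmodal idea of querying one modality with another can be illustrated with a minimal NumPy sketch: two encoders map their modalities into a shared embedding space, and cosine similarity ranks candidates of the other modality. The random linear projections below stand in for trained encoders (e.g., trained with a contrastive objective) and are purely illustrative.
\begin{verbatim}
# Toy crossmodal retrieval: shared embedding space + cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
W_text = rng.standard_normal((300, 64))     # text features  -> shared space (untrained stand-in)
W_image = rng.standard_normal((512, 64))    # image features -> shared space (untrained stand-in)

def embed(features, W):
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

image_db = embed(rng.standard_normal((100, 512)), W_image)   # 100 candidate images
text_query = embed(rng.standard_normal((1, 300)), W_text)    # one text query

scores = image_db @ text_query.T                              # cosine similarities
top5 = np.argsort(scores.ravel())[::-1][:5]                   # indices of best-matching images
print(top5)
\end{verbatim}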
In light of the abovementioned discussions, we are conducting research and development to build a multimodal and crossmodal AI framework for smart data analysis. The framework aims to provide additional intelligent layers to data analysis progress that can flexibly change from using only multimodal AI, crossmodal AI, or hybrid multi-crossmodal AI for analyzing data. We also introduce several instances of this framework designed for a particular domain, such as air pollution forecast, congestion prediction, and traffic incident querying. <|paper_end|> | [
"<|reference_start|> Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research. <|reference_end|>",
"<|reference_start|> A Comprehensive Survey on Cross-modal Retrieval: In recent years, cross-modal retrieval has drawn much attention due to the rapid growth of multimodal data. It takes one type of data as the query to retrieve relevant data of another type. For example, a user can use a text to retrieve relevant pictures or videos. Since the query and its retrieved results can be of different modalities, how to measure the content similarity between different modalities of data remains a challenge. Various methods have been proposed to deal with such a problem. In this paper, we first review a number of representative methods for cross-modal retrieval and classify them into two main groups: 1) real-valued representation learning, and 2) binary representation learning. Real-valued representation learning methods aim to learn real-valued common representations for different modalities of data. To speed up the cross-modal retrieval, a number of binary representation learning methods are proposed to map different modalities of data into a common Hamming space. Then, we introduce several multimodal datasets in the community, and show the experimental results on two commonly used multimodal datasets. The comparison reveals the characteristic of different kinds of cross-modal retrieval methods, which is expected to benefit both practical applications and future research. Finally, we discuss open problems and future research directions. <|reference_end|>",
"<|reference_start|> CRET: Cross-Modal Retrieval Transformer for Efficient Text-Video Retrieval: Given a text query, the text-to-video retrieval task aims to find the relevant videos in the database. Recently, model-based (MDB) methods have demonstrated superior accuracy than embedding-based (EDB) methods due to their excellent capacity of modeling local video/text correspondences, especially when equipped with large-scale pre-training schemes like ClipBERT. Generally speaking, MDB methods take a text-video pair as input and harness deep models to predict the mutual similarity, while EDB methods first utilize modality-specific encoders to extract embeddings for text and video, then evaluate the distance based on the extracted embeddings. Notably, MDB methods cannot produce explicit representations for text and video, instead, they have to exhaustively pair the query with every database item to predict their mutual similarities in the inference stage, which results in significant inefficiency in practical applications. In this work, we propose a novel EDB method CRET (Cross-modal REtrieval Transformer), which not only demonstrates promising efficiency in retrieval tasks, but also achieves better accuracy than existing MDB methods. The credits are mainly attributed to our proposed Cross-modal Correspondence Modeling (CCM) module and Gaussian Estimation of Embedding Space (GEES) loss. Specifically, the CCM module is composed by transformer decoders and a set of decoder centers. With the help of the learned decoder centers, the text/video embeddings can be efficiently aligned, without suffering from pairwise model-based inference. Moreover, to balance the information loss and computational overhead when sampling frames from a given video, we present a novel GEES loss, which implicitly conducts dense sampling in the video embedding space, without suffering from heavy computational cost. Extensive experiments show that without pre-training on extra datasets, our proposed CRET outperforms the state-of-the-art MDB methods that were pre-trained on additional datasets, meanwhile still shows promising efficiency in retrieval tasks. <|reference_end|>",
"<|reference_start|> Cross-Modal Learning: Adaptivity, Prediction and Interaction: <|reference_end|>"
] | [
0,
3,
4,
6
] | {"<|cite_1|>": "arxiv-125183", "<|cite_2|>": "ss-1481807", "<|cite_3|>": "ss-1514419", "<|cite_4|>": "arxiv-102526", "<|cite_5|>": "ss-688215", "<|cite_6|>": "arxiv-304937", "<|cite_7|>": "ss-959103"} |
2310.03182 | <|paper_start|> Title: Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models
Abstract: Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models: Medical image classification is a critical problem for healthcare, with the potential to alleviate the workload of doctors and facilitate diagnoses of patients. However, two challenges arise when deploying deep learning models to real-world healthcare applications. First, neural models tend to learn spurious correlations instead of desired features, which could fall short when generalizing to new domains (e.g., patients with different ages). Second, these black-box models lack interpretability. When making diagnostic predictions, it is important to understand why a model makes a decision for trustworthy and safety considerations. In this paper, to address these two limitations, we propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts. Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model. We systematically evaluate our method on eight medical image classification datasets to verify its effectiveness. On challenging datasets with strong confounding factors, our method can mitigate spurious correlations thus substantially outperform standard visual encoders and other baselines. Finally, we show how classification with a small number of concepts brings a level of interpretability for understanding model decisions through case studies in real medical data.
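The concept-bottleneck pattern summarized in the abstract can be sketched as follows: an image embedding is scored against the embeddings of natural-language clinical concepts in a shared vision-language space, and a small linear head over those interpretable concept scores produces the prediction. The concept list, embeddings, and dimensions below are placeholders for illustration, not the paper's actual prompts or models.
\begin{verbatim}
# Toy concept-bottleneck scoring: per-concept similarities, then a linear head.
import numpy as np

concepts = ["irregular border", "asymmetric shape", "dark pigmentation"]   # hypothetical examples

rng = np.random.default_rng(0)
image_emb = rng.standard_normal(64)                        # stand-in for an image-tower embedding
concept_embs = rng.standard_normal((len(concepts), 64))    # stand-ins for concept text embeddings

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = normalize(concept_embs) @ normalize(image_emb)    # one interpretable score per concept
W = rng.standard_normal((2, len(concepts)))                # small linear head over concept scores
logits = W @ scores                                        # final class prediction

print(dict(zip(concepts, np.round(scores, 3))), logits)
\end{verbatim}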
Introduction
Medical image classification is a critical yet challenging problem in machine learning for healthcare. The development of deep learning models has demonstrated great success <|cite_start|> (Reference: A Survey on Deep Learning in Medical Image Analysis: Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.) <|cite_end|> <|cite_start|> (Reference: Big Self-Supervised Models Advance Medical Image Classification: Self-supervised pretraining followed by supervised fine-tuning has seen success in image recognition, especially when labeled examples are scarce, but has received limited attention in medical image analysis. This paper studies the effectiveness of self-supervised learning as a pretraining strategy for medical image classification. We conduct experiments on two distinct tasks: dermatology skin condition classification from digital camera images and multi-label chest X-ray classification, and demonstrate that self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled domain-specific medical images significantly improves the accuracy of medical image classifiers. We introduce a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning. Combining our contributions, we achieve an improvement of 6.7% in top-1 accuracy and an improvement of 1.1% in mean AUC on dermatology and chest X-ray classification respectively, outperforming strong supervised baselines pretrained on ImageNet. In addition, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images.) <|cite_end|> <|cite_start|> (Reference: Transformers in Medical Imaging: A Survey: Following unprecedented success on the natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as {de facto} operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest for Transformers that can capture global context compared to CNNs with local receptive fields. Inspired from this transition, in this survey, we attempt to provide a comprehensive review of the applications of Transformers in medical imaging covering various aspects, ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, reconstruction, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop taxonomy, identify application-specific challenges as well as provide insights to solve them, and highlight recent trends. 
Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and outlining promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development in this field, we intend to regularly update the relevant latest papers and their open-source implementations at \url{https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging}.) <|cite_end|> <|cite_start|> (Reference: Semi-supervised Multi-Label Classification with 3D CBAM Resnet for Tuberculosis Cavern Report: Detection and characterization of tuberculosis and the evaluation of lesion characteristics are challenging. To provide a solution for a multi-label classification task of tuberculosis cavern report task, we performed a deep learning study with backbones of 3D Resnet. Semi-supervised learning strategy was implied in this study to leverage the unlabeled dataset from cavern detection task. A convolutional block attention model (CBAM) was used to add an attention mechanism in each block of the Resnet to further improve the performance of the convolutional neural network (CNN). Our solution is ranked the 1st place with submissions obtained Mean_AUC of 0.687 and 0.681 for this task.) <|cite_end|>, by achieving superior performance in benchmarks and competitions. However, there are two unsolved problems that still prevent us from deploying these models in clinical usages.
The first problem is the presence of confounding factors <|cite_start|> (Reference: Confounding variables can degrade generalization performance of radiological deep learning models: Early results in using convolutional neural networks (CNNs) on x-rays to diagnose disease have been promising, but it has not yet been shown that models trained on x-rays from one hospital or one group of hospitals will work equally well at different hospitals. Before these tools are used for computer-aided diagnosis in real-world clinical settings, we must verify their ability to generalize across a variety of hospital systems. A cross-sectional design was used to train and evaluate pneumonia screening CNNs on 158,323 chest x-rays from NIH (n=112,120 from 30,805 patients), Mount Sinai (42,396 from 12,904 patients), and Indiana (n=3,807 from 3,683 patients). In 3 / 5 natural comparisons, performance on chest x-rays from outside hospitals was significantly lower than on held-out x-rays from the original hospital systems. CNNs were able to detect where an x-ray was acquired (hospital system, hospital department) with extremely high accuracy and calibrate predictions accordingly. The performance of CNNs in diagnosing diseases on x-rays may reflect not only their ability to identify disease-specific imaging findings on x-rays, but also their ability to exploit confounding information. Estimates of CNN performance based on test data from hospital systems used for model training may overstate their likely real-world performance.) <|cite_end|> <|cite_start|> (Reference: Public covid-19 x-ray datasets and their impact on model bias--a systematic review of a significant problem: Computer-aided-diagnosis and stratification of COVID-19 based on chest X-ray suffers from weak bias assessment and limited quality-control. Undetected bias induced by inappropriate use of datasets, and improper consideration of confounders prevents the translation of prediction models into clinical practice. By adopting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 out of more than a hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably most of the datasets utilised in 201 papers published in peer-reviewed journals, are not among these 9 datasets, thus leading to models with high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers to select the most suitable datasets for their task.) <|cite_end|> in medical data that hurts generalization. Neural networks are prone to learn spurious correlations for classification tasks.
In the non-medical domain, <|cite_start|> (Reference: An Investigation of Why Overparameterization Exacerbates Spurious Correlations: We study why overparameterization -- increasing model size well beyond the point of zero training error -- can hurt test error on minority groups despite improving average test error when there are spurious correlations in the data. Through simulations and experiments on two image datasets, we identify two key properties of the training data that drive this behavior: the proportions of majority versus minority groups, and the signal-to-noise ratio of the spurious correlations. We then analyze a linear setting and theoretically show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt. Our analysis leads to a counterintuitive approach of subsampling the majority group, which empirically achieves low minority error in the overparameterized regime, even though the standard approach of upweighting the minority fails. Overall, our results suggest a tension between using overparameterized models versus using all the training data for achieving low worst-group error.) <|cite_end|> found that models trained on the Waterbirds dataset correlate waterbirds with backgrounds containing water, and models trained on the CelebA dataset correlate males with dark hair.
This could be more of an issue for medical image classification, as confounding factors broadly exist and labeled data are often limited <|cite_start|> (Reference: Machine learning approaches in medical image analysis: From detection to diagnosis: ) <|cite_end|>.
Take, for instance, the classification of patient X-rays as Covid-19 or normal: factors such as the hospital where the X-ray was acquired and the age of the patient can strongly correlate with the target disease label.
To quantify this issue, we curated datasets with known confounding factors such as hospital, age, and gender, and found that standard visual classifiers, as well as popular previous methods designed to mitigate spurious correlations, often perform poorly and struggle to generalize on these datasets. As a concrete example, instead of learning to predict Covid or normal, the classifier might learn to predict whether the X-ray is from a young or an old patient.
The second problem is the lack of interpretability. Deep neural networks are inherently ``black-box'' models due to their complex non-linear structures. This raises safety and trust issues, as it is hard for humans to understand model behavior and to trust model decisions. Interpretability is especially important in clinical settings <|cite_start|> (Reference: Why black box machine learning should be avoided for high-stakes decisions, in brief: ) <|cite_end|>, as the adoption of deep learning models relies on building trust with healthcare professionals and patients. Clinicians often need to understand the underlying reasoning of the models to carefully make their decisions. Interpretable medical image classification models <|cite_start|> (Reference: Global and Local Interpretability for Cardiac MRI Classification: Deep learning methods for classifying medical images have demonstrated impressive accuracy in a wide range of tasks but often these models are hard to interpret, limiting their applicability in clinical practice. In this work we introduce a convolutional neural network model for identifying disease in temporal sequences of cardiac MR segmentations which is interpretable in terms of clinically familiar measurements. The model is based around a variational autoencoder, reducing the input into a low-dimensional latent space in which classification occurs. We then use the recently developed `concept activation vector' technique to associate concepts which are diagnostically meaningful (eg. clinical biomarkers such as `low left-ventricular ejection fraction') to certain vectors in the latent space. These concepts are then qualitatively inspected by observing the change in the image domain resulting from interpolations in the latent space in the direction of these vectors. As a result, when the model classifies images it is also capable of providing naturally interpretable concepts relevant to that classification and demonstrating the meaning of those concepts in the image domain. Our approach is demonstrated on the UK Biobank cardiac MRI dataset where we detect the presence of coronary artery disease.) <|cite_end|> <|cite_start|> (Reference: An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization: Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we extend the globally-aware multiple instance classifier, a framework we proposed to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a final prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions.
On the NYU Breast Cancer Screening Dataset, consisting of more than one million images, our model achieves an AUC of 0.93 in classifying breasts with malignant findings, outperforming ResNet-34 and Faster R-CNN. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11. The proposed model is available online: https://github.com/nyukat/GMIC.) <|cite_end|> <|cite_start|> (Reference: Interpretable deep learning systems for multi-class segmentation and classification of non-melanoma skin cancer: ) <|cite_end|> <|cite_start|> (Reference: Explainable artificial intelligence (XAI) in deep learning-based medical image analysis: With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.) <|cite_end|> allow for better error analysis, bias detection, ensuring patients safety, and trust building.
But how do we build a system that is both robust and interpretable? Inspired by recent work that uses concepts <|cite_start|> (Reference: Concept Bottleneck Models: We seek to learn models that we can interact with using high-level concepts: if the model did not think there was a bone spur in the x-ray, would it still predict severe arthritis? State-of-the-art models today do not typically support the manipulation of concepts like "the existence of bone spurs", as they are trained end-to-end to go directly from raw input (e.g., pixels) to output (e.g., arthritis severity). We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label. By construction, we can intervene on these concept bottleneck models by editing their predicted concept values and propagating these changes to the final prediction. On x-ray grading and bird identification, concept bottleneck models achieve competitive accuracy with standard end-to-end models, while enabling interpretation in terms of high-level clinical concepts ("bone spurs") or bird attributes ("wing color"). These models also allow for richer human-model interaction: accuracy improves significantly if we can correct model mistakes on concepts at test time.) <|cite_end|> <|cite_start|> (Reference: Post-hoc Concept Bottleneck Models: Concept Bottleneck Models (CBMs) map the inputs onto a set of interpretable concepts (``the bottleneck'') and use the concepts to make predictions. A concept bottleneck enhances interpretability since it can be investigated to understand what concepts the model "sees" in an input and which of these concepts are deemed important. However, CBMs are restrictive in practice as they require dense concept annotations in the training data to learn the bottleneck. Moreover, CBMs often do not match the accuracy of an unrestricted neural network, reducing the incentive to deploy them in practice. In this work, we address these limitations of CBMs by introducing Post-hoc Concept Bottleneck models (PCBMs). We show that we can turn any neural network into a PCBM without sacrificing model performance while still retaining the interpretability benefits. When concept annotations are not available on the training data, we show that PCBM can transfer concepts from other datasets or from natural language descriptions of concepts via multimodal models. A key benefit of PCBM is that it enables users to quickly debug and update the model to reduce spurious correlations and improve generalization to new distributions. PCBM allows for global model edits, which can be more efficient than previous works on local interventions that fix a specific prediction. Through a model-editing user study, we show that editing PCBMs via concept-level feedback can provide significant performance gains without using data from the target domain or model retraining.) <|cite_end|> or descriptions <|cite_start|> (Reference: Visual Classification via Description from Large Language Models: Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. 
The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline.) <|cite_end|> to amplify image classification and gain interpretability, in this paper, we address the two clinical challenges in a unified framework through natural language concepts (illustrated in~\cref{fig:teaser}).
Specifically, we elicit medical knowledge from large language models (e.g., GPT-4) in a zero-shot manner to build a set of concepts, i.e., concise descriptors of each disease or pathology; we then project visual features into the concept space using a vision-language model to connect the two modalities, and finally classify medical images with the resulting concept vector.
By doing so, we explicitly tell the model which features to rely on rather than leaving it free to exploit spurious correlations, hence improving robustness while gaining interpretability.
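To make the pipeline concrete, the sketch below illustrates classification by concept scores, assuming image and concept-text embeddings from a CLIP-style vision-language model are already available; the embedding dimension, concept count, and linear classifier are illustrative placeholders rather than the exact configuration used in this work.
\begin{verbatim}
import numpy as np

# Hypothetical inputs: an image embedding and embeddings of concept strings,
# e.g. produced by a CLIP-style vision-language model (not shown here).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)            # one chest X-ray, embedded
concept_embs = rng.normal(size=(8, 512))    # 8 concept descriptions, embedded

def normalize(v, axis=-1):
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

# Project the image into "concept space": one similarity score per concept.
concept_scores = normalize(concept_embs) @ normalize(image_emb)

# A linear classifier over concept scores (weights would normally be learned).
W = rng.normal(size=(2, 8))                 # 2 classes: normal / Covid-19
prediction = (W @ concept_scores).argmax()
\end{verbatim}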
We conduct experiments on eight datasets with case studies and human evaluation, and find several advantages with this new paradigm:
\begin{enumerate}
\item It is easy to build in an automatic way, with minimal human effort and medical expertise.
\item On challenging datasets with strong confounding factors, classification using concepts can alleviate spurious correlations and substantially improve classification performance: an average of 19\% accuracy improvement over using raw image features.
\item Even on popular benchmarks without explicit confounding factors, where the train and test set distributions are assumed to be the same, this new paradigm still attains performance competitive with, and sometimes even better than, black-box visual encoders.
\item Moreover, we gain a level of interpretability through this framework, by associating images with a small number of relevant concepts contributing to classification.
\end{enumerate}
\vspace{-5pt}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/teaser.pdf}
\vspace{-30pt}
\caption{High-level illustration of our new paradigm. It utilizes concepts for medical image classification to achieve interpretability and robustness while maintaining accuracy. \textbf{Left}: Classification with a visual encoder; \textbf{Right}: Classification with concepts. A chest X-ray from a healthy older individual may be classified as Covid-19 because of age-related features, while our method can mitigate this spurious correlation by classifying with clinical concepts.
}
\label{fig:teaser}
\vspace{-5pt}
\end{figure} <|paper_end|> | [
"<|reference_start|> Big Self-Supervised Models Advance Medical Image Classification: Self-supervised pretraining followed by supervised fine-tuning has seen success in image recognition, especially when labeled examples are scarce, but has received limited attention in medical image analysis. This paper studies the effectiveness of self-supervised learning as a pretraining strategy for medical image classification. We conduct experiments on two distinct tasks: dermatology skin condition classification from digital camera images and multi-label chest X-ray classification, and demonstrate that self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled domain-specific medical images significantly improves the accuracy of medical image classifiers. We introduce a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning. Combining our contributions, we achieve an improvement of 6.7% in top-1 accuracy and an improvement of 1.1% in mean AUC on dermatology and chest X-ray classification respectively, outperforming strong supervised baselines pretrained on ImageNet. In addition, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images. <|reference_end|>",
"<|reference_start|> Transformers in Medical Imaging: A Survey: Following unprecedented success on the natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as {de facto} operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest for Transformers that can capture global context compared to CNNs with local receptive fields. Inspired from this transition, in this survey, we attempt to provide a comprehensive review of the applications of Transformers in medical imaging covering various aspects, ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, reconstruction, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop taxonomy, identify application-specific challenges as well as provide insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and outlining promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development in this field, we intend to regularly update the relevant latest papers and their open-source implementations at \\url{https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging}. <|reference_end|>",
"<|reference_start|> Interpretable deep learning systems for multi-class segmentation and classification of non-melanoma skin cancer: <|reference_end|>",
"<|reference_start|> Concept Bottleneck Models: We seek to learn models that we can interact with using high-level concepts: if the model did not think there was a bone spur in the x-ray, would it still predict severe arthritis? State-of-the-art models today do not typically support the manipulation of concepts like \"the existence of bone spurs\", as they are trained end-to-end to go directly from raw input (e.g., pixels) to output (e.g., arthritis severity). We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label. By construction, we can intervene on these concept bottleneck models by editing their predicted concept values and propagating these changes to the final prediction. On x-ray grading and bird identification, concept bottleneck models achieve competitive accuracy with standard end-to-end models, while enabling interpretation in terms of high-level clinical concepts (\"bone spurs\") or bird attributes (\"wing color\"). These models also allow for richer human-model interaction: accuracy improves significantly if we can correct model mistakes on concepts at test time. <|reference_end|>"
] | [
1,
2,
11,
13
] | {"<|multi_cite_1_1|>": "arxiv-116899", "<|multi_cite_1_2|>": "arxiv-314963", "<|multi_cite_1_3|>": "arxiv-394476", "<|multi_cite_1_4|>": "ss-2436246", "<|multi_cite_3_1|>": "arxiv-164380", "<|multi_cite_3_2|>": "ss-2436247", "<|cite_2|>": "arxiv-264429", "<|cite_5|>": "ss-1273438", "<|cite_6|>": "ss-746244", "<|multi_cite_7_1|>": "arxiv-209775", "<|multi_cite_7_2|>": "arxiv-248941", "<|multi_cite_7_3|>": "ss-1516319", "<|multi_cite_7_4|>": "arxiv-356685", "<|multi_cite_8_1|>": "arxiv-277352", "<|multi_cite_8_2|>": "arxiv-423577", "<|cite_9|>": "arxiv-453694"} |
2106.05846 | <|paper_start|> Title: Latent Space Arc Therapy Optimization
Abstract: Latent Space Arc Therapy Optimization: Volumetric modulated arc therapy planning is a challenging problem in high-dimensional, non-convex optimization. Traditionally, heuristics such as fluence-map-optimization-informed segment initialization use locally optimal solutions to begin the search of the full arc therapy plan space from a reasonable starting point. These routines facilitate arc therapy optimization such that clinically satisfactory radiation treatment plans can be created in about 10 minutes. However, current optimization algorithms favor solutions near their initialization point and are slower than necessary due to plan overparameterization. In this work, arc therapy overparameterization is addressed by reducing the effective dimension of treatment plans with unsupervised deep learning. An optimization engine is then built based on low-dimensional arc representations which facilitates faster planning times.
Introduction
\label{intro}
External beam radiotherapy (EBRT) is a very common modality in cancer treatment. In EBRT, patients rest on a treatment couch and are treated with a photon beam generated by a linear accelerator (linac). External beam therapy is ideal for both individual patients and entire hospital systems, since it is non-invasive, effective, and accommodates a relatively high patient throughput. One major consideration for EBRT treatment planning is photon attenuation in healthy tissues. The usual physical model for narrow-beam photon attenuation assumes that the rate of change of photon fluence $\phi$ with respect to depth $x$ is directly proportional to the fluence at that depth, i.e.
\begin{equation*}
\begin{gathered}
\frac{d\phi}{dx} = - \mu \phi(x) \ , \\
\implies \phi(x) = \phi_0 e^{-\mu x} \ .
\end{gathered}
\end{equation*}
This model must be modified to describe the dose deposition of clinical radiotherapy beams, but the overall rapid attenuation with depth still applies <|cite_start|> (Reference: Khan's the physics of radiation therapy: Expand your understanding of the physics and practical clinical applications of advanced radiation therapy technologies with Khan's The Physics of Radiation Therapy, 5th edition, the book that set the standard in the field. This classic full-color text helps the entire radiation therapy team-radiation oncologists, medical physicists, dosimetrists, and radiation therapists-develop a thorough understanding of 3D conformal radiotherapy (3D-CRT), stereotactic radiosurgery (SRS), high dose-rate remote afterloaders (HDR), intensity modulated radiation therapy (IMRT), image-guided radiation therapy (IGRT), Volumetric Modulated Arc Therapy (VMAT), and proton beam therapy, as well as the physical concepts underlying treatment planning, treatment delivery, and dosimetry. In preparing this new Fifth Edition, Dr. Kahn and new co-author Dr. John Gibbons made chapter-by-chapter revisions in the light of the latest developments in the field, adding new discussions, a new chapter, and new color illustrations throughout. Now even more precise and relevant, this edition is ideal as a reference book for practitioners, a textbook for students, and a constant companion for those preparing for their board exams. Features: stay on top of the latest advances in the field with new sections and/or discussions of Image Guided Radiation Therapy (IGRT), Volumetric Modulated Arc Therapy (VMAT), and the Failure Mode Event Analysis (FMEA) approach to quality assurance; deepen your knowledge of Stereotactic Body Radiotherapy (SBRT) through a completely new chapter that covers SBRT in greater detail; expand your visual understanding with new full color illustrations that reflect current practice and depict new procedures; and access the authoritative information you need fast through the new companion website which features fully searchable text and an image bank for greater convenience in studying and teaching.) <|cite_end|>. To treat a deep target with external beam photon therapy, some radiation dose will be deposited in surrounding tissues. The physical limitations of radiation dose deposition motivate the practice of radiation treatment planning, during which customized radiation beams are manufactured to treat the target and minimize the risk of toxicities.
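As a purely illustrative numerical check of the narrow-beam model above (the attenuation coefficient is a placeholder rather than a clinical beam value), the following shows how quickly the primary fluence falls off with depth:
\begin{verbatim}
import numpy as np

phi_0 = 1.0          # incident fluence (arbitrary units)
mu = 0.05            # linear attenuation coefficient in 1/cm (placeholder value)
depths = np.array([0.0, 5.0, 10.0, 20.0])   # depths in cm

phi = phi_0 * np.exp(-mu * depths)
# At 10 cm the primary fluence has already fallen to exp(-0.5), about 61% of
# phi_0, illustrating why surrounding tissue unavoidably receives dose.
\end{verbatim}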
Radiation treatment planning has developed substantially over the course of a few decades. 3D conformal radiotherapy, a practice in which the target is simply cut out with collimators in the beam's eye view and treated, was considered a state-of-the-art technique only three decades ago. In the 1990s, intensity modulated radiotherapy (IMRT) was developed by Brahme, Webb, Bortfeld, and Boyer, among others <|cite_start|> (Reference: Intensity-modulated radiation therapy: Intensity‐modulated radiation therapy (IMRT) has found widespread use in the treatment of head and neck cancers. This technology allows for conformal dose distributions around a tumor target while a rapid dose fall‐off spares surrounding critical structures. The properties of IMRT are particularly suited for treating head and neck cancers due to the close proximity of dose‐limiting normal tissues allowing for potential dose escalation. Further studies are ongoing to investigate long‐term clinical outcomes and toxicity. J. Surg. Oncol. 2008;97:691–696. © 2008 Wiley‐Liss, Inc.) <|cite_end|>. The development of IMRT marked a leap in radiotherapy technology. In IMRT treatment planning, 3D CRT beam shapes are decomposed into smaller beamlets (or bixels), and the intensities of those beamlets are mathematically optimized to suit the patient's dose distribution needs. Optimized fluence profiles are recreated with multileaf collimators (MLC) in the head of the linac, which use many narrow tungsten blocks (``leaves'') to create smooth edges and match structure boundaries in the beam's eye view.
Volumetric modulated arc therapy (VMAT) is an extension of IMRT. In VMAT, the treatment planner requires that delivery be performed continuously over an arc of gantry rotation. Rather than creating a dose distribution from a few discrete gantry angles, up to 180 co-planar beams are used to describe the radiation delivery of a sweeping arc (Figure \ref{vmatabc}c).
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.6\linewidth]{figures/vmat.pdf}
\caption[Arc Therapy Description]{\textbf{a)} 3D Conformal therapy uses uniform fluences at discrete gantry angles to treat targets at depth. \textbf{b)} In IMRT, modulated beam profiles are delivered with a series of apertures at each angle. \textbf{c)} VMAT delivers a series of co-planar beams in a single sweep of the linac's gantry.}
\label{vmatabc}
\end{figure}
The requirement that the beams for a single arc be co-planar and deliverable with one sweep of the gantry\footnote{In practice, two or three arcs with different collimator angles is common.} encourages rapid treatments, and the use of so many beams promotes target coverage and normal tissue sparing <|cite_start|> (Reference: Volumetric modulated arc therapy: IMRT in a single gantry arc: In this work a novel plan optimization platform is presented where treatment is delivered efficiently and accurately in a single dynamically modulated arc. Improvements in patient care achieved through image-guided positioning and plan adaptation have resulted in an increase in overall treatment times. Intensity-modulated radiation therapy (IMRT) has also increased treatment time by requiring a larger number of beam directions, increased monitor units (MU), and, in the case of tomotherapy, a slice-by-slice delivery. In order to maintain a similar level of patient throughput it will be necessary to increase the efficiency of treatment delivery. The solution proposed here is a novel aperture-based algorithm for treatment plan optimization where dose is delivered during a single gantry arc of up to 360 deg. The technique is similar to tomotherapy in that a full 360 deg of beam directions are available for optimization but is fundamentally different in that the entire dose volume is delivered in a single source rotation. The new technique is referred to as volumetric modulated arc therapy (VMAT). Multileaf collimator (MLC) leaf motion and number of MU per degree of gantry rotation is restricted during the optimization so that gantry rotation speed, leaf translation speed, and dose rate maxima do not excessively limit the delivery efficiency. During planning, investigators model continuous gantry motion by a coarse sampling of static gantry positions and fluence maps or MLC aperture shapes. The technique presented here is unique in that gantry and MLC position sampling is progressively increased throughout the optimization. Using the full gantry range will theoretically provide increased flexibility in generating highly conformal treatment plans. In practice, the additional flexibility is somewhat negated by the additional constraints placed on the amount of MLC leaf motion between gantry samples. A series of studies are performed that characterize the relationship between gantry and MLC sampling, dose modeling accuracy, and optimization time. Results show that gantry angle and MLC sample spacing as low as 1 deg and 0.5 cm, respectively, is desirable for accurate dose modeling. It is also shown that reducing the sample spacing dramatically reduces the ability of the optimization to arrive at a solution. The competing benefits of having small and large sample spacing are mutually realized using the progressive sampling technique described here. Preliminary results show that plans generated with VMAT optimization exhibit dose distributions equivalent or superior to static gantry IMRT. Timing studies have shown that the VMAT technique is well suited for on-line verification and adaptation with delivery times that are reduced to approximately 1.5-3 min for a 200 cGy fraction.) 
<|cite_end|> <|cite_start|> (Reference: Volumetric modulated arc therapy: a review of current literature and clinical use in practice: Volumetric modulated arc therapy (VMAT) is a novel radiation technique, which can achieve highly conformal dose distributions with improved target volume coverage and sparing of normal tissues compared with conventional radiotherapy techniques. VMAT also has the potential to offer additional advantages, such as reduced treatment delivery time compared with conventional static field intensity modulated radiotherapy (IMRT). The clinical worldwide use of VMAT is increasing significantly. Currently the majority of published data on VMAT are limited to planning and feasibility studies, although there is emerging clinical outcome data in several tumour sites. This article aims to discuss the current use of VMAT techniques in practice and review the available data from planning and clinical outcome studies in various tumour sites including prostate, pelvis (lower gastrointestinal, gynaecological), head and neck, thoracic, central nervous system, breast and other tumour sites.) <|cite_end|>. The practice of arc therapy is supported by many clinical trials and retrospective analyses <|cite_start|> (Reference: Volumetric modulated arc therapy (VMAT) vs. serial tomotherapy, step-and-shoot IMRT and 3D-conformal RT for treatment of prostate cancer.: ) <|cite_end|> <|cite_start|> (Reference: Assessing the role of volumetric modulated arc therapy (VMAT) relative to IMRT and helical tomotherapy in the management of localized, locally advanced, and post-operative prostate cancer.: ) <|cite_end|> <|cite_start|> (Reference: Treatment and dosimetric advantages between VMAT, IMRT, and helical tomotherapy in prostate cancer.: ) <|cite_end|>. Despite the many clinical benefits of VMAT over conventional IMRT, the treatment planning software must manage significantly more computational complexity.
\subsection{Optimization Formalism}\label{convoptrout}
VMAT plans are specified by beam weights and leaf positions for about 160 leaves for around 180 control points throughout a delivery, depending on the linear accelerator and treatment planning preferences. The task of VMAT plan creation is then an optimization problem with order-10,000 variables whose objective is defined by complicated radiation transport. A formal statement of the VMAT optimization problem was laid out in detail by Unkelbach \textit{et al.} in 2015 <|cite_start|> (Reference: Optimization approaches to volumetric modulated arc therapy planning: Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.) <|cite_end|>. In this formulation, VMAT plans are parameterized as a collection of discrete apertures distributed over the arc. We use the array $x^\phi_{ij}$ to represent the transmission of beamlets $ij$ at gantry angle $\phi$, where each beamlet represents a small rectangular area in the beam's eye view. For a specified machine and patient geometry, we can numerically evaluate the dose at voxel $k$ due to one monitor unit (MU) through beamlet $ij$ and store the results in a dose influence matrix $D^\phi_{ijk}$. During the optimization procedure, the dose calculation for voxel $k$ is reduced to a simple summation,
\begin{equation}\label{dosesum}
d_k = \sum_\phi \sum_{ij} D^\phi_{ijk} x^\phi_{ij} \ .
\end{equation}
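Computationally, the dose summation above is a (typically sparse) matrix-vector product over all beamlets and control points, once the beamlet indices $ij$ and angles $\phi$ are flattened into a single column index. A minimal sketch with placeholder sizes and random data standing in for a real dose influence matrix:
\begin{verbatim}
import numpy as np
from scipy.sparse import random as sparse_random

n_voxels, n_beamlets = 10_000, 50_000   # illustrative sizes only
# D[k, m]: dose to voxel k per MU through flattened beamlet m (all angles stacked)
D = sparse_random(n_voxels, n_beamlets, density=1e-3,
                  format="csr", random_state=0)

x = np.random.rand(n_beamlets)          # flattened beamlet intensities x^phi_ij
d = D @ x                               # dose to every voxel
\end{verbatim}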
In the context of direct aperture optimization (DAO), apertures are specified by the position of MLC leaves on the left and right banks of the collimator <|cite_start|> (Reference: Direct aperture optimization: a turnkey solution for step-and-shoot IMRT: IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.) <|cite_end|> <|cite_start|> (Reference: Inverse planning for intensity-modulated arc therapy using direct aperture optimization: Intensity-modulated arc therapy (IMAT) is a radiation therapy delivery technique that combines gantry rotation with dynamic multi-leaf collimation (MLC). With IMAT, the benefits of rotational IMRT can be realized using a conventional linear accelerator and a conventional MLC. Thus far, the advantages of IMAT have gone largely unrealized due to the lack of robust automated planning tools capable of producing efficient IMAT treatment plans. This work describes an inverse treatment planning algorithm, called ‘direct aperture optimization’ (DAO) that can be used to generate inverse treatment plans for IMAT. In contrast to traditional inverse planning techniques where the relative weights of a series of pencil beams are optimized, DAO optimizes the leaf positions and weights of the apertures in the plan. This technique allows any delivery constraints to be enforced during the optimization, eliminating the need for a leaf-sequencing step. It is this feature that enables DAO to easily create inverse plans for IMAT. To illustrate the feasibility of DAO applied to IMAT, several cases are presented, including a cylindrical phantom, a head and neck patient and a prostate patient.) <|cite_end|>. If we take the second index of $x$ to be the leaf indexing direction, we can write
\begin{equation}\label{apdef}
x^\phi_{ij} = y^\phi [\Theta(i - l^\phi_j) - \Theta(i - r^\phi_j)] \ ,
\end{equation}
where $\Theta$ is the Heaviside step function, $y^\phi$ is the beam weighting (MUs delivered), and $r^\phi_j$ and $l^\phi_j$ are the right and left leaf end index positions for leaf $j$ at control point $\phi$. This is to say that beamlet $ij$ is set to $y^\phi$ if it falls between $l^\phi_j$ and $r^\phi_j$ and zero otherwise.
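Concretely, the equation above just builds a binary mask from the leaf ends and scales it by the beam weight. A small illustrative sketch for a single control point (grid size, weight, and leaf positions are arbitrary, and the convention $\Theta(0)=1$ is assumed):
\begin{verbatim}
import numpy as np

n_bixels_i, n_leaves = 40, 16      # i: along leaf travel, j: leaf index
y = 2.5                            # MU weight y^phi for this control point
rng = np.random.default_rng(0)
l = rng.integers(0, 20, size=n_leaves)        # left leaf-end indices l^phi_j
r = l + rng.integers(0, 20, size=n_leaves)    # right leaf-end indices, l <= r

i = np.arange(n_bixels_i)[:, None]            # shape (n_bixels_i, 1)
# Open beamlets (l_j <= i < r_j) transmit with weight y; all others are zero.
x = y * ((i >= l) & (i < r)).astype(float)    # shape (n_bixels_i, n_leaves)
\end{verbatim}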
In this case, we can write out the full optimization problem as such: for dosimetric objective function $f(d)$, return
\begin{equation}
y^*, l^*, r^* = \argmin_{y, l, r} f(d; y, l, r)
\end{equation}
subject to
\begin{equation}
d_k = \sum_\phi \sum_{ij} D^\phi_{ijk} x^\phi_{ij}
\end{equation}
\begin{equation}
x^\phi_{ij} = y^\phi [\Theta(i - l^\phi_j) - \Theta(i - r^\phi_j)]
\end{equation}
\begin{equation}
y^\phi \geq 0
\end{equation}
\begin{equation}\label{nonabutting}
l^\phi_j \leq r^\phi_j
\end{equation}
The last two constraints require that aperture intensities are all non-negative and that opposing MLC leaves do not cross, i.e., the left leaf end of each pair cannot pass its right leaf end. Notice also that the resolution of the beamlets in the leaf-indexing direction is limited by the leaf width.
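The dosimetric objective $f(d)$ is left generic above. One common choice, shown here only as a hedged sketch with placeholder prescription doses and weights, is a sum of one-sided quadratic penalties on target and organ-at-risk voxel doses:
\begin{verbatim}
import numpy as np

def dose_objective(d, target_idx, oar_idx,
                   d_presc=60.0, d_max_oar=20.0, w_target=1.0, w_oar=0.5):
    """One possible quadratic form of f(d); not the only clinically used choice."""
    target_term = w_target * np.mean((d[target_idx] - d_presc) ** 2)
    oar_term = w_oar * np.mean(np.maximum(d[oar_idx] - d_max_oar, 0.0) ** 2)
    return target_term + oar_term
\end{verbatim}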
\subsubsection{Traditional Approaches to Arc Therapy Optimization}\label{tradsection}
There are two popular approaches to VMAT optimization that have been implemented clinically. These algorithms solve the VMAT problem with satisfactory accuracy but are often slower than desired. An outline of each is provided below.
\textit{Global DAO with geometry-based segment initialization:} This algorithm was pioneered by Earl \textit{et al.} in 2003 and largely remains unchanged in modern clinical implementations <|cite_start|> (Reference: Inverse planning for intensity-modulated arc therapy using direct aperture optimization: Intensity-modulated arc therapy (IMAT) is a radiation therapy delivery technique that combines gantry rotation with dynamic multi-leaf collimation (MLC). With IMAT, the benefits of rotational IMRT can be realized using a conventional linear accelerator and a conventional MLC. Thus far, the advantages of IMAT have gone largely unrealized due to the lack of robust automated planning tools capable of producing efficient IMAT treatment plans. This work describes an inverse treatment planning algorithm, called ‘direct aperture optimization’ (DAO) that can be used to generate inverse treatment plans for IMAT. In contrast to traditional inverse planning techniques where the relative weights of a series of pencil beams are optimized, DAO optimizes the leaf positions and weights of the apertures in the plan. This technique allows any delivery constraints to be enforced during the optimization, eliminating the need for a leaf-sequencing step. It is this feature that enables DAO to easily create inverse plans for IMAT. To illustrate the feasibility of DAO applied to IMAT, several cases are presented, including a cylindrical phantom, a head and neck patient and a prostate patient.) <|cite_end|>. MLC leaf positions are initialized based on the shape of targets and OARs in the beam's eye view. Following initialization, some stochastic search algorithm such as simulated annealing (SA) or a genetic algorithm is used to search the full VMAT plan space <|cite_start|> (Reference: Segment-based dose optimization using a genetic algorithm: Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than that of beamlet-based optimization because of the complex dependence of the dose on the field shapes, and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the position of the left-bank leaves of each segment, the second for the position of the right-bank and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distribution. 
In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.) <|cite_end|>. Stochastic algorithms such as these are constructed to allow the search to escape local optima, so a poor initialization does not significantly harm the accuracy of the solution. The solution state is iteratively adjusted until the dose constraints are reasonably satisfied. This optimization structure is implemented in Varian's RapidArc.
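As a hedged illustration of the stochastic search used in this class of algorithms (not any vendor's actual implementation), a single simulated-annealing move might perturb one beam weight or one leaf position, re-evaluate the objective, and accept or reject the move with a temperature-dependent probability; machine constraints and the real dose engine are omitted here:
\begin{verbatim}
import numpy as np

def anneal_step(y, l, r, f, temperature, rng):
    """One illustrative annealing move: perturb a beam weight or a left leaf end.

    A real implementation would also move right leaves and enforce
    leaf-speed and gantry-speed constraints between neighboring control points.
    """
    y_new, l_new, r_new = y.copy(), l.copy(), r.copy()
    phi = rng.integers(y.size)                    # pick a control point
    if rng.random() < 0.5:                        # weight move
        y_new[phi] = max(0.0, y_new[phi] + rng.normal(scale=0.1))
    else:                                         # leaf move
        j = rng.integers(l.shape[1])
        l_new[phi, j] = int(np.clip(l_new[phi, j] + rng.integers(-1, 2),
                                    0, r_new[phi, j]))
    delta = f(y_new, l_new, r_new) - f(y, l, r)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        return y_new, l_new, r_new                # accept the move
    return y, l, r                                # reject the move
\end{verbatim}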
\textit{Local DAO with FMO-informed segment initialization:} Bzdusek \textit{et al.} developed this approach to introduce gradient information in the VMAT search process <|cite_start|> (Reference: Development and evaluation of an efficient approach to volumetric arc therapy planning: An efficient method for volumetric intensity modulated arc therapy (VMAT) planning was developed, where a single arc (360 degrees or less) is delivered under continuous variation of multileaf collimator (MLC) segments, dose rate, and gantry speed. Plans can be generated for any current linear accelerator that supports these degrees of freedom. MLC segments are derived from fluence maps at relatively coarsely sampled angular positions. The beam segments, dose rate, and gantry speed are then optimized using direct machine parameter optimization based on dose volume objectives and leaf motion constraints to minimize arc delivery time. The method can vary both dose rate and gantry speed or alternatively determine the optimal plan at constant dose rate and gantry speed. The method was used to retrospectively generate variable dose rate VMAT plans to ten patients (head and neck, prostate, brain, lung, and tonsil). In comparison to step-and-shoot intensity modulated radiation therapy, dosimetric plan quality was comparable or improved, estimated delivery times ranged from 70 to 160 s, and monitor units were consistently reduced in nine out of the ten cases by an average of approximately 6%. Optimization and final dose calculation took between 5 and 35 min depending on plan complexity.) <|cite_end|>. The initialization step in this algorithm is inspired by conventional IMRT optimization. Apertures at each control point are individually optimized with convex, gradient-based fluence map optimization (FMO). Following the individual optimizations, some arc sequencing procedure is used to represent the optimized fluences with physically deliverable MLC positions and beam weights. The overall solution state at this point is expected to be at a decent accuracy, from which gradient-based DAO can take over. Variations of this algorithm have been implemented in treatment planning systems such as SmartArc (Philips), Oncentra VMAT (Nucletron), RayArc (RaySearch Laboratories), and Monaco (Elekta) <|cite_start|> (Reference: Optimization approaches to volumetric modulated arc therapy planning: Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.) 
<|cite_end|> <|cite_start|> (Reference: Initial dosimetric evaluation of SmartArc--a novel VMAT treatment planning module implemented in a multi-vendor delivery chain: We performed an initial dosimetric evaluation of SmartArc – a novel VMAT planning module for the Philips Pinnacle treatment planning system. It was implemented in a multi‐vendor environment, with the other two major components of the delivery chain being MOSAIQ record and verify system (IMPAC Medical Systems, Sunnyvale, CA) and a Trilogy linac (Varian Medical Systems, Palo Alto, CA). A test suite of structure sets and dose objectives provided by the AAPM for multi‐institutional comparison of IMRT dosimetry was used. A total of fifty plans were successfully delivered. The effect of control point spacing on dosimetric accuracy was investigated. When calculated with the 4° spacing, the overall mean point dose errors measured with an ion chamber were 0.5±1.4and −0.3±1.4% for the PTV and OAR, respectively. The γ(3%, 3 mm) passing rate, measured for absolute dose with a biplanar diode array, was 98.2±1.6% (range 94.5–99.9%). Ninety percent of the passing rate values were above 97.7%. With the 6° control point spacing, the highly modulated plans exhibited large dosimetric errors (e.g. γ(3%, 3 mm) passing rates below 90% and ion chamber point dose errors of 6–12%), while the results were still acceptable for the simpler cases. The data show that the practical accuracy of the small‐arc approximation, which is at the heart of VMAT dose calculations, depends not only on the control point spacing, but also on the size and relative position of the MLC openings corresponding to the consecutive control points. The effect of the minimum allowed separation between the opposing leaves was found to be minimal. It appears that 4° control point spacing may be a good compromise between calculation speed and accuracy. However each institution is encouraged to establish its own treatment planning guidelines based on the case complexity and acceptable error level. PACS number: 87.55Qr) <|cite_end|>. These two algorithms are represented visually in Figure \ref{traditionalalgs}.
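The fluence-map-optimization stage that seeds this approach is itself a convex problem in the beamlet intensities, and can be sketched as projected gradient descent; the dose matrix, objective gradient, and step size below are placeholders rather than any vendor's implementation:
\begin{verbatim}
import numpy as np

def fmo_projected_gradient(D, grad_f, x0, step=1e-3, n_iter=200):
    """Minimize f(D @ x) over x >= 0 by projected gradient descent.

    D      : (n_voxels, n_beamlets) dose influence matrix for one beam angle.
    grad_f : callable returning df/dd for a dose vector d.
    """
    x = x0.copy()
    for _ in range(n_iter):
        d = D @ x
        x = x - step * (D.T @ grad_f(d))   # chain rule: df/dx = D^T (df/dd)
        np.maximum(x, 0.0, out=x)          # project onto the non-negativity constraint
    return x
\end{verbatim}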
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.5\linewidth]{figures/convalgs.pdf}
\caption[VMAT Optimization with Traditional Algorithms]{Global DAO with geometry-informed segment initialization uses a simple initialization that is refined with stochastic DAO. Local DAO with FMO-informed segment initialization uses a decent initialization with optimized fluences at individual control points, then refines the solution with a gradient-based search. Notice that neither of these algorithms is expected to achieve the true global optimum.}
\label{traditionalalgs}
\end{figure}
Despite their desirable tumor coverage and normal tissue sparing, VMAT plans are problematically slow to create. Performing a single iteration of the VMAT planning process with modern computing equipment can take up to 10 minutes, and creating an acceptable plan often requires multiple iterations of dosimetrists adjusting clinical objectives <|cite_start|> (Reference: Multi-GPU implementation of a VMAT treatment plan optimization algorithm: PURPOSE
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics.
METHODS
The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method.
RESULTS
The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes.
CONCLUSIONS
The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.) <|cite_end|>. Because of this time constraint, VMAT planning is a major clinical bottleneck. Additionally, creating multiple treatment plans per patient, as is required in adaptive radiotherapy, is not logistically feasible (at least on a large scale). The average radiotherapy patient must therefore have their VMAT plan created in advance; because the patient's anatomy can change between planning and treatment, this incurs the risk of overdosing healthy tissue while underdosing the target. We suspect that VMAT plan creation is slow for the same reason it is so successful: the variety of possible treatment plans is so vast that searching for an appropriate plan is almost prohibitively expensive.
\subsection{Unsupervised Learning and VAEs}
The purpose of unsupervised learning algorithms is to identify and exploit structure within data. The textbook example of unsupervised machine learning is principal component analysis (PCA). In PCA, the directions of maximum variation within the training data are identified with singular value decomposition <|cite_start|> (Reference: Principal Component Analysis: Principal component analysis (PCA) can be applied to vectorial data and is probably the most common method to reduce the dimensionality of data for compression and visualization. It determines the dimensions of largest and smallest variance of the data, referred to as the principal components, which can then be used to discard the small variance dimensions for dimensionality reduction or select the two or three largest variance dimensions for visualization. For instance, if you have one thousand 100-dimensional data points, PCA might be used to reduce the dimensionality of the data down to 10 without loosing too much information, which corresponds to a compression by 90%. In some cases it is also the small variance directions that are of interest. PCA can also be used to normalize data such, that it has unit variance in all directions, a process called whitening or sphering, to eliminate correlations.) <|cite_end|>. In 1991, Turk and Pentland famously developed a primitive facial recognition software based on PCA <|cite_start|> (Reference: Eigenfaces for recognition: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.) <|cite_end|>. In their work, they identify the first $N$ principal components of a facial image dataset and use those images to define basis vectors in a low-dimensional ``face space.'' Rather than characterizing new facial images with the regular pixel representation, images are projected into the principal subspace for comparison. With this technique, low-level information such as image noise texture is generally ignored, and distinct images of the same person's face will have similar face space coordinates following their projection. 
Notice that this method, unlike supervised techniques, does not require the person being identified by the software to appear in the training dataset. PCA learns the ``interesting'' directions in the high-dimensional image space, and these directions can be applied to new images at any time.
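To make this projection step concrete, the following is a minimal sketch using scikit-learn's PCA; the image size, component count, and placeholder data are assumptions chosen purely for illustration, not values taken from the eigenface work discussed above.
\begin{verbatim}
# Minimal eigenface-style projection with scikit-learn's PCA.
# The image size, component count and placeholder data are illustrative
# assumptions only.
import numpy as np
from sklearn.decomposition import PCA

faces = np.random.rand(500, 64 * 64)      # (n_images, n_pixels) training faces
pca = PCA(n_components=50)                 # dimension of the "face space"
pca.fit(faces)                             # learn the principal directions

new_face = np.random.rand(1, 64 * 64)      # an image never seen during fitting
coords = pca.transform(new_face)           # its coordinates in face space

# Recognition reduces to a nearest-neighbour search over these coordinates.
train_coords = pca.transform(faces)
closest = int(np.argmin(np.linalg.norm(train_coords - coords, axis=1)))
\end{verbatim}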
The canonical example of unsupervised learning with neural networks is deep autoencoders (AE) <|cite_start|> (Reference: Deep Learning: Deep learning (DL) is a high dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state-of-the-art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of applications in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather then inferential and can be viewed as a black-box methodology for high-dimensional function estimation.) <|cite_end|>. Similar to PCA, the objective of training an autoencoder is to learn an efficient method of representing high-dimensional data with low-dimensional coordinates. The task required of AEs is to compress the training data into a specified number of dimensions and reconstruct them accurately (Figure \ref{catsvae}a). Formally, we require that model weights minimize reconstruction loss which is often implemented as the mean squared error between training examples $x^{(j)}$ and their reconstructions $\hat{x}^{(j)}(w) = \mbox{AE}(x^{(j)}; w)$,
\begin{equation}
L(w) = \frac{1}{2N} \sum_{j=1}^N \sum_{i} (x_i^{(j)} - \hat{x}_i^{(j)}(w))^2 \ .
\end{equation}
Intuitively, AEs are required to learn the most efficient way to encode the data they are shown. The compression architecture is usually tailored to the training data; for example, image AEs might use many convolutional layers, while natural language AEs might use recurrent units.
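As an illustration of this reconstruction objective, the sketch below takes a single training step of a small fully connected autoencoder under the mean squared error loss; the layer widths, latent size, and optimizer settings are assumptions made for readability rather than an architecture used in this work.
\begin{verbatim}
# Minimal fully connected autoencoder trained with the MSE reconstruction
# loss described above. Layer widths, latent size and optimizer settings
# are illustrative assumptions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)       # compression to latent coordinates
        return self.decoder(z)    # reconstruction in the input space

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()            # implements the reconstruction loss L(w)

x = torch.rand(32, 784)           # placeholder batch of training examples
optimizer.zero_grad()
loss = loss_fn(model(x), x)
loss.backward()
optimizer.step()
\end{verbatim}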
\begin{figure}[!hbt]
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ae_fig.pdf}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vae_fig.pdf}
\end{subfigure}
\caption[Variational Autoencoders]{Autoencoders are a standard deep learning model for dimensionality reduction. \textbf{a)} An encoder and decoder are used to compress the image and reconstruct it. \textbf{b)} Variational autoencoders map images to a ``cloud'' in the latent space, specified by coordinates and spreads. VAEs generally have better sampling properties for image generation than traditional AEs.}
\label{catsvae}
\end{figure}
One utility of unsupervised learning is for generative modeling. Once an autoencoder has learned the directions of significant variation in the data space, latent variables can be randomly generated and transformed into convincing examples in the input space. While conventional autoencoders are practically easy to construct and train, they are often inadequate for generating new high-dimensional data. For these AEs, we simply expect the models to learn generalization implicitly by showing them a high volume of training examples; there is no requirement for the models to organize the latent space efficiently. Consequently, two input data points which we might consider semantically similar, such as two images of the same person or cat, might be mapped to very different positions in the latent space with no effect on the overall loss function. Certain properties which are desirable for image synthesis, such as the ability to interpolate between points in the low-dimensional space, are not ensured in the conventional autoencoder framework. These issues are, however, addressed by variational autoencoders (VAEs).
VAEs are structurally similar to conventional AEs but introduce an extra step following compression. Rather than mapping each input to a specific point, images are mapped to entire regions of the low-dimensional space. For an $n$-dimensional latent space, each compressed data point is specified by an $n$-dimensional Gaussian having $n$ means and standard deviations. When training the model, the Gaussians are sampled from to find some low-dimensional point which is used to create a reconstructed image. This process ensures that the space between the mapped means is semantically meaningful and might be used to generate new, realistic images <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|>.
Additionally, during VAE training we require that the compressed data be organized sensibly, rather than crowded into one region of the latent space while the rest of the space remains semantically meaningless. This is encouraged by using a Kullback-Leibler (KL) divergence term in the loss function <|cite_start|> (Reference: {On Information and Sufficiency: The information deviation between any two finite measures cannot be increased by any statistical operations (Markov morphisms). It is invarient if and only if the morphism is sufficient for these two measures) <|cite_end|>. The KL divergence measures the dissimilarity between the distribution of compressed inputs and an $n$-dimensional Gaussian centered at zero. For compressed data $\{Q^{(j)}\}$, the VAE loss can be written
\begin{equation}\label{vaelossfn}
L(w) = \bigg[\frac{1}{2N} \sum_{j=1}^N \sum_{i} (x_i^{(j)} - \hat{x}_i^{(j)}(w))^2\bigg] + \alpha \bigg[\frac{1}{2} \sum_{k=1}^{\dim(Q)} \big(\sigma_{Q_k}^2 + \mu_{Q_k}^2 - \log \sigma_{Q_k}^2 - 1\big)\bigg] \ ,
\end{equation}
where $\alpha$ is a hyperparameter determining the relative contribution of the KL term.
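In code, the loss of Equation \ref{vaelossfn} is typically assembled from the encoder's predicted means and log-variances together with the reparameterization trick; the sketch below is a minimal illustration under that assumption, and the variable names are not taken from any particular implementation.
\begin{verbatim}
# Sketch of the VAE objective: reconstruction error plus an alpha-weighted
# KL penalty. The encoder/decoder are omitted; mu, log_var and alpha are
# assumed inputs, with sigma^2 = exp(log_var) for numerical convenience.
import torch

def reparameterize(mu, log_var):
    # Sample z ~ N(mu, sigma^2) in a differentiable way.
    std = torch.exp(0.5 * log_var)
    return mu + std * torch.randn_like(std)

def vae_loss(x, x_hat, mu, log_var, alpha=1.0):
    # Mean squared reconstruction error, averaged over the batch.
    recon = 0.5 * torch.mean(torch.sum((x - x_hat) ** 2, dim=1))
    # KL divergence between N(mu, sigma^2) and the standard normal prior.
    kl = 0.5 * torch.mean(
        torch.sum(log_var.exp() + mu ** 2 - log_var - 1.0, dim=1))
    return recon + alpha * kl
\end{verbatim}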
Besides AEs and VAEs there are many other popular deep unsupervised learning methods which can be used for dimensionality reduction. One very common method is with generative adversarial networks (GAN) <|cite_start|> (Reference: Generative Adversarial Networks: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.) <|cite_end|>. GANs, however, are famously unstable during training, so for this work we focus on autoencoders exclusively <|cite_start|> (Reference: On Convergence and Stability of GANs: We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.) <|cite_end|>. Beyond GANs, there are several other mechanisms for dimensionality reduction including flows and hybrid approaches such as adversarial autoencoders <|cite_start|> (Reference: Glow: Generative Flow with Invertible 1x1 Convolutions: Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. 
The code for our model is available at https://github.com/openai/glow) <|cite_end|> <|cite_start|> (Reference: Adversarial Autoencoders: In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial autoencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization. We performed experiments on MNIST, Street View House Numbers and Toronto Face datasets and show that adversarial autoencoders achieve competitive results in generative modeling and semi-supervised classification tasks.) <|cite_end|>. For future implementations of this work, we expect that these models might better describe the practical VMAT subspace and are worth exploring.
\subsection{Arc Therapy and the Curse of Dimensionality}
In previous implementations of arc therapy optimization, the problem is addressed in the fully-parameterized arc planning space. This is problematic because the size of the search space grows exponentially with the number of optimization parameters. For VMAT optimization, a typical optimizer might be required to select 80 leaf positions on 2 MLC banks at each of 80 control points distributed throughout the arc. This yields $80 \times 2 \times 80 = 12{,}800$ optimization parameters. If $N$ discrete positions are allowed for every leaf, the optimizer must navigate a space of $N^{12,800}$ unique VMAT plans. Of course, modern VMAT algorithms are more sophisticated than a brute-force search of $100^{12,800} = 10^{25,600}$ values, or whatever $N$ is allowed. Clever initializations and gradient-based searches, such as those in the FMO-informed algorithm, provide solutions with relatively satisfactory accuracy and runtime. Yet the problem of effectively navigating the solution space persists; gradient-based methods are confined to the neighborhood of their initialization point, while global stochastic methods are inefficient in searching such a large space.
Rather than suggesting a better initialization or search routine in the fully-parameterized VMAT space, we would like to simplify the problem altogether with pattern-based dimensionality reduction. In the proposed framework, we reduce the effective dimension of arc therapy data prior to plan optimization to facilitate a more computationally efficient search. In doing so, the optimizer works in a space with inherently fewer local optima and is less susceptible to trapping. Additionally, the volume of the learned space is significantly smaller, only permitting $N^d$ possible solutions for latent space dimension $d$. In this setting, the optimizer is limited to only consider regions of the full space where VMAT plans have existed historically.
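As a minimal sketch of this idea, assume a trained decoder mapping latent coordinates to fully parameterized plans and a clinical dose objective are available; the search then only ever proposes points in the low-dimensional space. The decoder, the objective, and the naive random search below are hypothetical placeholders for illustration.
\begin{verbatim}
# Sketch of searching the learned latent space instead of the full
# 12,800-dimensional plan space. `decoder` and `dose_objective` are
# hypothetical placeholders for a trained model and a clinical objective.
import numpy as np

def optimize_in_latent_space(decoder, dose_objective,
                             latent_dim=16, n_iter=1000):
    best_z, best_score = None, np.inf
    for _ in range(n_iter):
        z = np.random.randn(latent_dim)   # propose latent coordinates
        plan = decoder(z)                 # expand to full VMAT parameters
        score = dose_objective(plan)      # lower is better
        if score < best_score:
            best_z, best_score = z, score
    return decoder(best_z), best_score
\end{verbatim}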
When talking about AEs, and particularly VAEs, it is common to describe the high-dimensional data generating distribution as a manifold\footnote{Although the term ``manifold'' has a precise definition in mathematics, it is used rather loosely in the context of machine learning. However, the central concept of the space being \textit{connected} is shared between the two communities.}. While it is useful to suggest that VMAT data are simply overparameterized, it carries much more weight to postulate that the data populate a small, connected region of the VMAT plan space which is characteristic of a manifold (Figure \ref{manifold}).
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.35\linewidth]{manifold.pdf}
\caption[Arc Manifold]{Here, the raw VMAT data is symbolized with points described by three coordinates. While these values may be necessary for the physical description of machine parameters, all of the interesting variation happens along one direction specified by a spiral. We conjecture that historical arc therapy data exists on a connected surface like this, whose coordinates we might learn with an autoencoder algorithm.}
\label{manifold}
\end{figure}
This is commonly an implicit hypothesis when using any sort of machine learning for dimensionality reduction. The proposition that arc therapy solutions inhabit a low-dimensional manifold embedded within the space that we use to describe them appears to be a novelty of this work. We cite two sources of evidence for this conjecture.
First, arc data is densely concentrated within a small region of the full planning space. Sampling a random point in 12,800-dimensional space and finding a clinically reasonable arc is no more likely than finding an image of a cat\footnote{We have no extra information about the relative size of the cat and arc therapy subspaces, but we imagine they are comparable.}. The region inhabited by clinically viable VMAT arcs represents only a small portion of the total solution space. This is shown in Figure \ref{datasetfig}.
Second, we expect that the space between example plans is occupied by other reasonable plans. Additionally, we anticipate that VMAT solutions for alike disease sites and anatomies will be similarly connected. For example, the bladder-empty and bladder-full regions of planning space should be connected with a range of reasonable bladder-half-full plans. This particular property is exploited by VAEs, where the boundaries of compressed training examples are blurred by the stochastic encoding procedure.
\subsection{Latent Space Optimization in Literature}
The use of compressed representations of data often reduces computational and memory requirements. One relevant use of VAEs by Gomez-Bombarelli \textit{et al.} was to facilitate drug discovery <|cite_start|> (Reference: Automatic chemical design using a data-driven continuous representation of molecules: We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in the set of molecules with fewer that nine heavy atoms.) <|cite_end|>. In their framework, discrete representations of molecules are compressed using a deep AE, and the learned continuous representations are used to craft molecules with desirable properties (Figure \ref{drugdiscovery}). This technique has also been used to facilitate synthetic gene design, which faces the same problems of high-dimensionality and non-convexity as drug synthesis and VMAT optimization <|cite_start|> (Reference: Bayesian optimization for synthetic gene design: We address the problem of synthetic gene design using Bayesian optimization. The main issue when designing a gene is that the design space is defined in terms of long strings of characters of different lengths, which renders the optimization intractable. We propose a three-step approach to deal with this issue. First, we use a Gaussian process model to emulate the behavior of the cell. As inputs of the model, we use a set of biologically meaningful gene features, which allows us to define optimal gene designs rules. Based on the model outputs we define a multi-task acquisition function to optimize simultaneously severals aspects of interest. Finally, we define an evaluation function, which allow us to rank sets of candidate gene sequences that are coherent with the optimal design strategy. We illustrate the performance of this approach in a real gene design experiment with mammalian cells.) <|cite_end|>.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.6\linewidth]{figures/drugdiscovery.png}
\caption[Drug Discovery with AEs]{Gomez-Bombarelli \textit{et al.} suggest the drug discovery framework pictured here. \textbf{a)} A deep autoencoder is used to establish a low-dimensional continuous representation of molecules. \textbf{b)} The latent space can be navigated with an optimization algorithm to satisfy some objective $f$. Figure reproduced with permission from <|cite_start|> (Reference: Automatic chemical design using a data-driven continuous representation of molecules: We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in the set of molecules with fewer that nine heavy atoms.) <|cite_end|>.}
\label{drugdiscovery}
\end{figure}
Latent space optimization has also been studied in more formal settings such as in the works of Lu \textit{et al.}, Moriconi \textit{et al.} and Raponi \textit{et al.} <|cite_start|> (Reference: Structured variationally auto-encoded optimization: We tackle the problem of optimizing a black-box objective function defined over a highly-structured input space. This problem is ubiquitous in machine learning. Inferring the structure of a neural network or the Automatic Statistician (AS), where the kernel combination for a Gaussian process is optimized, are two of many possible examples. We use the AS as a case study to describe our approach, that can be easily generalized to other domains. We propose an Structure Generating Variational Auto-encoder (SG-VAE) to embed the original space of kernel combinations into some low-dimensional continuous manifold where Bayesian optimization (BO) ideas are used. This is possible when structural knowledge of the problem is available, which can be given via a simulator or any other form of generating potentially good solutions. The right exploration-exploitation balance is imposed by propagating into the search the uncertainty of the latent space of the SG-VAE, that is computed using variational inference. The key aspect of our approach is that the SG-VAE can be used to bias the search towards relevant regions, making it suitable for transfer learning tasks. Several experiments in various application domains are used to illustrate the utility and generality of the approach described in this work.) <|cite_end|> <|cite_start|> (Reference: High-dimensional Bayesian optimization using low-dimensional feature spaces: Bayesian optimization (BO) is a powerful approach for seeking the global optimum of expensive black-box functions and has proven successful for fine tuning hyper-parameters of machine learning models. However, BO is practically limited to optimizing 10--20 parameters. To scale BO to high dimensions, we usually make structural assumptions on the decomposition of the objective and\slash or exploit the intrinsic lower dimensionality of the problem, e.g. by using linear projections. We could achieve a higher compression rate with nonlinear projections, but learning these nonlinear embeddings typically requires much data. This contradicts the BO objective of a relatively small evaluation budget. To address this challenge, we propose to learn a low-dimensional feature space jointly with (a) the response surface and (b) a reconstruction mapping. Our approach allows for optimization of BO's acquisition function in the lower-dimensional subspace, which significantly simplifies the optimization problem. We reconstruct the original parameter space from the lower-dimensional subspace for evaluating the black-box function. For meaningful exploration, we solve a constrained optimization problem.) <|cite_end|> <|cite_start|> (Reference: High Dimensional Bayesian Optimization Assisted by Principal Component Analysis: Bayesian Optimization (BO) is a surrogate-assisted global optimization technique that has been successfully applied in various fields, e.g., automated machine learning and design optimization. Built upon a so-called infill-criterion and Gaussian Process regression (GPR), the BO technique suffers from a substantial computational complexity and hampered convergence rate as the dimension of the search spaces increases. Scaling up BO for high-dimensional optimization problems remains a challenging task. 
In this paper, we propose to tackle the scalability of BO by hybridizing it with a Principal Component Analysis (PCA), resulting in a novel PCA-assisted BO (PCA-BO) algorithm. Specifically, the PCA procedure learns a linear transformation from all the evaluated points during the run and selects dimensions in the transformed space according to the variability of evaluated points. We then construct the GPR model, and the infill-criterion in the space spanned by the selected dimensions. We assess the performance of our PCA-BO in terms of the empirical convergence rate and CPU time on multi-modal problems from the COCO benchmark framework. The experimental results show that PCA-BO can effectively reduce the CPU time incurred on high-dimensional problems, and maintains the convergence rate on problems with an adequate global structure. PCA-BO therefore provides a satisfactory trade-off between the convergence rate and computational efficiency opening new ways to benefit from the strength of BO approaches in high dimensional numerical optimization.) <|cite_end|>. It has been shown that optimization within low-dimensional learned spaces can outperform traditional techniques, especially for very high-dimensional problems. <|paper_end|> | [
"<|reference_start|> Treatment and dosimetric advantages between VMAT, IMRT, and helical tomotherapy in prostate cancer.: <|reference_end|>",
"<|reference_start|> On Convergence and Stability of GANs: We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions. <|reference_end|>",
"<|reference_start|> Automatic chemical design using a data-driven continuous representation of molecules: We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in the set of molecules with fewer that nine heavy atoms. <|reference_end|>",
"<|reference_start|> High-dimensional Bayesian optimization using low-dimensional feature spaces: Bayesian optimization (BO) is a powerful approach for seeking the global optimum of expensive black-box functions and has proven successful for fine tuning hyper-parameters of machine learning models. However, BO is practically limited to optimizing 10--20 parameters. To scale BO to high dimensions, we usually make structural assumptions on the decomposition of the objective and\\slash or exploit the intrinsic lower dimensionality of the problem, e.g. by using linear projections. We could achieve a higher compression rate with nonlinear projections, but learning these nonlinear embeddings typically requires much data. This contradicts the BO objective of a relatively small evaluation budget. To address this challenge, we propose to learn a low-dimensional feature space jointly with (a) the response surface and (b) a reconstruction mapping. Our approach allows for optimization of BO's acquisition function in the lower-dimensional subspace, which significantly simplifies the optimization problem. We reconstruct the original parameter space from the lower-dimensional subspace for evaluating the black-box function. For meaningful exploration, we solve a constrained optimization problem. <|reference_end|>"
] | [
6,
22,
25,
29
] | {"<|cite_1|>": "ss-2393325", "<|cite_2|>": "ss-2393326", "<|multi_cite_3_1|>": "ss-1853728", "<|multi_cite_3_2|>": "ss-2393327", "<|multi_cite_4_1|>": "ss-2393328", "<|multi_cite_4_2|>": "ss-2393329", "<|multi_cite_4_3|>": "ss-2393330", "<|cite_5|>": "ss-2393331", "<|multi_cite_6_1|>": "ss-1853731", "<|multi_cite_6_2|>": "ss-1853732", "<|cite_7|>": "ss-1853732", "<|cite_8|>": "ss-2393332", "<|cite_10|>": "ss-2393333", "<|multi_cite_11_1|>": "ss-2393331", "<|multi_cite_11_2|>": "ss-2393334", "<|cite_12|>": "ss-2393335", "<|cite_13|>": "ss-1291963", "<|cite_14|>": "ss-1294849", "<|cite_15|>": "arxiv-166644", "<|cite_16|>": "arxiv-54350", "<|cite_17|>": "ss-764705", "<|cite_18|>": "arxiv-62064", "<|cite_19|>": "arxiv-124599", "<|multi_cite_20_1|>": "arxiv-165197", "<|multi_cite_20_2|>": "arxiv-87418", "<|cite_21|>": "arxiv-107454", "<|cite_22|>": "ss-1544690", "<|cite_23|>": "arxiv-107454", "<|multi_cite_24_1|>": "ss-1181032", "<|multi_cite_24_2|>": "arxiv-193248", "<|multi_cite_24_3|>": "arxiv-275869"} |
1308.4648 | <|paper_start|> Title: PACE: Pattern Accurate Computationally Efficient Bootstrapping for Timely Discovery of Cyber-Security Concepts
Abstract: PACE: Pattern Accurate Computationally Efficient Bootstrapping for Timely Discovery of Cyber-Security Concepts: Public disclosure of important security information, such as knowledge of vulnerabilities or exploits, often occurs in blogs, tweets, mailing lists, and other online sources months before proper classification into structured databases. In order to facilitate timely discovery of such knowledge, we propose a novel semi-supervised learning algorithm, PACE, for identifying and classifying relevant entities in text sources. The main contribution of this paper is an enhancement of the traditional bootstrapping method for entity extraction by employing a time-memory trade-off that simultaneously circumvents a costly corpus search while strengthening pattern nomination, which should increase accuracy. An implementation in the cyber-security domain is discussed as well as challenges to Natural Language Processing imposed by the security domain.
Introduction
\label{intro}
This paper introduces PACE, a novel bootstrapping algorithm for entity extraction, and an application to cyber-security where domain concepts involving vulnerabilities and exploits are learned from public text sources.
Often vulnerabilities and exploits are discussed on a variety of obscure yet publicly accessible websites such as mailing lists, blogs, and Twitter feeds, long before proper classification into well-known, commonly referenced databases such as the National Vulnerability Database (NVD), Common Vulnerabilities and Exposures (CVE), Open Source Vulnerability Database (OSVDB), and Exploit-DB, and also before vendor patches or mitigations are released\footnote{\url{http://nvd.nist.gov/}, \url{http://cve.mitre.org/}, \url{http://www.osvdb.org/}, \url{http://www.exploit-db.com/}}.
As this valuable information is often buried in the World Wide Web, our overall goal is to automatically obtain this knowledge by extracting entities from appropriate text sources, with a target audience of security analysts.
While supervised methods for identifying and classifying entities have achieved very accurate results, this paper explores a semi-supervised technique, as no labeled training data in the cyber-security domain is available.
In order to ensure the appropriate concepts are learned, semi-supervised entity extraction, which almost exclusively is some form of bootstrapping <|cite_start|> (Reference: A survey of named entity recognition and classification: This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress.) <|cite_end|> is used with a small hand-labeled training set.
In particular, our algorithm, PACE, modifies the traditional bootstrapping approach by storing contextual information with known entity names.
The benefits of this new technique are threefold.
First, as patterns are only learned from the contextual instances observed with known entities, PACE allows more accurate pattern nomination than previous bootstrapping methods.
Secondly, it obviates the need for extremely large corpora, allowing PACE to be deployed in an operational setting where documents are streamed into the corpus under analysis and are discarded from the corpus after a fixed time.
Lastly, PACE uses a time-memory trade-off to circumvent a traversal of the corpus previously necessary for pattern nomination.
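To make this trade-off concrete, the sketch below records every observed context alongside the entity name that produced it during a single pass over the corpus, so that pattern nomination can later be performed from memory; the whitespace tokenization, fixed context window, and data structures are simplifying assumptions rather than the exact representation used by PACE.
\begin{verbatim}
# Illustrative sketch: during a single pass over the corpus, every context
# observed around a known entity name is stored, so pattern nomination never
# needs a second corpus traversal. Whitespace tokenization, a fixed context
# window and single-token entities are simplifying assumptions.
from collections import defaultdict

CONTEXT_WINDOW = 3  # tokens kept on each side of a matched entity

def collect_contexts(corpus, known_entities):
    contexts = defaultdict(list)
    for document in corpus:
        tokens = document.split()
        for i, token in enumerate(tokens):
            if token in known_entities:
                left = tuple(tokens[max(0, i - CONTEXT_WINDOW):i])
                right = tuple(tokens[i + 1:i + 1 + CONTEXT_WINDOW])
                contexts[token].append((left, right))
    return contexts  # candidate patterns are nominated from this table
\end{verbatim}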
Related Work
\label{background}
Previous work at the intersection of Natural Language Processing and cyber-security has been undertaken. In <|cite_start|> (Reference: Extracting Information about Security Vulnerabilities from Web Text: The Web is an important source of information about computer security threats, vulnerabilities and cyber attacks. We present initial work on developing a framework to detect and extract information about vulnerabilities and attacks from Web text. Our prototype system uses Wikitology, a general purpose knowledge base derived from Wikipedia, to extract concepts that describe specific vulnerabilities and attacks, map them to related concepts from DBpedia and generate machine understandable assertions. Such a framework will be useful in adding structure to already existing vulnerability descriptions as well as detecting new ones. We evaluate our approach against vulnerability descriptions from the National Vulnerability Database. Our results suggest that it can be useful in monitoring streams of text from social media or chat rooms to identify potential new attacks and vulnerabilities or to collect data on the spread and volume of existing ones.) <|cite_end|> a combination of databases, Wikipedia, and ``off-the-shelf'' tools is used to identify and classify vulnerability entities.
Very recent work by <|cite_start|> (Reference: Extracting Cybersecurity Related Linked Data from Text: The Web is typically our first source of information about new software vulnerabilities, exploits and cyber-attacks. Information is found in semi-structured vulnerability databases as well as in text from security bulletins, news reports, cyber security blogs and Internet chat rooms. It can be useful to cyber security systems if there is a way to recognize and extract relevant information and represent it as easily shared and integrated semantic data. We describe such an automatic framework that generates and publishes a RDF linked data representation of cyber security concepts and vulnerability descriptions extracted from the National Vulnerability Database and from text sources. A CRF-based system is used to identify cybersecurity-related entities, concepts and relations in text, which are then represented using custom ontologies for the cyber security domain and also mapped to objects in the DBpedia knowledge base. The resulting cyber security linked data collection can be used for many purposes, including automating early vulnerability identification, mitigation and prevention efforts.) <|cite_end|> addresses supervised learning for entity extraction in cyber-security, by hand-labeling a small corpus of training data and using an ``off-the-shelf'' entity recognizer. Our efforts also include a supervised approach, but we focus only on bootstrapping here.
\subsubsection{Bootstrapping Techniques for Entity Extraction}
Almost all semi-supervised techniques for entity extraction use a bootstrapping technique <|cite_start|> (Reference: A survey of named entity recognition and classification: This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress.) <|cite_end|>, and follow a similar overall cyclic structure. Given an entity type (such as ``president'' or ``vulnerability'' ) a bootstrapping algorithm requires a set of known entity names, a set of known patterns (this is the usually small training set referred to as ``seeds''), and a text corpus (usually large).
A \textit{pattern} is contextual information which gives evidence for identifying a segment of text as an instance of an entity.
For example, a pattern for identifying presidential names could be a proper noun directly followed by the words ``was inaugurated''.
Traditionally, bootstrapping searches the corpus for known patterns to produce candidate entity names, which are then scored so only the most trusted names are promoted to join the set of known entity names.
The corpus is also searched for instances of known entity names and candidate patterns are nominated from the observed context.
Candidate patterns are then scored to determine promotion.
This cycle may continue many times for a given corpus, as new patterns and entity names may be learned on each cycle <|cite_start|> (Reference: Toward never ending language learning: We report research toward a never-ending language learning system, focusing on a first implementation which learns to classify occurrences of noun phrases according to lexical categories such as “city” and “university.” Our experiments suggest that the accuracy of classifiers produced by semi-supervised learning can be improved by coupling the learning of multiple classes based on background knowledge about relationships between the classes (e.g., ”university” is mutually exclusive of ”company”, and is a subset of ”organization”).) <|cite_end|> <|cite_start|> (Reference: Extracting Patterns and Relations from the World Wide Web: ) <|cite_end|> <|cite_start|> (Reference: Coupled semi-supervised learning for information extraction: We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result.) <|cite_end|> <|cite_start|> (Reference: Bootstrapped Training of Event Extraction Classifiers: Most event extraction systems are trained with supervised learning and rely on a collection of annotated documents. Due to the domain-specificity of this task, event extraction systems must be retrained with new annotated data for each domain. In this paper, we propose a bootstrapping solution for event role filler extraction that requires minimal human supervision. We aim to rapidly train a state-of-the-art event extraction system using a small set of "seed nouns" for each event role, a collection of relevant (in-domain) and irrelevant (out-of-domain) texts, and a semantic dictionary. The experimental results show that the bootstrapped system outperforms previous weakly supervised event extraction systems on the MUC-4 data set, and achieves performance levels comparable to supervised training with 700 manually annotated documents.) <|cite_end|> <|cite_start|> (Reference: Learning to extract entities from labeled and unlabeled text: We describe and evaluate algorithms for learning to extract semantic classes from sentences in text documents, using the minimum of training information. The thesis of this research is that we can efficiently automate information extraction, that is, learn from tens of examples of labeled training data instead of requiring thousands, by exploiting redundancy and separability of the features noun-phrases and contexts. We exploit this redundancy and separability in two ways: (1) in the algorithms for learning semantic classes, and (2) in novel algorithms for active learning, leading to better extractors for a given amount of user labeling effort.) <|cite_end|>.
A diagram of the process is provided in Figure \ref{traditional}.
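For reference, one cycle of the traditional procedure in Figure \ref{traditional} can be written in a deliberately simplified form, where a pattern is reduced to a (left word, right word) context pair and promotion uses a simple count threshold; the cited systems differ precisely in these representation and scoring details, so the sketch below is illustrative only.
\begin{verbatim}
# One cycle of traditional bootstrapping: two corpus traversals, the first
# nominating entity names from known patterns and the second nominating
# patterns from known entity names. Patterns are simplified to
# (left word, right word) pairs and promotion uses a count threshold.
from collections import Counter

def bootstrap_cycle(corpus, known_entities, known_patterns,
                    min_entity_count=2, min_pattern_count=2):
    tokenized = [doc.split() for doc in corpus]

    # Traversal 1: apply known patterns to nominate candidate entity names.
    entity_counts = Counter()
    for tokens in tokenized:
        for i in range(1, len(tokens) - 1):
            if (tokens[i - 1], tokens[i + 1]) in known_patterns:
                entity_counts[tokens[i]] += 1
    known_entities |= {e for e, c in entity_counts.items()
                       if c >= min_entity_count}

    # Traversal 2: search for known entity names to nominate new patterns.
    pattern_counts = Counter()
    for tokens in tokenized:
        for i in range(1, len(tokens) - 1):
            if tokens[i] in known_entities:
                pattern_counts[(tokens[i - 1], tokens[i + 1])] += 1
    known_patterns |= {p for p, c in pattern_counts.items()
                       if c >= min_pattern_count}

    return known_entities, known_patterns
\end{verbatim}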
Although outside the scope of the current paper, it is commonplace for this algorithmic setup to be implemented for relation extraction, often simultaneously with entity extraction <|cite_start|> (Reference: Snowball: A prototype system for extracting relations from large text collections.: Text documents often hide valuable structured data. For example, a collection of newspaper articles might contain information on the location of the headquarters of a number of organizations. If we need to nd the location of the headquarters of, say, Microsoft, we could try and use traditional information-retrieval techniques for nding documents that contain the answer to our query. Alternatively, we could answer such a query more precisely if we somehow had available a table listing all the organization-location pairs that are mentioned in our document collection. One could view the extraction process as automatically building a materialized view over the unstructured text data. In this demo we present an interactive prototype of our Snowball system for extracting relations from collections of plain-text documents with minimal human participation. Our method builds on the DIPRE idea introduced by Brin [3]. Our system and techniques were presented in detail in [2] and [1].) <|cite_end|> <|cite_start|> (Reference: Toward never ending language learning: We report research toward a never-ending language learning system, focusing on a first implementation which learns to classify occurrences of noun phrases according to lexical categories such as “city” and “university.” Our experiments suggest that the accuracy of classifiers produced by semi-supervised learning can be improved by coupling the learning of multiple classes based on background knowledge about relationships between the classes (e.g., ”university” is mutually exclusive of ”company”, and is a subset of ”organization”).) <|cite_end|> <|cite_start|> (Reference: Extracting Patterns and Relations from the World Wide Web: ) <|cite_end|> <|cite_start|> (Reference: Coupled semi-supervised learning for information extraction: We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result.) <|cite_end|>.
\begin{figure}
\includegraphics[width=3.4in]{cmu_chart2.png}
\caption{A cycle in traditional bootstrapping involves 2 traversals through the corpus, one to nominate new patterns, one to nominate new entity names.}
\label{traditional}
\end{figure}
\begin{figure}
\includegraphics[width=3.4in]{pace_chart2.png}
\caption{A cycle in the new algorithm involves one traversal through the corpus for nominating (entity name, context) pairs. Storing entity names with their observed context facilitates more robust pattern selection without a second corpus search.}
\label{our_chart}
\end{figure}
While all previous bootstrapping algorithms for entity extraction follow the general workflow discussed above, variations in implementation details have yielded worthwhile results.
In <|cite_start|> (Reference: Coupled semi-supervised learning for information extraction: We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result.) <|cite_end|>, a predetermined ontology of entity and relation types is used to impose constraints on the learned instances that results in greater accuracy. Active learning can be incorporated by periodically requesting human feedback in order to omit spuriously learned patterns and entities, as such drifting is a common problem for bootstrapping techniques <|cite_start|> (Reference: Active learning for information extraction via bootstrapping: Text learning algorithms are reasonably successful when provided with enough labeled or annotated training examples. For instance, text classifiers [13, 10, 21, 4, 18] reach high accuracy from large sets of class-labeled documents; information extraction algorithms [3, 15, 19, 8] perform well when given many tagged documents or large sets of rules as input. However, creating these training sets becomes tedious and expensive, since typically they must be labeled by a person.) <|cite_end|>.
The advantage of bootstrapping is the minimal required labeled data, which facilitates its use in almost any domain.
On the other hand, the perennial Achilles' heel of bootstrapping, and more generally of any machine learning with minimal training data, is the acquisition of spurious results, causing extracted terms to drift from the desired entity type.
Traction is gained in the details of the scoring algorithms and pattern selection.
We note that the overall goal of many previous implementations of bootstrapping is to create a comprehensive list of names for a given type; for example, see <|cite_start|> (Reference: Toward never ending language learning: We report research toward a never-ending language learning system, focusing on a first implementation which learns to classify occurrences of noun phrases according to lexical categories such as “city” and “university.” Our experiments suggest that the accuracy of classifiers produced by semi-supervised learning can be improved by coupling the learning of multiple classes based on background knowledge about relationships between the classes (e.g., ”university” is mutually exclusive of ”company”, and is a subset of ”organization”).) <|cite_end|> <|cite_start|> (Reference: Extracting Patterns and Relations from the World Wide Web: ) <|cite_end|> <|cite_start|> (Reference: Coupled semi-supervised learning for information extraction: We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result.) <|cite_end|> <|cite_start|> (Reference: Bootstrapped Training of Event Extraction Classifiers: Most event extraction systems are trained with supervised learning and rely on a collection of annotated documents. Due to the domain-specificity of this task, event extraction systems must be retrained with new annotated data for each domain. In this paper, we propose a bootstrapping solution for event role filler extraction that requires minimal human supervision. We aim to rapidly train a state-of-the-art event extraction system using a small set of "seed nouns" for each event role, a collection of relevant (in-domain) and irrelevant (out-of-domain) texts, and a semantic dictionary. The experimental results show that the bootstrapped system outperforms previous weakly supervised event extraction systems on the MUC-4 data set, and achieves performance levels comparable to supervised training with 700 manually annotated documents.) <|cite_end|>.
Consequently, if a known entity name is overlooked in one document but found in another, the desired outcome is still accomplished. On the other hand, if concepts to be learned are dissimilar (e.g. including both ``president'' and ``athletic team'' as entity types), disparate text sources are necessary in the corpus (e.g. both sports news and historical documents), leading to semantic drift.
A large corpus is generally assumed, sometimes on the order of tens of millions of documents, and, in fact, relied upon. For example, in <|cite_start|> (Reference: Extracting Patterns and Relations from the World Wide Web: ) <|cite_end|> extremely stringent rules are imposed when nominating patterns so that only precise patterns are learned.
While this may reduce drift, recall suffers and the massive size of the corpus is needed for the system to learn anything. An additional limitation of such an approach is the computational and temporal cost incurred.
Brin's <|cite_start|> (Reference: Extracting Patterns and Relations from the World Wide Web: ) <|cite_end|> corpus included more than 24 million documents, on which no complete run of the algorithm was reported, and a smaller corpus of approximately 5 million documents took a few days to complete a single cycle.
Considering the motivating example discussed in Section~\ref{example}, the current goal is ideally to identify each occurrence of a security entity in a document for timely discovery of new vulnerabilities and exploits and to store this information in a database;
hence, this problem is more a labeling task than one of creating extensive lists of known entities.
This difference requires greater recall for each document.
While Section~\ref{challenges} discusses the many challenges imposed by the complexity of the entities in the security domain, working in exclusively one domain is an advantage; namely, considering only relevant documents will inhibit drift, which may be accomplished by using a decision classifier to discard irrelevant documents when populating the corpus, as in <|cite_start|> (Reference: Extracting Information about Security Vulnerabilities from Web Text: The Web is an important source of information about computer security threats, vulnerabilities and cyber attacks. We present initial work on developing a framework to detect and extract information about vulnerabilities and attacks from Web text. Our prototype system uses Wikitology, a general purpose knowledge base derived from Wikipedia, to extract concepts that describe specific vulnerabilities and attacks, map them to related concepts from DBpedia and generate machine understandable assertions. Such a framework will be useful in adding structure to already existing vulnerability descriptions as well as detecting new ones. We evaluate our approach against vulnerability descriptions from the National Vulnerability Database. Our results suggest that it can be useful in monitoring streams of text from social media or chat rooms to identify potential new attacks and vulnerabilities or to collect data on the spread and volume of existing ones.) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> Extracting Information about Security Vulnerabilities from Web Text: The Web is an important source of information about computer security threats, vulnerabilities and cyber attacks. We present initial work on developing a framework to detect and extract information about vulnerabilities and attacks from Web text. Our prototype system uses Wikitology, a general purpose knowledge base derived from Wikipedia, to extract concepts that describe specific vulnerabilities and attacks, map them to related concepts from DBpedia and generate machine understandable assertions. Such a framework will be useful in adding structure to already existing vulnerability descriptions as well as detecting new ones. We evaluate our approach against vulnerability descriptions from the National Vulnerability Database. Our results suggest that it can be useful in monitoring streams of text from social media or chat rooms to identify potential new attacks and vulnerabilities or to collect data on the spread and volume of existing ones. <|reference_end|>",
"<|reference_start|> Toward never ending language learning: We report research toward a never-ending language learning system, focusing on a first implementation which learns to classify occurrences of noun phrases according to lexical categories such as “city” and “university.” Our experiments suggest that the accuracy of classifiers produced by semi-supervised learning can be improved by coupling the learning of multiple classes based on background knowledge about relationships between the classes (e.g., ”university” is mutually exclusive of ”company”, and is a subset of ”organization”). <|reference_end|>",
"<|reference_start|> Extracting Patterns and Relations from the World Wide Web: <|reference_end|>",
"<|reference_start|> Coupled semi-supervised learning for information extraction: We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result. <|reference_end|>"
] | [
1,
10,
11,
17
] | {"<|cite_1|>": "ss-1935502", "<|cite_2|>": "ss-776271", "<|cite_3|>": "ss-776272", "<|cite_4|>": "ss-1935502", "<|multi_cite_5_1|>": "ss-1713805", "<|multi_cite_5_2|>": "ss-779859", "<|multi_cite_5_3|>": "ss-973844", "<|multi_cite_5_4|>": "ss-1927751", "<|multi_cite_5_5|>": "ss-1429959", "<|multi_cite_6_1|>": "ss-2181430", "<|multi_cite_6_2|>": "ss-1713805", "<|multi_cite_6_3|>": "ss-779859", "<|multi_cite_6_4|>": "ss-973844", "<|cite_7|>": "ss-973844", "<|cite_8|>": "ss-2183774", "<|multi_cite_9_1|>": "ss-1713805", "<|multi_cite_9_2|>": "ss-779859", "<|multi_cite_9_3|>": "ss-973844", "<|multi_cite_9_5|>": "ss-1927751", "<|cite_10|>": "ss-779859", "<|cite_11|>": "ss-779859", "<|cite_12|>": "ss-776271"} |
2202.05199 | <|paper_start|> Title: A Human-Centered Machine-Learning Approach for Muscle-Tendon Junction Tracking in Ultrasound Images
Abstract: A Human-Centered Machine-Learning Approach for Muscle-Tendon Junction Tracking in Ultrasound Images: Biomechanical and clinical gait research observes muscles and tendons in limbs to study their functions and behaviour. Therefore, movements of distinct anatomical landmarks, such as muscle-tendon junctions, are frequently measured. We propose a reliable and time efficient machine-learning approach to track these junctions in ultrasound videos and support clinical biomechanists in gait analysis. In order to facilitate this process, a method based on deep-learning was introduced. We gathered an extensive dataset, covering 3 functional movements, 2 muscles, collected on 123 healthy and 38 impaired subjects with 3 different ultrasound systems, and providing a total of 66864 annotated ultrasound images in our network training. Furthermore, we used data collected across independent laboratories and curated by researchers with varying levels of experience. For the evaluation of our method a diverse test-set was selected that is independently verified by four specialists. We show that our model achieves similar performance scores to the four human specialists in identifying the muscle-tendon junction position. Our method provides time-efficient tracking of muscle-tendon junctions, with prediction times of up to 0.078 seconds per frame (approx. 100 times faster than manual labeling). All our codes, trained models and test-set were made publicly available and our model is provided as a free-to-use online service on https://deepmtj.org/.
Introduction
\label{sec:introduction}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.35\textwidth, clip, trim={0 0 0 0}]{{./figures/MTJ_Location_v2}.pdf}
\vspace{0.5cm}
\caption{Three examples of the MTJ in the medial gastrocnemius (MG) muscle-tendon unit, recorded with three different instruments. The MTJ is indicated by a red cross. We specify contrast-to-noise ratios (CNR) for all considered instruments. The video frame in Figure \textbf{a} was collected with an Aixplorer V6 US system (Aixplorer). This figure also shows the embedding (yellow color) of the MTJ in the triceps surae muscle-tendon unit (MG, lateral gastrocnemius (LG, not shown), Soleus and Achilles Tendon (AT)). The white arrow in Figure \textbf{b} indicates the direction of principal movement of the MTJ from distal to proximal in the x,y-coordinate system. This video frame was collected with an Esaote MyLab 60 US system (Esaote). Figure \textbf{c} shows an image of the MTJ collected with a Telemed ArtUs US system (Telemed).}
\label{fig:MTJ}
\vspace{0.5cm}
\end{figure}
\input{tables/data_studies.tex}
\IEEEPARstart{D}{uring} human locomotion, muscle-tendon complexes of lower limbs are under cyclic concentric and eccentric stress <|cite_start|> (Reference: Stretch-shortening cycle: a powerful model to study normal and fatigued muscle.: ) <|cite_end|>. Within these units, muscles and tendons have different properties <|cite_start|> (Reference: Energy-saving mechanisms in walking and running: Energy can be saved in terrestrial locomotion in many different ways. The maximum shortening speeds (Vmax) of the muscles can be adjusted to their optimum values for the tasks required of them. The moments exerted by the muscles at different joints can be adjusted to keep the ground force in line with the leg so that muscles do not work against each other. The joints of the legs can be kept as straight as possible, minimizing muscle forces and work requirements. Walking gaits should be selected at low Froude numbers (a dimensionless speed parameter) and running gaits at high Froude numbers. Tendon and other springs can be used to store elastic strain energy and to return it by elastic recoil. This paper aims to show how these energy-saving mechanisms work and to what extent mammals exploit them. Arguments based on our rather limited knowledge of the relationship between the mechanical performance of muscle and its metabolic energy consumption are used throughout. They suggest that muscles that are optimally adapted for their tasks in running should do positive work with constant efficiency.) <|cite_end|>, contribute differently to external loading <|cite_start|> (Reference: Interactions between the human gastrocnemius muscle and the Achilles tendon during incline, level and decline locomotion: SUMMARY Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h-1) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h-1) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.) <|cite_end|> and adapt differently to stimuli <|cite_start|> (Reference: {Human tendon behaviour and adaptation, in vivo.: Tendon properties contribute to the complex interaction of the central nervous system, muscle–tendon unit and bony structures to produce joint movement. 
Until recently limited information on human tendon behaviour in vivo was available; however, novel methodological advancements have enabled new insights to be gained in this area. The present review summarizes the progress made with respect to human tendon and aponeurosis function in vivo, and how tendons adapt to ageing, loading and unloading conditions. During low tensile loading or with passive lengthening not only the muscle is elongated, but also the tendon undergoes significant length changes, which may have implications for reflex responses. During active loading, the length change of the tendon far exceeds that of the aponeurosis, indicating that the aponeurosis may more effectively transfer force onto the tendon, which lengthens and stores elastic energy subsequently released during unloading, in a spring‐like manner. In fact, data recently obtained in vivo confirm that, during walking, the human Achilles tendon provides elastic strain energy that can decrease the energy cost of locomotion. Also, new experimental evidence shows that, contrary to earlier beliefs, the metabolic activity in human tendon is remarkably high and this affords the tendon the ability to adapt to changing demands. With ageing and disuse there is a reduction in tendon stiffness, which can be mitigated with resistance exercises. Such adaptations seem advantageous for maintaining movement rapidity, reducing tendon stress and risk of injury, and possibly, for enabling muscles to operate closer to the optimum region of the length–tension relationship.) <|cite_end|>. For instance, imbalances in muscle strength or tendon stiffness may impede efficient interplay during locomotion <|cite_start|> (Reference: {Muscular force in running turkeys: the economy of minimizing work.: During running, muscles and tendons must absorb and release mechanical work to maintain the cyclic movements of the body and limbs, while also providing enough force to support the weight of the body. Direct measurements of force and fiber length in the lateral gastrocnemius muscle of running turkeys revealed that the stretch and recoil of tendon and muscle springs supply mechanical work while active muscle fibers produce high forces. During level running, the active muscle shortens little and performs little work but provides the force necessary to support body weight economically. Running economy is improved by muscles that act as active struts rather than working machines.) <|cite_end|> or lead to injuries <|cite_start|> (Reference: {Individualized Muscle-Tendon Assessment and Training: The interaction of muscle and tendon is of major importance for movement performance and a balanced development of muscle strength and tendon stiffness could protect athletes from overuse injury. However, muscle and tendon do not necessarily adapt in a uniform manner during a training process. The development of a diagnostic routine to assess both the strength capacity of muscle and the mechanical properties of tendons would enable the detection of muscle-tendon imbalances, indicate if the training should target muscle strength or tendon stiffness development and allow for the precise prescription of training loads to optimize tendon adaptation. This perspective article discusses a framework of individualized muscle-tendon assessment and training and outlines a methodological approach for the patellar tendon.) <|cite_end|>. 
In clinical populations, knowledge of alterations in muscles and tendons due to short or long-term treatments (e.g., physical therapy or surgeries) is crucial for developing efficient therapeutic strategies <|cite_start|> (Reference: {Impact of Altered Gastrocnemius Morphometrics and Fascicle Behavior on Walking Patterns in Children With Spastic Cerebral Palsy: Spastic cerebral palsy (SCP) affects neural control, deteriorates muscle morphometrics, and may progressively impair functional walking ability. Upon passive testing, gastrocnemius medialis (GM) muscle bellies or fascicles are typically shorter, thinner, and less extensible. Relationships between muscle and gait parameters might help to understand gait pathology and pathogenesis of spastic muscles. The current aim was to link resting and dynamic GM morphometrics and contractile fascicle behavior (both excursion and velocity) during walking to determinants of gait. We explored the associations between gait variables and ultrasonography of the GM muscle belly captured during rest and during gait in children with SCP [n = 15, gross motor function classification system (GMFCS) levels I and II, age: 7–16 years] and age-matched healthy peers (n = 17). The SCP children’s plantar flexors were 27% weaker. They walked 12% slower with more knee flexion produced 42% less peak ankle push-off power (all p < 0.05) and 7/15 landed on their forefoot. During the stance phase, fascicles in SCP on average operated on 9% shorter length (normalized to rest length) and displayed less and slower fascicle shortening (37 and 30.6%, respectively) during push-off (all p ≤ 0.024). Correlation analyses in SCP patients revealed that (1) longer-resting fascicles and thicker muscle bellies are positively correlated with walking speed and negatively to knee flexion (r = 0.60–0.69, p < 0.0127) but not to better ankle kinematics; (2) reduced muscle strength was associated with the extent of eccentric fascicle excursion (r = −0.57, p = 0.015); and (3) a shorter operating length of the fascicles was correlated with push-off power (r = −0.58, p = 0.013). Only in controls, a correlation (r = 0.61, p = 0.0054) between slower fascicle shortening velocity and push-off power was found. Our results indicate that a thicker gastrocnemius muscle belly and longer gastrocnemius muscle fascicles may be reasonable morphometric properties that should be targeted in interventions for individuals with SCP, since GM muscle atrophy may be related to decreases in walking speed and undesired knee flexion during gait. Furthermore, children with SCP and weaker gastrocnemius muscle may be more susceptible to chronic eccentric muscle overloading. The relationship between shorter operating length of the fascicles and push-off power may further support the idea of a compensation mechanism for the longer sarcomeres found in children with SCP. Nevertheless, more studies are needed to support our explorative findings.) <|cite_end|>.
To investigate tissue behaviour in lower limbs (e.g., the triceps surae muscle tendon unit) and to distinguish between individual contributions of muscles and tendons, their junctions are usually visualized using ultrasound (US) imaging (Fig. \ref{fig:MTJ}) while their displacements are tracked with various methods <|cite_start|> (Reference: {An algorithm for automated analysis of ultrasound images to measure tendon excursion in vivo: The accuracy of an algorithm for the automated tracking of tendon excursion from ultrasound images was tested in three experiments. Because the automated method could not be tested against direct measurements of tendon excursion in vivo, an indirect validation procedure was employed. In one experiment, a wire "phantom" was moved a known distance across the ultrasound probe and the automated tracking results were compared with the known distance. The excursion of the musculotendinous junction of the gastrocnemius during frontal and sagittal plane movement of the ankle was assessed in a single cadaver specimen both by manual tracking and with a cable extensometer sutured to the gastrocnemius muscle. A third experiment involved estimation of Achilles tendon excursion in vivo with both manual and automated tracking. Root mean squared (RMS) error was calculated between pairs of measurements after each test. Mean RMS errors of less than 1 mm were observed for the phantom experiments. For the in vitro experiment, mean RMS errors of 8-9% of the total tendon excursion were observed. Mean RMS errors of 6-8% of the total tendon excursion were found in vivo. The results indicate that the proposed algorithm accurately tracks Achilles tendon excursion, but further testing is necessary to determine its general applicability.) <|cite_end|> <|cite_start|> (Reference: {Automatic Myotendinous Junction Tracking in Ultrasound Images with Phase-Based Segmentation: Displacement of the myotendinous junction (MTJ) obtained by ultrasound imaging is crucial to quantify the interactive length changes of muscles and tendons for understanding the mechanics and pathological conditions of the muscle-tendon unit during motion. However, the lack of a reliable automatic measurement method restricts its application in human motion analysis. This paper presents an automated measurement of MTJ displacement using prior knowledge on tendinous tissues and MTJ, precluding the influence of nontendinous components on the estimation of MTJ displacement. It is based on the perception of tendinous features from musculoskeletal ultrasound images using Radon transform and thresholding methods, with information about the symmetric measures obtained from phase congruency. The displacement of MTJ is achieved by tracking manually marked points on tendinous tissues with the Lucas-Kanade optical flow algorithm applied over the segmented MTJ region. The performance of this method was evaluated on ultrasound images of the gastrocnemius obtained from 10 healthy subjects (26.0 ± 2.9 years of age). Waveform similarity between the manual and automatic measurements was assessed by calculating the overall similarity with the coefficient of multiple correlation (CMC). In vivo experiments demonstrated that MTJ tracking with the proposed method (CMC = 0.97 ± 0.02) was more consistent with the manual measurements than existing optical flow tracking methods (CMC = 0.79 ± 0.11). 
This study demonstrated that the proposed method was robust to the interference of nontendinous components, resulting in a more reliable measurement of MTJ displacement, which may facilitate further research and applications related to the architectural change of muscles and tendons.) <|cite_end|> <|cite_start|> (Reference: {Semi-automatic methods for tracking the medial gastrocnemius muscle–tendon junction using ultrasound: a validation study: What is the central question of this study? Is the proposed semi‐automatic algorithm suitable for tracking the medial gastrocnemius muscle–tendon junction in ultrasound images collected during passive and active conditions? What is the main finding and its importance? The validation of a method allowing efficient tracking of the muscle–tendon junction in both passive and active conditions, in healthy as well as in pathological conditions. This method was tested in common acquisition conditions and the developed software made freely available.) <|cite_end|> <|cite_start|> (Reference: Automatic Tracking of the Muscle Tendon Junction in Healthy and Impaired Subjects using Deep Learning: Recording muscle tendon junction displacements during movement, allows separate investigation of the muscle and tendon behaviour, respectively. In order to provide a fully-automatic tracking method, we employ a novel deep learning approach to detect the position of the muscle tendon junction in ultrasound images. We utilize the attention mechanism to enable the network to focus on relevant regions and to obtain a better interpretation of the results. Our data set consists of a large cohort of 79 healthy subjects and 28 subjects with movement limitations performing passive full range of motion and maximum contraction movements. Our trained network shows robust detection of the muscle tendon junction on a diverse data set of varying quality with a mean absolute error of 2.55$\pm$1 mm. We show that our approach can be applied for various subjects and can be operated in real-time. The complete software package is available for open-source use via: https://github.com/luuleitner/deepMTJ) <|cite_end|> <|cite_start|> (Reference: Automated analysis of medial gastrocnemius muscle-tendon junction displacements in heathy young adults during isolated contractions and walking using deep neural networks: ) <|cite_end|>. The triceps surae (Fig. \ref{fig:MTJ}. a) is a major contributor to human locomotion. It consists of three heads: the medial (MG) and lateral (LG) gastrocnemius as well as the soleus (SO) muscle. Each individual head is connected via a muscle-tendon junction (MTJ) to the Achilles tendon (AT). Thus, the MTJ provides a form and force-locked interconnection between contracting muscles and passively acting tendons <|cite_start|> (Reference: {The development of the myotendinous junction. A review: The myotendinous junction (MTJ) is a complex specialized region located at the muscle-tendon interface that represents the primary site of force transmission. Despite their different embryologic origins, muscle and tendon morphogenesis occurs in close spatial and temporal association. After muscle attachment, muscle and tendon constitute a dynamic and functional integrated unit that transduces muscle contraction force to the skeletal system. We review here the current understanding of MTJ formation describing changes during morphogenesis and focusing on the crosstalk between muscle and tendon cells that leads to the development of a functional MTJ. 
Molecules involved in the formation of the linkage, both at the tendon side and at the muscle side of the junction are described. Much of this knowledge comes from studies using different animal models such as mice, zebrafish and Drosophila where powerful methods for in vivo imaging and genetic manipulations can be used to enlighten this developmental process.) <|cite_end|>. In US images, the MTJ is clearly visible due to the change of acoustic impedance in muscles and tendons. Moreover, due to its definable maximum displacement and primary longitudinal (distal to proximal) travel direction (Fig. \ref{fig:MTJ}. b), the area of this anatomical feature is covered by standard-sized linear US arrays <|cite_start|> (Reference: {Ultrasound as a Tool to Study Muscle–Tendon Functions during Locomotion: A Systematic Review of Applications: Movement science investigating muscle and tendon functions during locomotion utilizes commercial ultrasound imagers built for medical applications. These limit biomechanics research due to their form factor, range of view, and spatio-temporal resolution. This review systematically investigates the technical aspects of applying ultrasound as a research tool to investigate human and animal locomotion. It provides an overview on the ultrasound systems used and of their operating parameters. We present measured fascicle velocities and discuss the results with respect to operating frame rates during recording. Furthermore, we derive why muscle and tendon functions should be recorded with a frame rate of at least 150 Hz and a range of view of 250 mm. Moreover, we analyze why and how the development of better ultrasound observation devices at the hierarchical level of muscles and tendons can support biomechanics research. Additionally, we present recent technological advances and their possible application. We provide a list of recommendations for the development of a more advanced ultrasound sensor system class targeting biomechanical applications. Looking to the future, mobile, ultrafast ultrasound hardware technologies create immense opportunities to expand the existing knowledge of human and animal movement.) <|cite_end|>. Therefore, this method is widely used and it improves general understanding of muscle-tendon properties and their behaviour in healthy <|cite_start|> (Reference: {Effect of training-induced changes in Achilles tendon stiffness on muscle-tendon behavior during landing: During rapid deceleration of the body, tendons buffer part of the elongation of the muscle–tendon unit (MTU), enabling safe energy dissipation via eccentric muscle contraction. Yet, the influence of changes in tendon stiffness within the physiological range upon these lengthening contractions is unknown. This study aimed to examine the effect of training-induced stiffening of the Achilles tendon on triceps surae muscle–tendon behavior during a landing task. Twenty-one male subjects were assigned to either a 10-week resistance-training program consisting of single-leg isometric plantarflexion (n = 11) or to a non-training control group (n = 10). Before and after the training period, plantarflexion force, peak Achilles tendon strain and stiffness were measured during isometric contractions, using a combination of dynamometry, ultrasound and kinematics data. 
Additionally, testing included a step-landing task, during which joint mechanics and lengths of gastrocnemius and soleus fascicles, Achilles tendon, and MTU were determined using synchronized ultrasound, kinematics and kinetics data collection. After training, plantarflexion strength and Achilles tendon stiffness increased (15 and 18%, respectively), and tendon strain during landing remained similar. Likewise, lengthening and negative work produced by the gastrocnemius MTU did not change detectably. However, in the training group, gastrocnemius fascicle length was offset (8%) to a longer length at touch down and, surprisingly, fascicle lengthening and velocity were reduced by 27 and 21%, respectively. These changes were not observed for soleus fascicles when accounting for variation in task execution between tests. These results indicate that a training-induced increase in tendon stiffness does not noticeably affect the buffering action of the tendon when the MTU is rapidly stretched. Reductions in gastrocnemius fascicle lengthening and lengthening velocity during landing occurred independently from tendon strain. Future studies are required to provide insight into the mechanisms underpinning these observations and their influence on energy dissipation.) <|cite_end|> and impaired subjects <|cite_start|> (Reference: {Medial gastrocnemius and soleus muscle-tendon unit, fascicle, and tendon interaction during walking in children with cerebral palsy: This study investigates the in vivo function of the medial gastrocnemius and soleus muscle‐tendon units (MTU), fascicles, and tendons during walking in children with cerebral palsy (CP) and an equinus gait pattern.) <|cite_end|> <|cite_start|> (Reference: {Semi-automatic methods for tracking the medial gastrocnemius muscle–tendon junction using ultrasound: a validation study: What is the central question of this study? Is the proposed semi‐automatic algorithm suitable for tracking the medial gastrocnemius muscle–tendon junction in ultrasound images collected during passive and active conditions? What is the main finding and its importance? The validation of a method allowing efficient tracking of the muscle–tendon junction in both passive and active conditions, in healthy as well as in pathological conditions. This method was tested in common acquisition conditions and the developed software made freely available.) <|cite_end|>.
However, musculoskeletal US imaging depends on operators <|cite_start|> (Reference: {Is musculoskeletal ultrasonography an operator-dependent method or a fast and reliably teachable diagnostic tool? Interreader agreements of three ultrasonographers with different training levels: Objectives. To assess interreader agreements and a learning curve between three (senior, junior, and beginner) different experienced musculoskeletal ultrasonographers. Senior served as the imaging “gold standard”. Methods. Clinically dominant joints (finger, shoulder, knee, tibiotalar, and talonavicular) of 15 rheumatoid arthritis (RA) patients were examined by three different experienced ultrasonographers (senior 10 years, junior 10 months, and beginner one month). Each patient's ultrasonographic findings were reported unaware of the other investigators' results. κ coefficients, percentage agreements, sensitivities, and specificities were calculated. Results. 120 joints of 15 RA patients were evaluated. Comparing junior's and beginner's results each to the senior's findings, the overall κ for all examined joints was 0.83 (93%) for junior and 0.43 (76%) for beginner. Regarding the different joints, junior's findings correlate very well with the senior's findings (finger joints: κ = 0.82; shoulder: κ = 0.9; knee: κ = 0.74; tibiotalar joint: κ = 0.84; talonavicular joint: κ = 0.84) while beginner's findings just showed fair to moderate agreements (finger joints: κ = 0.4; shoulder: κ = 0.42; knee: κ = 0.4; tibiotalar joint: κ = 0.59; talonavicular joint: κ = 0.35). In total, beginner's results clearly improved from κ = 0.34 (agreement of 67%) at baseline to κ = 0.78 (agreement of 89%) at the end of the evaluation period. Conclusions. Ultrasonographic evaluation of a ten-month-experienced investigator in comparison to a senior ultrasonographer was of substantial agreement. Agreements between a beginner and a highly experienced ultrasonographer were only fair at the beginning, but during the study including ultrasonographical sessions of 15 RA patients, the beginner clearly improved in musculoskeletal ultrasonography.) <|cite_end|>. In particular, image interpretation requires trained specialists. Moreover, investigating displacements of the MTJ in US images typically needs handcrafted labeling. For this reason, several semi-automatic and automatic methods to track MTJs have been proposed <|cite_start|> (Reference: {An algorithm for automated analysis of ultrasound images to measure tendon excursion in vivo: The accuracy of an algorithm for the automated tracking of tendon excursion from ultrasound images was tested in three experiments. Because the automated method could not be tested against direct measurements of tendon excursion in vivo, an indirect validation procedure was employed. In one experiment, a wire "phantom" was moved a known distance across the ultrasound probe and the automated tracking results were compared with the known distance. The excursion of the musculotendinous junction of the gastrocnemius during frontal and sagittal plane movement of the ankle was assessed in a single cadaver specimen both by manual tracking and with a cable extensometer sutured to the gastrocnemius muscle. A third experiment involved estimation of Achilles tendon excursion in vivo with both manual and automated tracking. Root mean squared (RMS) error was calculated between pairs of measurements after each test. Mean RMS errors of less than 1 mm were observed for the phantom experiments. 
For the in vitro experiment, mean RMS errors of 8-9% of the total tendon excursion were observed. Mean RMS errors of 6-8% of the total tendon excursion were found in vivo. The results indicate that the proposed algorithm accurately tracks Achilles tendon excursion, but further testing is necessary to determine its general applicability.) <|cite_end|> <|cite_start|> (Reference: {Automatic Myotendinous Junction Tracking in Ultrasound Images with Phase-Based Segmentation: Displacement of the myotendinous junction (MTJ) obtained by ultrasound imaging is crucial to quantify the interactive length changes of muscles and tendons for understanding the mechanics and pathological conditions of the muscle-tendon unit during motion. However, the lack of a reliable automatic measurement method restricts its application in human motion analysis. This paper presents an automated measurement of MTJ displacement using prior knowledge on tendinous tissues and MTJ, precluding the influence of nontendinous components on the estimation of MTJ displacement. It is based on the perception of tendinous features from musculoskeletal ultrasound images using Radon transform and thresholding methods, with information about the symmetric measures obtained from phase congruency. The displacement of MTJ is achieved by tracking manually marked points on tendinous tissues with the Lucas-Kanade optical flow algorithm applied over the segmented MTJ region. The performance of this method was evaluated on ultrasound images of the gastrocnemius obtained from 10 healthy subjects (26.0 ± 2.9 years of age). Waveform similarity between the manual and automatic measurements was assessed by calculating the overall similarity with the coefficient of multiple correlation (CMC). In vivo experiments demonstrated that MTJ tracking with the proposed method (CMC = 0.97 ± 0.02) was more consistent with the manual measurements than existing optical flow tracking methods (CMC = 0.79 ± 0.11). This study demonstrated that the proposed method was robust to the interference of nontendinous components, resulting in a more reliable measurement of MTJ displacement, which may facilitate further research and applications related to the architectural change of muscles and tendons.) <|cite_end|> <|cite_start|> (Reference: {Semi-automatic methods for tracking the medial gastrocnemius muscle–tendon junction using ultrasound: a validation study: What is the central question of this study? Is the proposed semi‐automatic algorithm suitable for tracking the medial gastrocnemius muscle–tendon junction in ultrasound images collected during passive and active conditions? What is the main finding and its importance? The validation of a method allowing efficient tracking of the muscle–tendon junction in both passive and active conditions, in healthy as well as in pathological conditions. This method was tested in common acquisition conditions and the developed software made freely available.) <|cite_end|> <|cite_start|> (Reference: Quantifying mechanical loading and elastic strain energy of the human Achilles tendon during walking and running: ) <|cite_end|>. Image analysis in biomechanical and clinical US studies relies largely on computer vision algorithms <|cite_start|> (Reference: Ultrasound imaging to assess skeletal muscle architecture during movements: a systematic review of methods, reliability, and challenges: BACKGROUND
B-mode ultrasound is often used to quantify muscle architecture during movements.
OBJECTIVES
1) Systematically review the reliability of fascicle length (FL) and pennation angles (PA) measured using ultrasound during movements involving voluntary contractions, 2) systematically review the methods used in studies reporting reliability, discuss associated challenges, and provide recommendations to improve the reliability and validity of dynamic ultrasound measurements, 3) provide an overview of computational approaches for quantifying fascicle architecture, their validity, agreement with manual quantification of fascicle architecture, and advantages and drawbacks.
METHODS
Three databases were searched until June 2019. Studies among healthy human individuals aged 17-85 years that investigated the reliability of FL or PA in lower extremity muscles during isoinertial movements and written in English were included.
RESULTS
Thirty studies (n=340 participants) were included for reliability analyses. Between-session reliability as measured by coefficient of multiple correlations (CMC) and coefficient of variation (CV) was FL CMC: 0.89-0.96; CV: 8.3%, and PA CMC: 0.87-0.90; CV: 4.5-9.6%. Within-session reliability was FL CMC: 0.82-0.99; CV: 0.0-6.7%, and PA CMC: 0.91; CV: 0.0-15.0%. Manual analysis reliability was FL CMC: 0.89-0.96; CV: 0.0-15.9%; PA CMC: 0.84-0.90; CV: 2.0-9.8%. Computational analysis FL CMC was 0.82-0.99 and PA CV was 14.0-15.0%. Eighteen computational approaches were identified and these generally showed high agreement with manual analysis and high validity compared to phantoms or synthetic images.
CONCLUSIONS
B-mode ultrasound is a reliable method to quantify fascicle architecture during movement. Additionally, computational approaches can provide a reliable and valid estimation of fascicle architecture.) <|cite_end|>. Applied on noisy, real-world US motion data, these optical-flow or matching based methods are prone to errors <|cite_start|> (Reference: Optical flow estimation using high frame rate sequences: Gradient-based optical flow estimation methods such as the Lucas-Kanade (1981) method work well for scenes with small displacements but fail when objects move with large displacements. Hierarchical matching-based methods do not suffer from large displacements but are less accurate. By utilizing the high speed imaging capability of CMOS image sensors, the frame rate can be increased to obtain more accurate optical flow with wide range of scene velocities in real time. Further, by integrating the memory and processing with the sensor on the same chip, optical flow estimation using high frame rate sequences can be performed without unduly increasing the off-chip data rate. The paper describes a method for obtaining high accuracy optical flow at a standard frame rate using high frame rate sequences. The Lucas-Kanade method is used to obtain optical flow estimates at high frame rate, which are then accumulated and refined to obtain optical flow estimates at a standard frame rate. The method is tested on video sequences synthetically generated by perspective warping. The results demonstrate significant improvements in optical flow estimation accuracy with moderate memory and computational power requirements.) <|cite_end|>. This is often due to low frame-rate recordings or poor image qualities of standard medical US systems <|cite_start|> (Reference: {Ultrasound as a Tool to Study Muscle–Tendon Functions during Locomotion: A Systematic Review of Applications: Movement science investigating muscle and tendon functions during locomotion utilizes commercial ultrasound imagers built for medical applications. These limit biomechanics research due to their form factor, range of view, and spatio-temporal resolution. This review systematically investigates the technical aspects of applying ultrasound as a research tool to investigate human and animal locomotion. It provides an overview on the ultrasound systems used and of their operating parameters. We present measured fascicle velocities and discuss the results with respect to operating frame rates during recording. Furthermore, we derive why muscle and tendon functions should be recorded with a frame rate of at least 150 Hz and a range of view of 250 mm. Moreover, we analyze why and how the development of better ultrasound observation devices at the hierarchical level of muscles and tendons can support biomechanics research. Additionally, we present recent technological advances and their possible application. We provide a list of recommendations for the development of a more advanced ultrasound sensor system class targeting biomechanical applications. Looking to the future, mobile, ultrafast ultrasound hardware technologies create immense opportunities to expand the existing knowledge of human and animal movement.) <|cite_end|>. One common parameter to quantify US image quality is the contrast-to-noise-ratio (CNR) <|cite_start|> (Reference: Resolution in ultrasound imaging: Ultrasound scanning is now utilized in all aspects of anaesthesia, critical care, and pain management. 
Typical applications include determination of left ventricular function and cardiac output, assessment of haemodynamic instability, assistance with difficult venous access, and facilitation of accurate neural block. – 3 One aspect of competency in ultrasound imaging includes an understanding of how images can be displayed optimally. This article discusses three main aspects of the physics of diagnostic ultrasound, that is to say, spatial resolution, temporal resolution, and contrast resolution; it utilizes examples from perioperative echocardiography to illustrate these principles.) <|cite_end|>.
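To make this metric concrete, a minimal sketch of one common CNR formulation is given below: the absolute difference of the mean intensities of a signal region and a background region, normalized by their pooled standard deviation. The function name, the region coordinates and the synthetic frame are illustrative assumptions and do not reproduce the definition or the values reported for the instruments in this study.
\begin{verbatim}
import numpy as np

def contrast_to_noise_ratio(image, roi, background):
    """CNR between a region of interest and a background region.

    roi and background are (row_start, row_stop, col_start, col_stop)
    index tuples on a grayscale B-mode frame. One common definition
    normalizes the absolute mean difference by the pooled standard
    deviation of both regions.
    """
    r0, r1, c0, c1 = roi
    b0, b1, b2, b3 = background
    signal = image[r0:r1, c0:c1].astype(np.float64)
    noise = image[b0:b1, b2:b3].astype(np.float64)
    pooled_std = np.sqrt(signal.var() + noise.var())
    return float(abs(signal.mean() - noise.mean()) / pooled_std)

# Illustrative usage on a synthetic frame with a brighter, tendon-like patch.
frame = np.random.rand(480, 640) * 60.0   # speckle-like background
frame[100:160, 200:260] += 120.0          # hyperechoic region
print(contrast_to_noise_ratio(frame,
                              roi=(100, 160, 200, 260),
                              background=(300, 360, 200, 260)))
\end{verbatim}
Other formulations normalize by the background standard deviation alone; whichever variant is chosen should be applied consistently when comparing frames from different instruments.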
Recently, machine learning solutions for automatic detection and tracking of musculoskeletal features in biomechanical applications have been developed <|cite_start|> (Reference: Fully automated analysis of muscle architecture from B-mode ultrasound images with deep learning: B-mode ultrasound is commonly used to image musculoskeletal tissues, but one major bottleneck is data interpretation, and analyses of muscle thickness, pennation angle and fascicle length are often still performed manually. In this study we trained deep neural networks (based on U-net) to detect muscle fascicles and aponeuroses using a set of labelled musculoskeletal ultrasound images. We then compared neural network predictions on new, unseen images to those obtained via manual analysis and two existing semi/automated analysis approaches (SMA and Ultratrack). With a GPU, inference time for a single image with the new approach was around 0.7s, compared to 4.6s with a CPU. Our method detects the locations of the superficial and deep aponeuroses, as well as multiple fascicle fragments per image. For single images, the method gave similar results to those produced by a non-trainable automated method (SMA; mean difference in fascicle length: 1.1 mm) or human manual analysis (mean difference: 2.1 mm). Between-method differences in pennation angle were within 1$^\circ$, and mean differences in muscle thickness were less than 0.2 mm. Similarly, for videos, there was strong overlap between the results produced with Ultratrack and our method, with a mean ICC of 0.73, despite the fact that the analysed trials included hundreds of frames. Our method is fully automated and open source, and can estimate fascicle length, pennation angle and muscle thickness from single images or videos, as well as from multiple superficial muscles. We also provide all necessary code and training data for custom model development.) <|cite_end|>. These methods improve performance because they can learn to extract salient features, such as anatomical landmarks, directly from annotated input images. Therefore, a neural network is trained to find a mapping between input images and manually set labels <|cite_start|> (Reference: Deep Learning: Deep learning (DL) is a high dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state-of-the-art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of applications in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather then inferential and can be viewed as a black-box methodology for high-dimensional function estimation.) <|cite_end|> <|cite_start|> (Reference: Deep Learning: Deep learning (DL) is a high dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state-of-the-art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of applications in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather then inferential and can be viewed as a black-box methodology for high-dimensional function estimation.) <|cite_end|>. 
With a sufficiently large dataset, neural networks can successfully map novel data (generalization). During training with real world images, the network learns to neglect noise or instrumental errors, yielding robust and accurate results compared with classical computer vision applications <|cite_start|> (Reference: {Improved Tracking of Muscle Tendon Junctions in Ultrasound Images Using Speckle Reduction: Ultrasound imaging enables in-vivo investigations of muscle and tendon behaviour during human movement. Individual contributions of muscles and tendons to the behaviour of the whole muscle-tendon unit during locomotion are versatile. Therefore, movements of distinct landmarks, such as muscle tendon junctions are recorded and tracked in order to investigate internal dynamics of the muscle-tendon complex. In this study, we use a semi-automatic tracking method based on image segmentation and investigate how tracking accuracy can be improved using a sticks filter. We demonstrate that a speckle reduction decreases the root-mean-square error of the tracking result by up to 78.1%, depending on the chosen window size of the sticks filter.) <|cite_end|>. Leitner \textit{et al.} <|cite_start|> (Reference: Automatic Tracking of the Muscle Tendon Junction in Healthy and Impaired Subjects using Deep Learning: Recording muscle tendon junction displacements during movement, allows separate investigation of the muscle and tendon behaviour, respectively. In order to provide a fully-automatic tracking method, we employ a novel deep learning approach to detect the position of the muscle tendon junction in ultrasound images. We utilize the attention mechanism to enable the network to focus on relevant regions and to obtain a better interpretation of the results. Our data set consists of a large cohort of 79 healthy subjects and 28 subjects with movement limitations performing passive full range of motion and maximum contraction movements. Our trained network shows robust detection of the muscle tendon junction on a diverse data set of varying quality with a mean absolute error of 2.55$\pm$1 mm. We show that our approach can be applied for various subjects and can be operated in real-time. The complete software package is available for open-source use via: https://github.com/luuleitner/deepMTJ) <|cite_end|> for example, used data from an Esaote US system (Esaote SpA, Genoa, Italy) and a ResNet model architecture <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. 
Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|> with an attention mechanism <|cite_start|> (Reference: Learn To Pay Attention: We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must \textit{alone} be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.) <|cite_end|> to investigate MTJ predictions on 107 subjects using 7200 manually annotated labels. They found that an inclusion of healthy and impaired patients into the training dataset improved overall performance of their model. Krupenevich \textit{et al.} <|cite_start|> (Reference: Automated analysis of medial gastrocnemius muscle-tendon junction displacements in heathy young adults during isolated contractions and walking using deep neural networks: ) <|cite_end|> focused their work on the trackability of MTJs across several isometric movements and complex functional tasks such as walking. They trained a MobileNetV2 <|cite_start|> (Reference: MobileNetV2: Inverted Residuals and Linear Bottlenecks: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. 
We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters) <|cite_end|> architecture on 1200 manually annotated ground truth labels, collected from 15 subjects that were walking, with a Telemed US system (Telemed UAB, Vilnius, Lithuania).
These newly emerging machine-learning applications for MTJ tracking show that deep neural networks provide strong performance in identifying the exact MTJ positions in US images, even for small training datasets, and independent of subjects and movements. However, there is still a lack of evidence on how these algorithms perform on noisy inter-laboratory, inter-observer data and how well they generalize to diverse settings. Furthermore, previous MTJ tracking neural network models were evaluated on inaccessible test-sets, and the labels of both the test-set and the training dataset were identified by the same person. This neglects unavoidable positional variations of different observers and introduces potential bias. In particular, machine learning benchmarks need to include more than one clinical specialist to generate reliable reference test-set labels <|cite_start|> (Reference: Deep learning in spatiotemporal cardiac imaging: A review of methodologies and clinical usability: ) <|cite_end|> <|cite_start|> (Reference: Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization: Recently, we have witnessed great progress in the field of medical imaging classification by adopting deep neural networks. However, the recent advanced models still require accessing sufficiently large and representative datasets for training, which is often unfeasible in clinically realistic environments. When trained on limited datasets, the deep neural network is lack of generalization capability, as the trained deep neural network on data within a certain distribution (e.g. the data captured by a certain device vendor or patient population) may not be able to generalize to the data with another distribution. In this paper, we introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification. Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding with a novel linear-dependency regularization term to capture the shareable information among medical data collected from different domains. As a result, the trained neural network is expected to equip with better generalization capability to the "unseen" medical data. Experimental results on two challenging medical imaging classification tasks indicate that our method can achieve better cross-domain generalization capability compared with state-of-the-art baselines.) <|cite_end|> <|cite_start|> (Reference: Automated deep-neural-network surveillance of cranial images for acute neurologic events: ) <|cite_end|> with low noise <|cite_start|> (Reference: Deep learning with noisy labels: exploring techniques and remedies in medical image analysis: Supervised training of deep learning models requires large labeled datasets. There is a growing interest in obtaining such datasets for medical image analysis applications. However, the impact of label noise has not received sufficient attention. Recent studies have shown that label noise can significantly impact the performance of deep learning models in many machine learning and computer vision applications. This is especially concerning for medical applications, where datasets are typically small, labeling requires domain expertise and suffers from high inter- and intra-observer variability, and erroneous predictions may influence decisions that directly impact human health.
In this paper, we first review the state-of-the-art in handling label noise in deep learning. Then, we review studies that have dealt with label noise in deep learning for medical image analysis. Our review shows that recent progress on handling label noise in deep learning has gone largely unnoticed by the medical image analysis community. To help achieve a better understanding of the extent of the problem and its potential remedies, we conducted experiments with three medical imaging datasets with different types of label noise, where we investigated several existing strategies and developed new methods to combat the negative effect of label noise. Based on the results of these experiments and our review of the literature, we have made recommendations on methods that can be used to alleviate the effects of different types of label noise on deep models trained for medical image analysis. We hope that this article helps the medical image analysis researchers and developers in choosing and devising new techniques that effectively handle label noise in deep learning.) <|cite_end|>. Moreover, predictions across multiple-domains (e.g. data collected from different instruments) are key in generalizing machine learning algorithms <|cite_start|> (Reference: Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study: Background There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task. Methods and findings A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855–0.866) on the joint MSH–NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927–0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745–0.885, P = 0.001). 
To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH–NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system–specific biases. Conclusion Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.) <|cite_end|>. For example, deep-learning has shown excellent performance if training and test-set data are drawn from the same underlying distribution. However, large domain shifts in data (e.g. using data from machines of different vendors) often cause significant performance impairments. In the case of the MTJ trackers proposed by Leitner \textit{et al.} <|cite_start|> (Reference: Automatic Tracking of the Muscle Tendon Junction in Healthy and Impaired Subjects using Deep Learning: Recording muscle tendon junction displacements during movement, allows separate investigation of the muscle and tendon behaviour, respectively. In order to provide a fully-automatic tracking method, we employ a novel deep learning approach to detect the position of the muscle tendon junction in ultrasound images. We utilize the attention mechanism to enable the network to focus on relevant regions and to obtain a better interpretation of the results. Our data set consists of a large cohort of 79 healthy subjects and 28 subjects with movement limitations performing passive full range of motion and maximum contraction movements. Our trained network shows robust detection of the muscle tendon junction on a diverse data set of varying quality with a mean absolute error of 2.55$\pm$1 mm. We show that our approach can be applied for various subjects and can be operated in real-time. The complete software package is available for open-source use via: https://github.com/luuleitner/deepMTJ) <|cite_end|> and Krupenevich \textit{et al.} <|cite_start|> (Reference: Automated analysis of medial gastrocnemius muscle-tendon junction displacements in heathy young adults during isolated contractions and walking using deep neural networks: ) <|cite_end|>, the evaluation and training datasets come from the same US instrument. Therefore, these networks might fail to provide similar performance on datasets obtained from other US machine types.
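As an illustration of why several specialists are valuable for building a reference test-set, the sketch below averages hypothetical annotations from four raters into a consensus label and quantifies both the inter-observer spread and a model's deviation from the consensus in millimetres. The coordinates and the pixel spacing are assumed values, and this is not the exact aggregation procedure used in the present study.
\begin{verbatim}
import numpy as np

# Hypothetical MTJ annotations (x, y) in pixels from four specialists
# for a single ultrasound frame.
specialist_labels = np.array([
    [212.0, 148.0],
    [208.5, 151.0],
    [215.0, 146.5],
    [210.0, 150.0],
])
pixel_spacing_mm = 0.1  # assumed isotropic pixel size

# Consensus reference label: mean of the specialist annotations.
reference = specialist_labels.mean(axis=0)

# Inter-observer spread: distance of each specialist from the consensus.
spread_mm = np.linalg.norm(specialist_labels - reference, axis=1) * pixel_spacing_mm
print("reference (px):", reference)
print("inter-observer deviation (mm): mean %.2f, max %.2f"
      % (spread_mm.mean(), spread_mm.max()))

# A model prediction is then scored against the same consensus.
prediction = np.array([214.0, 149.0])
print("model deviation (mm): %.2f"
      % (np.linalg.norm(prediction - reference) * pixel_spacing_mm))
\end{verbatim}
Reporting the model deviation alongside the inter-observer spread makes it possible to judge whether a tracker already operates within the variability of human specialists.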
In this work, we present a novel deep learning approach for the detection of MTJs in ultrasound images. We curate a large and diverse training dataset in order to provide a universal MTJ detection method that is independent of the US instrument used, the movement, the muscle region, and noise arising from experimental setups (Sect. \ref{sec:methods.data}). We use a deep neural network with a U-Net architecture and an attention mechanism to predict the position of the MTJ as a probability density function (Sect. \ref{sec:methods.model}). An objective test set was created and curated by four independent specialists to evaluate the average deviation of our model from specialist labels (Sect. \ref{sec:results}). In addition, we estimate the generalization to novel datasets and discuss the capabilities of our method (Sect. \ref{sec:generalization}).
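As an illustration of the localization step, the short sketch below shows one common way (a soft-argmax over a predicted probability map) to turn such a 2-D density output into an (x, y) MTJ coordinate; this is an illustrative example rather than our implementation, and the PyTorch tensor layout is an assumption.
\begin{verbatim}
import torch

def heatmap_to_coordinate(heatmap):
    # heatmap: (batch, 1, H, W) network output interpreted as an
    # unnormalised log-probability map of the MTJ position.
    b, _, h, w = heatmap.shape
    prob = torch.softmax(heatmap.reshape(b, -1), dim=1).reshape(b, h, w)
    ys = torch.arange(h, dtype=prob.dtype).reshape(1, h, 1)
    xs = torch.arange(w, dtype=prob.dtype).reshape(1, 1, w)
    # Expected value of the spatial distribution (soft-argmax).
    y = (prob * ys).sum(dim=(1, 2))
    x = (prob * xs).sum(dim=(1, 2))
    return torch.stack([x, y], dim=1)  # (batch, 2) pixel coordinates
\end{verbatim}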
\input{tables/data_labels.tex} <|paper_end|> | [
"<|reference_start|> Stretch-shortening cycle: a powerful model to study normal and fatigued muscle.: <|reference_end|>",
"<|reference_start|> {Individualized Muscle-Tendon Assessment and Training: The interaction of muscle and tendon is of major importance for movement performance and a balanced development of muscle strength and tendon stiffness could protect athletes from overuse injury. However, muscle and tendon do not necessarily adapt in a uniform manner during a training process. The development of a diagnostic routine to assess both the strength capacity of muscle and the mechanical properties of tendons would enable the detection of muscle-tendon imbalances, indicate if the training should target muscle strength or tendon stiffness development and allow for the precise prescription of training loads to optimize tendon adaptation. This perspective article discusses a framework of individualized muscle-tendon assessment and training and outlines a methodological approach for the patellar tendon. <|reference_end|>",
"<|reference_start|> {Medial gastrocnemius and soleus muscle-tendon unit, fascicle, and tendon interaction during walking in children with cerebral palsy: This study investigates the in vivo function of the medial gastrocnemius and soleus muscle‐tendon units (MTU), fascicles, and tendons during walking in children with cerebral palsy (CP) and an equinus gait pattern. <|reference_end|>",
"<|reference_start|> Optical flow estimation using high frame rate sequences: Gradient-based optical flow estimation methods such as the Lucas-Kanade (1981) method work well for scenes with small displacements but fail when objects move with large displacements. Hierarchical matching-based methods do not suffer from large displacements but are less accurate. By utilizing the high speed imaging capability of CMOS image sensors, the frame rate can be increased to obtain more accurate optical flow with wide range of scene velocities in real time. Further, by integrating the memory and processing with the sensor on the same chip, optical flow estimation using high frame rate sequences can be performed without unduly increasing the off-chip data rate. The paper describes a method for obtaining high accuracy optical flow at a standard frame rate using high frame rate sequences. The Lucas-Kanade method is used to obtain optical flow estimates at high frame rate, which are then accumulated and refined to obtain optical flow estimates at a standard frame rate. The method is tested on video sequences synthetically generated by perspective warping. The results demonstrate significant improvements in optical flow estimation accuracy with moderate memory and computational power requirements. <|reference_end|>"
] | [
0,
5,
15,
23
] | {"<|cite_1|>": "ss-1768032", "<|cite_2|>": "ss-1549487", "<|cite_3|>": "ss-1659376", "<|cite_4|>": "ss-1768033", "<|cite_5|>": "ss-1658175", "<|cite_6|>": "ss-1768034", "<|cite_7|>": "ss-1768035", "<|multi_cite_8_1|>": "ss-1768036", "<|multi_cite_8_2|>": "ss-1768037", "<|multi_cite_8_3|>": "ss-1768038", "<|multi_cite_8_4|>": "arxiv-263538", "<|multi_cite_8_5|>": "ss-1768039", "<|cite_9|>": "ss-1768040", "<|cite_10|>": "ss-704252", "<|cite_11|>": "ss-1768041", "<|multi_cite_12_1|>": "ss-704250", "<|multi_cite_12_2|>": "ss-1768038", "<|cite_13|>": "ss-1768042", "<|multi_cite_14_1|>": "ss-1768036", "<|multi_cite_14_2|>": "ss-1768037", "<|multi_cite_14_3|>": "ss-1768038", "<|multi_cite_14_4|>": "ss-1768043", "<|cite_15|>": "ss-1474637", "<|cite_16|>": "ss-1728769", "<|cite_17|>": "ss-704252", "<|cite_18|>": "ss-1178215", "<|cite_19|>": "arxiv-289137", "<|multi_cite_20_1|>": "arxiv-166644", "<|multi_cite_20_2|>": "arxiv-166644", "<|cite_21|>": "ss-1768044", "<|cite_22|>": "arxiv-263538", "<|cite_23|>": "arxiv-88870", "<|cite_24|>": "arxiv-154052", "<|cite_25|>": "ss-1768039", "<|cite_26|>": "arxiv-145365", "<|multi_cite_27_1|>": "ss-942173", "<|multi_cite_27_2|>": "arxiv-292244", "<|multi_cite_27_3|>": "ss-686109", "<|cite_28|>": "arxiv-238039", "<|cite_29|>": "ss-784150", "<|cite_30|>": "arxiv-263538", "<|cite_31|>": "ss-1768039"} |
2201.04039 | <|paper_start|> Title: MobilePhys: Personalized Mobile Camera-Based Contactless Physiological Sensing
Abstract: MobilePhys: Personalized Mobile Camera-Based Contactless Physiological Sensing: Camera-based contactless photoplethysmography refers to a set of popular techniques for contactless physiological measurement. The current state-of-the-art neural models are typically trained in a supervised manner using videos accompanied by gold standard physiological measurements. However, they often generalize poorly to out-of-domain examples (i.e., videos that are unlike those in the training set). Personalizing models can help improve model generalizability, but many personalization techniques still require some gold standard data. To help alleviate this dependency, in this paper, we present a novel mobile sensing system called MobilePhys, the first mobile personalized remote physiological sensing system, which leverages both front and rear cameras on a smartphone to generate high-quality self-supervised labels for training personalized contactless camera-based PPG models. To evaluate the robustness of MobilePhys, we conducted a user study with 39 participants who completed a set of tasks across different mobile devices, lighting conditions/intensities, motion tasks, and skin types. Our results show that MobilePhys significantly outperforms the state-of-the-art on-device supervised training and few-shot adaptation methods. Through extensive user studies, we further examine how MobilePhys performs in complex real-world settings. We envision that calibrated or personalized camera-based contactless PPG models generated from our proposed dual-camera mobile sensing system will open the door for numerous future applications such as smart mirrors, fitness and mobile health applications.
Introduction
One of the visions of ubiquitous computing is the ability for people to interact with computing using any device. Today, some of the most ubiquitously available sensors are RGB cameras. Camera-based contactless physiological sensing refers to a set of techniques that enable contactless measurement of cardio-pulmonary signals and their related vital signs, such as heart rate, respiration rate and blood oxygen saturation. Unobtrusive physiological sensing technology could help advance the vision of ubiquitous computing in numerous contexts, but perhaps most directly in health, well-being and affective computing applications. Cardiac and respiratory processes change the appearance of the body in several ways.
Camera-based contactless photoplethysmography or remote photoplethysmography (rPPG) involves the measurement of very subtle changes in light reflected from the skin to capture the photoplethysmogram. When the light hits the skin, the amount that is absorbed is influenced by the current peripheral blood volume. Subtle motions caused by blood pumping around the body can also be measured using optical flow patterns to recover the ballistocardiogram (BCG) <|cite_start|> (Reference: Detecting pulse from head motions in video: The pulse may now be extracted from films using the colour fluctuations inside the skin as a result of blood flow, way to current studies. If you’ve ever seen a person flush, you’ll recognize a shift inside the shade of their face. Our method, however, uses a chance occurrence. Blood consumption has an impact on extra than simply skin tone. As an end result, the pinnacle shifts as nicely. Amplification of video can decorate motion that is too small to be visible with the bare eye. When our heart charge rises, all of us flow like bubbleheads, but the amplitude is plenty smaller. During every cardiac cycle, the left ventricle contracts and swiftly injects blood into the aorta. Approximately 12 milligrams of blood circulate from both of your carotid arteries on your brain in a 24-hour duration. As it flows thru the vascular system, this surge of blood places stress at the skull. The pressure of blood on the pinnacle equals the force of the pinnacle on movement because of Newton’s 1/3 regulation, ensuing in a cyclical, reactionary head motion. This feature was used to expand a way for identifying pulses in traditional head films. We constitute the head motions in the movie the use of one-dimensional alerts rather than -dimensional visuals. We’ll be able to establish an average pulse fee and pinpoint precise beat areas for further scientific trying out using this information.) <|cite_end|>. The resulting pulse waveforms (PPG and BCG) can be used to derive heart rate and heart rate variability <|cite_start|> (Reference: Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam: We present a simple, low-cost method for measuring multiple physiological parameters using a basic webcam. By applying independent component analysis on the color channels in video recordings, we extracted the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability (HRV, an index for cardiac autonomic activity) were subsequently quantified and compared to corresponding measurements using Food and Drug Administration-approved sensors. High degrees of agreement were achieved between the measurements across all physiological parameters. This technology has significant potential for advancing personal health care and telemedicine.) <|cite_end|>. Based on natural sinus rhythm, the pulse signal can also be used to estimate the respiration rate <|cite_start|> (Reference: Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam: We present a simple, low-cost method for measuring multiple physiological parameters using a basic webcam. By applying independent component analysis on the color channels in video recordings, we extracted the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability (HRV, an index for cardiac autonomic activity) were subsequently quantified and compared to corresponding measurements using Food and Drug Administration-approved sensors. 
High degrees of agreement were achieved between the measurements across all physiological parameters. This technology has significant potential for advancing personal health care and telemedicine.) <|cite_end|>. However, pulmonary signals are often more readily observed from the motion of the torso caused by inhaling and exhaling. A combination of mechanical and optical information would typically provide the richest signal. In this paper, we focus on camera-based contactless PPG measurement.
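To make the underlying measurement principle concrete, the sketch below shows a minimal (and deliberately naive) pipeline for recovering a pulse rate from the spatially averaged green channel of a skin region of interest; the frame rate, filter band, and use of NumPy/SciPy are illustrative assumptions, and this is not the method proposed in this paper.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_from_roi_means(green_means, fs=30.0):
    # green_means: 1-D array of the mean green-channel value of a skin
    # ROI for each video frame; fs: camera frame rate in Hz (assumed).
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    # Band-pass to the plausible heart-rate band (0.7-4 Hz = 42-240 bpm).
    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)
    # Take the dominant spectral peak as the pulse frequency.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak  # beats per minute
\end{verbatim}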
The COVID-19 pandemic acutely highlighted the importance and utility of camera-based contactless physiological sensing technology <|cite_start|> (Reference: The role of telemedicine during the COVID-19 epidemic in China—experience from Shandong province: ) <|cite_end|> <|cite_start|> (Reference: Telehealth for global emergencies: Implications for coronavirus disease 2019 (COVID-19): The current coronavirus (COVID-19) pandemic is again reminding us of the importance of using telehealth to deliver care, especially as means of reducing the risk of cross-contamination caused by close contact. For telehealth to be effective as part of an emergency response it first needs to become a routinely used part of our health system. Hence, it is time to step back and ask why telehealth is not mainstreamed. In this article, we highlight key requirements for this to occur. Strategies to ensure that telehealth is used regularly in acute, post-acute and emergency situations, alongside conventional service delivery methods, include flexible funding arrangements, training and accrediting our health workforce. Telehealth uptake also requires a significant change in management effort and the redesign of existing models of care. Implementing telehealth proactively rather than reactively is more likely to generate greater benefits in the long-term, and help with the everyday (and emergency) challenges in healthcare.) <|cite_end|>. The desire to protect healthcare workers and patients and reduce the need for people to travel illustrates how ubiquitous sensing could be used at scale. However, most people still do not have a way to measure the necessary signals at home. In this paper, we contribute towards this goal by proposing a system to easily personalize contactless PPG measurement models using a smartphone. These customized and personalized models produced from our system enable any device equipped with an RGB camera (e.g., a smart mirror, fitness equipment such as Peloton or Mirror, or even a baby monitor) to provide comfortable, in-situ vital sign monitoring. Compared to traditional pulse oximeters using contact PPG techniques, camera-based contactless physiological sensing also offers a unique advantage: it reduces the risk of infection for vulnerable patients and the discomfort caused by obstructive wires <|cite_start|> (Reference: Non-contact physiological monitoring of preterm infants in the Neonatal Intensive Care Unit: ) <|cite_end|>.
Although camera-based contactless physiological sensing comes with many advantages, it also presents various technical challenges. First, there is still an accuracy gap between contact sensors and camera-based contactless solutions. The US Food and Drug Administration (FDA) requires a new cardiac monitoring device to demonstrate substantial equivalence in accuracy with FDA-approved devices. Unfortunately, no camera-based contactless system has yet met this bar. Second, current camera-based contactless physiological sensing systems are especially sensitive to numerous sources of noise, such as lighting conditions and motion from different activities. Prior research has shown that the accuracy of these systems is significantly reduced when such noise is introduced <|cite_start|> (Reference: DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks: Non-contact video-based physiological measurement has many applications in health care and human-computer interaction. Practical applications require measurements to be accurate even in the presence of large head rotations. We propose the first end-to-end system for video-based measurement of heart and breathing rate using a deep convolutional network. The system features a new motion representation based on a skin reflection model and a new attention mechanism using appearance information to guide motion estimation, both of which enable robust measurement under heterogeneous lighting and major motions. Our approach significantly outperforms all current state-of-the-art methods on both RGB and infrared video datasets. Furthermore, it allows spatial-temporal distributions of physiological signals to be visualized via the attention mechanism.) <|cite_end|> <|cite_start|> (Reference: Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement: Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic and it is widely expected that this will have a lasting impact on healthcare practices. These tools can help reduce the risk of exposing patients and medical staff to infection, make healthcare services more accessible, and allow providers to see more patients. However, objective measurement of vital signs is challenging without direct contact with a patient. We present a video-based and on-device optical cardiopulmonary vital sign measurement approach. It leverages a novel multi-task temporal shift convolutional attention network (MTTS-CAN) and enables real-time cardiovascular and respiratory measurements on mobile platforms. We evaluate our system on an Advanced RISC Machine (ARM) CPU and achieve state-of-the-art accuracy while running at over 150 frames per second which enables real-time applications. Systematic experimentation on large benchmark datasets reveals that our approach leads to substantial (20%-50%) reductions in error and generalizes well across datasets.) <|cite_end|>. Third, there are large individual differences in appearance (e.g., gender, skin type, makeup, hair) and physiology (e.g., blood volume dynamics). Creating a system that generalizes across these conditions presents an interesting challenge.
One way to address the aforementioned limitations is to train a supervised model with a large and diverse set of training data that contains samples exhibiting the types of variation expected at test time (e.g., race, lighting, motion). However, collecting such a large, high-quality physiological dataset is challenging. Not only does the data collection process require significant resources for recruitment and management, it also risks disclosing participants' sensitive identity and physiological information. Hence, traditional supervised training on a large-scale dataset is a laborious and difficult route to an unbiased and generalizable camera-based contactless physiological sensing system.
In traditional clinical settings, physicians often use high-end medical devices to help calibrate consumer-level medical sensors for each patient. This calibration procedure helps combat individual differences in sensor performance and strengthens the validity of the output. Therefore, training a personalized model for each individual in different environments is ideal. However, obtaining high-quality synchronized video and ground-truth physiological signals for training a personalized model is difficult. This is especially complicated if patients want to calibrate with their smartphones' cameras, because external medical sensors are rarely compatible with smartphones. A mobile system that performs self-calibration for camera-based contactless physiological sensing is therefore attractive.
Meta learning is an emerging technique in machine learning that aims to learn how to learn a task faster <|cite_start|> (Reference: Meta-Learning in Neural Networks: A Survey: The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where tasks are solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple learning episodes. This paradigm provides an opportunity to tackle many conventional challenges of deep learning, including data and computation bottlenecks, as well as generalization. This survey describes the contemporary meta-learning landscape. We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning and hyperparameter optimization. We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods today. We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning. Finally, we discuss outstanding challenges and promising areas for future research.) <|cite_end|>. The goal of meta learning is to learn a quick learner for a new task (e.g., a new person). However, most meta learning setups assume that ground-truth labels are available in the dataset during training, which is not the case for many applications. Thanks to recent advances in the sensors embedded in smartphones, smartwatches, and IoT devices, mobile sensing systems now have the ability to provide high-fidelity sensor data, and even ground-truth labels, for some applications. We believe the interplay between meta learning and mobile sensing systems has been underused in many domains.
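To make the learning-to-learn idea concrete, the sketch below shows one common formulation, first-order model-agnostic meta-learning, in which a shared initialization is updated so that a few gradient steps on a small per-person support set yield a well-adapted model; this is an illustrative sketch rather than the exact algorithm used in this paper, and the model, task iterator, learning rates, and loss are assumed placeholders.
\begin{verbatim}
import copy
import torch

def meta_train_fomaml(model, tasks, inner_lr=1e-3, outer_lr=1e-4, inner_steps=1):
    # tasks yields (support_x, support_y, query_x, query_y) tuples, one per
    # person/condition; the labels could come from a contact sensor or, as in
    # a dual-camera setup, from a pseudo-label signal.
    outer_opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    loss_fn = torch.nn.MSELoss()
    for support_x, support_y, query_x, query_y in tasks:
        adapted = copy.deepcopy(model)               # clone the meta-initialisation
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # few-shot inner-loop adaptation
            inner_opt.zero_grad()
            loss_fn(adapted(support_x), support_y).backward()
            inner_opt.step()
        query_loss = loss_fn(adapted(query_x), query_y)
        grads = torch.autograd.grad(query_loss, adapted.parameters())
        outer_opt.zero_grad()
        for p, g in zip(model.parameters(), grads):  # first-order meta-update
            p.grad = g
        outer_opt.step()
    return model
\end{verbatim}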
In this work, we use contactless PPG measurement as an example to demonstrate how novel sensing systems can provide reliable pseudo physiological labels to meta learning algorithms for training few-shot adaptation models. We propose a self-calibrating meta-learning system called \projectname, which leverages both front and rear cameras available on smartphones and the ability to measure physiological signals from multiple parts of the body. Specifically, we design a system that simultaneously measures the PPG signal from the finger-tip and the face of the subject during a calibration period to personalize a contactless measurement model that relies only on analyzing the face. The pseudo PPG labels generated from \projectname using the rear camera and the index finger provide waveform quality similar to ground-truth PPG signals from medical-grade pulse oximeters. We demonstrate that this is also reliable in challenging real-world conditions (e.g., motion, lighting and darker skin types). Models customized or personalized using \projectname could then be deployed on the phone or shared with other smart devices (such as a laptop or smart mirror <|cite_start|> (Reference: A medical mirror for non-contact health monitoring: Digital medical devices promise to transform the future of medicine because of their ability to produce exquisitely detailed individual physiological data. As ordinary people start to have access and control over their own physiological data, they can play a more active role in the management of their health. This revolution must take place in our everyday lives, not just in the doctor's office or research lab. However, current techniques for physiological monitoring typically require users to strap on bulky sensors, chest straps or sticky electrodes. This discourages regular use because the sensors can be uncomfortable or encumbering. In this work, we propose a new mirror interface for real-time, contact-free measurements of heart rate without the need for external sensors. Users can have the experience of remote health monitoring by simply looking into the Medical Mirror.) <|cite_end|>) to enable convenient contactless measurement.
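Conceptually, the calibration stage described above reduces to a data-pairing step: the rear-camera fingertip video provides a pseudo PPG waveform that is time-aligned with the synchronized front-camera face frames to form (clip, label) training pairs. The sketch below is only an illustrative outline of that idea; the channel averaging, normalisation, and windowing choices are assumptions rather than the actual \projectname implementation.
\begin{verbatim}
import numpy as np

def fingertip_pseudo_ppg(finger_frames):
    # finger_frames: (T, H, W, 3) rear-camera frames with the fingertip
    # covering the lens; the spatially averaged intensity varies with
    # blood volume and serves as a contact-like pseudo PPG label.
    trace = finger_frames.astype(float).mean(axis=(1, 2, 3))
    return (trace - trace.mean()) / (trace.std() + 1e-6)

def build_personalization_pairs(face_frames, finger_frames, window=180):
    # face_frames and finger_frames are assumed frame-synchronised; split
    # them into fixed-length clips paired with the pseudo-label trace.
    labels = fingertip_pseudo_ppg(finger_frames)
    pairs = []
    for start in range(0, len(labels) - window + 1, window):
        clip = face_frames[start:start + window]
        target = labels[start:start + window]
        pairs.append((clip, target))
    return pairs
\end{verbatim}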
In summary, we propose a novel smartphone-based personalized physiological sensing system that leverages the rear and front RGB cameras to perform self-adaptation. More specifically, our contributions include:
\begin{itemize}
\item Proposing a novel mobile dual camera-based contactless physiological sensing system that generates high-quality pseudo PPG labels for few-shot personalization and adaptation.
\item Demonstrating that we can leverage contact finger-tip PPG signal derived from the smartphone's rear camera to train a personalized camera-based contactless physiological neural network.
\item Exploring and evaluating the performance of \projectname under different conditions such as different mobile devices, lighting conditions, motions, activities, skin types, and camera settings through comprehensive user studies.
\item Studying and investigating mobile camera settings, which we believe will be valuable for guiding future research in mobile physiological sensing.
\item Finally, we collected and will release the first-ever multi-modality mobile camera-based contactless physiological dataset with different mobile devices, lighting conditions, motions, activities, and skin types. The documented dataset has gold standard oximeter recordings and synchronized finger PPG signals from the rear camera, and face videos along with other sensor signals (e.g., IMU, ambient light, etc.) from the smartphone. The dataset comprises close to six hours of video and sensor data from 39 participants.
\end{itemize}
Related Work
\subsection{Mobile and Camera-based Contactless Physiological Sensing}
Thanks to their ubiquity and portability, smartphones are becoming a popular tool for monitoring activity and health parameters. Smartphones cameras can be used to effectively measure the PPG <|cite_start|> (Reference: {The Current State of Mobile Phone Apps for Monitoring Heart Rate, Heart Rate Variability, and Atrial Fibrillation: Narrative Review: Background Mobile phone apps capable of monitoring arrhythmias and heart rate (HR) are increasingly used for screening, diagnosis, and monitoring of HR and rhythm disorders such as atrial fibrillation (AF). These apps involve either the use of (1) photoplethysmographic recording or (2) a handheld external electrocardiographic recording device attached to the mobile phone or wristband. Objective This review seeks to explore the current state of mobile phone apps in cardiac rhythmology while highlighting shortcomings for further research. Methods We conducted a narrative review of the use of mobile phone devices by searching PubMed and EMBASE from their inception to October 2018. Potentially relevant papers were then compared against a checklist for relevance and reviewed independently for inclusion, with focus on 4 allocated topics of (1) mobile phone monitoring, (2) AF, (3) HR, and (4) HR variability (HRV). Results The findings of this narrative review suggest that there is a role for mobile phone apps in the diagnosis, monitoring, and screening for arrhythmias and HR. Photoplethysmography and handheld electrocardiograph recorders are the 2 main techniques adopted in monitoring HR, HRV, and AF. Conclusions A number of studies have demonstrated high accuracy of a number of different mobile devices for the detection of AF. However, further studies are warranted to validate their use for large scale AF screening.) <|cite_end|> using imaging or camera-based contactless PPG <|cite_start|> (Reference: {A Survey of Remote Optical Photoplethysmographic Imaging Methods: In recent years researchers have presented a number of new methods for recovering physiological parameters using just low-cost digital cameras and image processing. The ubiquity of digital cameras presents the possibility for many new, low-cost applications of vital sign monitoring. In this paper we present a review of the work on remote photoplethysmographic (PPG) imaging using digital cameras. This review specifically focuses on the state-of-the-art in PPG imaging where: 1) measures beyond pulse rate are evaluated, 2) non-ideal conditions (e.g., the presence of motion artifacts) are explored, and 3) use cases in relevant environments are demonstrated. We discuss gaps within the literature and future challenges for the research community. To aid in the continuing advancement of PPG imaging research, we are making available a website with the references collected for this review as well as information on available code and datasets of interest. It is our hope that this website will become a valuable resource for the PPG imaging community. The site can be found at: http://web.mit.edu/~djmcduff/www/ remote-physiology.html.) <|cite_end|>. There are two primary forms this can take: 1) contact measurement in which the subject places the camera against their skin (usually by placing their finger-tip over the camera lens) and the flash is optionally used to illuminate their skin; 2) remote measurement in which the subject faces the camera and a region of skin (usually on the face) is segmented and analyzed using ambient illumination. 
Contact measurement is typically the most robust approach, and PPG and oxygen saturation (SpO$_{2}$) measurements <|cite_start|> (Reference: Physiological parameter monitoring from optical recordings with a mobile phone: We show that a mobile phone can serve as an accurate monitor for several physiological variables, based on its ability to record and analyze the varying color signals of a fingertip placed in contact with its optical sensor. We confirm the accuracy of measurements of breathing rate, cardiac R-R intervals, and blood oxygen saturation, by comparisons to standard methods for making such measurements (respiration belts, ECGs, and pulse-oximeters, respectively). Measurement of respiratory rate uses a previously reported algorithm developed for use with a pulse-oximeter, based on amplitude and frequency modulation sequences within the light signal. We note that this technology can also be used with recently developed algorithms for detection of atrial fibrillation or blood loss.) <|cite_end|> are well established. Research is continuing into how these techniques can be used to measure blood pressure <|cite_start|> (Reference: Blood pressure measurements with the OptiBP smartphone app validated against reference auscultatory measurements: ) <|cite_end|>.
However, contact measurement is often not convenient. Camera-based contactless measurement can be used for opportunistic measurement, for example when people unlock their smartphone using the front camera. Moreover, due to the simplicity and portability of ubiquitous cameras, camera-based contactless measurement also has great potential to provide scalable and accessible health sensing. While non-contact systems have many attractive properties, including comfort, scalability, and convenience, there are still numerous challenges involved in accurately measuring physiological signals such as the PPG: the distance between the camera and the region of interest introduces greater lighting variation, a higher chance of objects occluding the skin, and motion of the ROI relative to the camera.
Imager or camera-based contactless physiological sensing can be performed using handcrafted signal processing methods or supervised learning. Fundamentally, these approaches are based on optical models that offer a way to model the interaction between ambient illumination and skin. The Lambert-Beer law (LBL) and Shafer’s dichromatic reflection Model (DRM) have been employed as inspiration for these algorithms. Thanks to their simplicity, signal processing-based methods were the first to be used in pulse measurement <|cite_start|> (Reference: Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam: We present a simple, low-cost method for measuring multiple physiological parameters using a basic webcam. By applying independent component analysis on the color channels in video recordings, we extracted the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability (HRV, an index for cardiac autonomic activity) were subsequently quantified and compared to corresponding measurements using Food and Drug Administration-approved sensors. High degrees of agreement were achieved between the measurements across all physiological parameters. This technology has significant potential for advancing personal health care and telemedicine.) <|cite_end|> <|cite_start|> (Reference: Robust pulse rate from chrominance-based rPPG: Remote photoplethysmography (rPPG) enables contactless monitoring of the blood volume pulse using a regular camera. Recent research focused on improved motion robustness, but the proposed blind source separation techniques (BSS) in RGB color space show limited success. We present an analysis of the motion problem, from which far superior chrominance-based methods emerge. For a population of 117 stationary subjects, we show our methods to perform in 92% good agreement (±1.96σ) with contact PPG, with RMSE and standard deviation both a factor of 2 better than BSS-based methods. In a fitness setting using a simple spectral peak detector, the obtained pulse-rate for modest motion (bike) improves from 79% to 98% correct, and for vigorous motion (stepping) from less than 11% to more than 48% correct. We expect the greatly improved robustness to considerably widen the application scope of the technology.) <|cite_end|> <|cite_start|> (Reference: Algorithmic principles of remote PPG: This paper introduces a mathematical model that incorporates the pertinent optical and physiological properties of skin reflections with the objective to increase our understanding of the algorithmic principles behind remote photoplethysmography (rPPG). The model is used to explain the different choices that were made in existing rPPG methods for pulse extraction. The understanding that comes from the model can be used to design robust or application-specific rPPG solutions. We illustrate this by designing an alternative rPPG method, where a projection plane orthogonal to the skin tone is used for pulse extraction. A large benchmark on the various discussed rPPG methods shows that their relative merits can indeed be understood from the proposed model.) 
<|cite_end|> and respiration measurement <|cite_start|> (Reference: {Non-contact Video-based Vital Sign Monitoring Using Ambient Light and Auto-regressive Models: Remote sensing of the reflectance photoplethysmogram using a video camera typically positioned 1 m away from the patient’s face is a promising method for monitoring the vital signs of patients without attaching any electrodes or sensors to them. Most of the papers in the literature on non-contact vital sign monitoring report results on human volunteers in controlled environments. We have been able to obtain estimates of heart rate and respiratory rate and preliminary results on changes in oxygen saturation from double-monitored patients undergoing haemodialysis in the Oxford Kidney Unit. To achieve this, we have devised a novel method of cancelling out aliased frequency components caused by artificial light flicker, using auto-regressive (AR) modelling and pole cancellation. Secondly, we have been able to construct accurate maps of the spatial distribution of heart rate and respiratory rate information from the coefficients of the AR model. In stable sections with minimal patient motion, the mean absolute error between the camera-derived estimate of heart rate and the reference value from a pulse oximeter is similar to the mean absolute error between two pulse oximeter measurements at different sites (finger and earlobe). The activities of daily living affect the respiratory rate, but the camera-derived estimates of this parameter are at least as accurate as those derived from a thoracic expansion sensor (chest belt). During a period of obstructive sleep apnoea, we tracked changes in oxygen saturation using the ratio of normalized reflectance changes in two colour channels (red and blue), but this required calibration against the reference data from a pulse oximeter.) <|cite_end|>. These pipelines primarily use color space conversions and signal decomposition. Early work <|cite_start|> (Reference: Remote heart rate measurement from face videos under realistic situations: Heart rate is an important indicator of people's physiological state. Recently, several papers reported methods to measure heart rate remotely from face videos. Those methods work well on stationary subjects under well controlled conditions, but their performance significantly degrades if the videos are recorded under more challenging conditions, specifically when subjects' motions and illumination variations are involved. We propose a framework which utilizes face tracking and Normalized Least Mean Square adaptive filtering methods to counter their influences. We test our framework on a large difficult and public database MAHNOB-HCI and demonstrate that our method substantially outperforms all previous methods. We also use our method for long term heart rate monitoring in a game evaluation scenario and achieve promising results.) <|cite_end|> <|cite_start|> (Reference: Remote plethysmographic imaging using ambient light.: Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. 
The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.) <|cite_end|> only used green channel data as the PPG signal strength is typically strongest in the corresponding frequency bands. Subsequent research established that combining multiple color channels leads to a solution that is more robust to variation in environment (e.g., ambient lighting) and motion <|cite_start|> (Reference: Improved motion robustness of remote-PPG by using the blood volume pulse signature: Remote photoplethysmography (rPPG) enables contact-free monitoring of the blood volume pulse using a color camera. Essentially, it detects the minute optical absorption changes caused by blood volume variations in the skin. In this paper, we show that the different absorption spectra of arterial blood and bloodless skin cause the variations to occur along a very specific vector in a normalized RGB-space. The exact vector can be determined for a given light spectrum and for given transfer characteristics of the optical filters in the camera. We show that this ‘signature’ can be used to design an rPPG algorithm with a much better motion robustness than the recent methods based on blind source separation, and even better than the chrominance-based methods we published earlier. Using six videos recorded in a gym, with four subjects exercising on a range of fitness devices, we confirm the superior motion robustness of our newly proposed rPPG methods. A simple peak detector in the frequency domain returns the correct pulse-rate for 68% of total measurements compared to 60% for the best previous method, while the SNR of the pulse-signal improves from − 5 dB to − 4 dB. For a large population of 117 stationary subjects we prove that the accuracy is comparable to the best previous method, although the SNR of the pulse-signal drops from + 8.4 dB to + 7.6 dB. We expect the improved motion robustness to significantly widen the application scope of the rPPG-technique.) <|cite_end|> <|cite_start|> (Reference: Robust pulse rate from chrominance-based rPPG: Remote photoplethysmography (rPPG) enables contactless monitoring of the blood volume pulse using a regular camera. Recent research focused on improved motion robustness, but the proposed blind source separation techniques (BSS) in RGB color space show limited success. We present an analysis of the motion problem, from which far superior chrominance-based methods emerge. For a population of 117 stationary subjects, we show our methods to perform in 92% good agreement (±1.96σ) with contact PPG, with RMSE and standard deviation both a factor of 2 better than BSS-based methods. In a fitness setting using a simple spectral peak detector, the obtained pulse-rate for modest motion (bike) improves from 79% to 98% correct, and for vigorous motion (stepping) from less than 11% to more than 48% correct. We expect the greatly improved robustness to considerably widen the application scope of the technology.) <|cite_end|>. Most of the previous work employed Principal Component Analysis (PCA) <|cite_start|> (Reference: Exploiting spatial redundancy of image sensor for motion robust rPPG: Remote photoplethysmography (rPPG) techniques can measure cardiac activity by detecting pulse-induced color variations on human skin using an RGB camera. 
State-of-the-art rPPG methods are sensitive to subject body motions (e.g., motion-induced color distortions). This study proposes a novel framework to improve the motion robustness of rPPG. The basic idea of this paper originates from the observation that a camera can simultaneously sample multiple skin regions in parallel, and each of them can be treated as an independent sensor for pulse measurement. The spatial redundancy of an image sensor can thus be exploited to distinguish the pulse signal from motion-induced noise. To this end, the pixel-based rPPG sensors are constructed to estimate a robust pulse signal using motion-compensated pixel-to-pixel pulse extraction, spatial pruning, and temporal filtering. The evaluation of this strategy is not based on a full clinical trial, but on 36 challenging benchmark videos consisting of subjects that differ in gender, skin types, and performed motion categories. Experimental results show that the proposed method improves the SNR of the state-of-the-art rPPG technique from 3.34 to 6.76 dB, and the agreement (±1.96σ) with instantaneous reference pulse rate from 55% to 80% correct. ANOVA with post hoc comparison shows that the improvement on motion robustness is significant. The rPPG method developed in this study has a performance that is very close to that of the contact-based sensor under realistic situations, while its computational efficiency allows real-time processing on an off-the-shelf computer.) <|cite_end|> or Independent Component Analysis (ICA) <|cite_start|> (Reference: Non-contact, automated cardiac pulse measurements using video imaging and blind source separation.: : Remote measurements of the cardiac pulse can provide comfortable physiological assessment without electrodes. However, attempts so far are non-automated, susceptible to motion artifacts and typically expensive. In this paper, we introduce a new methodology that overcomes these problems. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the color channels into independent components. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to an FDA-approved finger blood volume pulse (BVP) sensor and achieved high accuracy and correlation even in the presence of movement artifacts. Furthermore, we applied this technique to perform heart rate measurements from three participants simultaneously. This is the first demonstration of a low-cost accurate video-based method for contact-free heart rate measurements that is automated, motion-tolerant and capable of performing concomitant measurements on more than one person at a time.) <|cite_end|> <|cite_start|> (Reference: Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam: We present a simple, low-cost method for measuring multiple physiological parameters using a basic webcam. By applying independent component analysis on the color channels in video recordings, we extracted the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability (HRV, an index for cardiac autonomic activity) were subsequently quantified and compared to corresponding measurements using Food and Drug Administration-approved sensors. High degrees of agreement were achieved between the measurements across all physiological parameters. 
This technology has significant potential for advancing personal health care and telemedicine.) <|cite_end|> for signal decomposition. However, these approaches are susceptible to noise from head movements and variations in lighting conditions. More advanced approaches addressed this issue by exploiting knowledge of the optical characteristics of skin <|cite_start|> (Reference: Robust pulse rate from chrominance-based rPPG: Remote photoplethysmography (rPPG) enables contactless monitoring of the blood volume pulse using a regular camera. Recent research focused on improved motion robustness, but the proposed blind source separation techniques (BSS) in RGB color space show limited success. We present an analysis of the motion problem, from which far superior chrominance-based methods emerge. For a population of 117 stationary subjects, we show our methods to perform in 92% good agreement (±1.96σ) with contact PPG, with RMSE and standard deviation both a factor of 2 better than BSS-based methods. In a fitness setting using a simple spectral peak detector, the obtained pulse-rate for modest motion (bike) improves from 79% to 98% correct, and for vigorous motion (stepping) from less than 11% to more than 48% correct. We expect the greatly improved robustness to considerably widen the application scope of the technology.) <|cite_end|> <|cite_start|> (Reference: Algorithmic principles of remote PPG: This paper introduces a mathematical model that incorporates the pertinent optical and physiological properties of skin reflections with the objective to increase our understanding of the algorithmic principles behind remote photoplethysmography (rPPG). The model is used to explain the different choices that were made in existing rPPG methods for pulse extraction. The understanding that comes from the model can be used to design robust or application-specific rPPG solutions. We illustrate this by designing an alternative rPPG method, where a projection plane orthogonal to the skin tone is used for pulse extraction. A large benchmark on the various discussed rPPG methods shows that their relative merits can indeed be understood from the proposed model.) <|cite_end|>, giving somewhat more robust results. However, it is still difficult for these handcrafted signal processing pipelines to successfully separate the physiological signals from other pixel variations, many of which may be much larger than the subtle changes resulting from physiological processes.
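To illustrate what such a skin-model-based projection looks like in practice, the following is a compact sketch in the spirit of the plane-orthogonal-to-skin (POS) formulation referenced above, applied to a single window of spatially averaged RGB values; the window handling, normalisation, and frame rate are simplified assumptions rather than a faithful reimplementation.
\begin{verbatim}
import numpy as np

def pos_pulse_window(rgb_means):
    # rgb_means: (T, 3) array of spatially averaged R, G, B values of a
    # skin ROI over one analysis window (e.g., ~1.6 s of frames).
    c = np.asarray(rgb_means, dtype=float)
    cn = c / c.mean(axis=0)                      # temporal normalisation
    # Project onto a plane approximately orthogonal to the skin tone.
    p = np.array([[0.0, 1.0, -1.0],
                  [-2.0, 1.0, 1.0]])
    s = cn @ p.T                                 # (T, 2) projected signals
    alpha = s[:, 0].std() / (s[:, 1].std() + 1e-9)
    h = s[:, 0] + alpha * s[:, 1]                # alpha-tuned combination
    return h - h.mean()                          # zero-mean pulse estimate
\end{verbatim}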
Supervised learning methods often achieve superior performance compared to unsupervised signal processing approaches. These methods are able to capture highly non-linear relationships between the physiological signal and facial videos. DeepPhys <|cite_start|> (Reference: DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks: Non-contact video-based physiological measurement has many applications in health care and human-computer interaction. Practical applications require measurements to be accurate even in the presence of large head rotations. We propose the first end-to-end system for video-based measurement of heart and breathing rate using a deep convolutional network. The system features a new motion representation based on a skin reflection model and a new attention mechanism using appearance information to guide motion estimation, both of which enable robust measurement under heterogeneous lighting and major motions. Our approach significantly outperforms all current state-of-the-art methods on both RGB and infrared video datasets. Furthermore, it allows spatial-temporal distributions of physiological signals to be visualized via the attention mechanism.) <|cite_end|> was the first end-to-end neural approach for camera-based contactless physiological measurement. The model learns a soft-attention mask learning appearance information related to the physiological signals. The attention mechanism helps reduce noise by adapting the region of interest (ROI). Subsequent research has succeeded in leveraging neural networks for BVP or respiration measurement <|cite_start|> (Reference: Visual heart rate estimation with convolutional neural network: We propose a novel two-step convolutional neural network to estimate a heart rate from a sequence of facial images. The network is trained end-to-end by alternating optimization and validated on three publicly available datasets yielding state-of-the-art results against three baseline methods. The network performs better by a 40% margin to the state-of-the-art method on a newly collected dataset. A challenging dataset of 204 fitness-themed videos is introduced. The dataset is designed to test the robustness of heart rate estimation methods to illumination changes and subject’s motion. 17 subjects perform 4 activities (talking, rowing, exercising on a stationary bike and an elliptical trainer) in 3 lighting setups. Each activity is captured by two RGB web-cameras, one is placed on a tripod, the other is attached to the fitness machine which vibrates significantly. Subject’s age ranges from 20 to 53 years, the mean heart rate is ≈ 110, the standard deviation ≈ 25.) <|cite_end|> <|cite_start|> (Reference: Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks: Recent studies demonstrated that the average heart rate (HR) can be measured from facial videos based on non-contact remote photoplethysmography (rPPG). However for many medical applications (e.g., atrial fibrillation (AF) detection) knowing only the average HR is not sufficient, and measuring precise rPPG signals from face for heart rate variability (HRV) analysis is needed. Here we propose an rPPG measurement method, which is the first work to use deep spatio-temporal networks for reconstructing precise rPPG signals from raw facial videos. With the constraint of trend-consistency with ground truth pulse curves, our method is able to recover rPPG signals with accurate pulse peaks. 
Comprehensive experiments are conducted on two benchmark datasets, and results demonstrate that our method can achieve superior performance on both HR and HRV levels comparing to the state-of-the-art methods. We also achieve promising results of using reconstructed rPPG signals for AF detection and emotion recognition.) <|cite_end|> <|cite_start|> (Reference: Heart rate estimation from facial videos using a spatiotemporal representation with convolutional neural networks: Remote photoplethysmography (rPPG) is a kind of noncontact technique to measure heart rate (HR) from facial videos. As the demand for long-term health monitoring grows, rPPG attracts much attention from researchers. However, the performance of conventional rPPG methods is easily degenerated due to noise interference. Recently, some deep learning-based rPPG methods have been introduced and they revealed good performance against noise. In this article, we propose a new rPPG method with convolutional neural networks (CNNs) to build a mapping between a spatiotemporal HR feature image to its corresponding HR value. The feature map is constructed in a time-delayed way with noise-contaminated pulse signals extracted from existing rPPG methods. The CNN model is trained using transfer learning where images built from synthetic rPPG signals are taken to train the model first in order to generate initials for the practical one. The synthetic rPPG signals are interpolated from blood volume pulses or electrocardiograms through a modified Akima cubic Hermite interpolation. The proposed method is tested in both within-database and cross-database configurations on public databases. The results demonstrate that our method achieves overall the best performance compared to some other typical rPPG methods. The mean absolute error reaches 5.98 beats per minute and the mean error rate percentage is 7.97% in the cross-database testing on MAHNOB-HCI data set. Besides, some key factors that affect the performance of our method are also discussed which indicates potential ways for further improvements.) <|cite_end|> <|cite_start|> (Reference: RhythmNet: End-to-end Heart Rate Estimation from Face via Spatial-temporal Representation: Heart rate (HR) is an important physiological signal that reflects the physical and emotional status of a person. Traditional HR measurements usually rely on contact monitors, which may cause inconvenience and discomfort. Recently, some methods have been proposed for remote HR estimation from face videos; however, most of them focus on well-controlled scenarios, their generalization ability into less-constrained scenarios (e.g., with head movement, and bad illumination) are not known. At the same time, lacking large-scale HR databases has limited the use of deep models for remote HR estimation. In this paper, we propose an end-to-end RhythmNet for remote HR estimation from the face. In RyhthmNet, we use a spatial-temporal representation encoding the HR signals from multiple ROI volumes as its input. Then the spatial-temporal representations are fed into a convolutional network for HR estimation. We also take into account the relationship of adjacent HR measurements from a video sequence via Gated Recurrent Unit (GRU) and achieves efficient HR measurement. In addition, we build a large-scale multi-modal HR database (named as VIPL-HR, available at 'http://vipl.ict.ac.cn/view_database.php?id=15'), which contains 2,378 visible light videos (VIS) and 752 near-infrared (NIR) videos of 107 subjects. 
Our VIPL-HR database contains various variations such as head movements, illumination variations, and acquisition device changes, replicating a less-constrained scenario for HR estimation. The proposed approach outperforms the state-of-the-art methods on both the public-domain and our VIPL-HR databases.) <|cite_end|>. For instance, RhythmNet <|cite_start|> (Reference: RhythmNet: End-to-end Heart Rate Estimation from Face via Spatial-temporal Representation: Heart rate (HR) is an important physiological signal that reflects the physical and emotional status of a person. Traditional HR measurements usually rely on contact monitors, which may cause inconvenience and discomfort. Recently, some methods have been proposed for remote HR estimation from face videos; however, most of them focus on well-controlled scenarios, their generalization ability into less-constrained scenarios (e.g., with head movement, and bad illumination) are not known. At the same time, lacking large-scale HR databases has limited the use of deep models for remote HR estimation. In this paper, we propose an end-to-end RhythmNet for remote HR estimation from the face. In RyhthmNet, we use a spatial-temporal representation encoding the HR signals from multiple ROI volumes as its input. Then the spatial-temporal representations are fed into a convolutional network for HR estimation. We also take into account the relationship of adjacent HR measurements from a video sequence via Gated Recurrent Unit (GRU) and achieves efficient HR measurement. In addition, we build a large-scale multi-modal HR database (named as VIPL-HR, available at 'http://vipl.ict.ac.cn/view_database.php?id=15'), which contains 2,378 visible light videos (VIS) and 752 near-infrared (NIR) videos of 107 subjects. Our VIPL-HR database contains various variations such as head movements, illumination variations, and acquisition device changes, replicating a less-constrained scenario for HR estimation. The proposed approach outperforms the state-of-the-art methods on both the public-domain and our VIPL-HR databases.) <|cite_end|> computes spatial-temporal maps of the ROI facial areas to represent the HR signal passed to the succeeding neural network. Song et al. <|cite_start|> (Reference: Heart rate estimation from facial videos using a spatiotemporal representation with convolutional neural networks: Remote photoplethysmography (rPPG) is a kind of noncontact technique to measure heart rate (HR) from facial videos. As the demand for long-term health monitoring grows, rPPG attracts much attention from researchers. However, the performance of conventional rPPG methods is easily degenerated due to noise interference. Recently, some deep learning-based rPPG methods have been introduced and they revealed good performance against noise. In this article, we propose a new rPPG method with convolutional neural networks (CNNs) to build a mapping between a spatiotemporal HR feature image to its corresponding HR value. The feature map is constructed in a time-delayed way with noise-contaminated pulse signals extracted from existing rPPG methods. The CNN model is trained using transfer learning where images built from synthetic rPPG signals are taken to train the model first in order to generate initials for the practical one. The synthetic rPPG signals are interpolated from blood volume pulses or electrocardiograms through a modified Akima cubic Hermite interpolation. 
The proposed method is tested in both within-database and cross-database configurations on public databases. The results demonstrate that our method achieves overall the best performance compared to some other typical rPPG methods. The mean absolute error reaches 5.98 beats per minute and the mean error rate percentage is 7.97% in the cross-database testing on MAHNOB-HCI data set. Besides, some key factors that affect the performance of our method are also discussed which indicates potential ways for further improvements.) <|cite_end|> proposed transfer-learning a convolutional neural network (CNN). The model is first trained using spatio-temporal images generated using synthetic contactless PPG signals, and then real videos are used to refine the model. However, one drawback of these methods is the computational overhead of resulting networks. Given the ubiquity of RGB cameras on smartphones, ideally, a solution would run on these types of devices. Multi-task temporal shift attention network was proposed as one solution to enable on-device camera-based contactless cardiopulmonary monitoring on a smartphone <|cite_start|> (Reference: Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement: Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic and it is widely expected that this will have a lasting impact on healthcare practices. These tools can help reduce the risk of exposing patients and medical staff to infection, make healthcare services more accessible, and allow providers to see more patients. However, objective measurement of vital signs is challenging without direct contact with a patient. We present a video-based and on-device optical cardiopulmonary vital sign measurement approach. It leverages a novel multi-task temporal shift convolutional attention network (MTTS-CAN) and enables real-time cardiovascular and respiratory measurements on mobile platforms. We evaluate our system on an Advanced RISC Machine (ARM) CPU and achieve state-of-the-art accuracy while running at over 150 frames per second which enables real-time applications. Systematic experimentation on large benchmark datasets reveals that our approach leads to substantial (20%-50%) reductions in error and generalizes well across datasets.) <|cite_end|> and reached an inference rate of over 150 frames per second.
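For orientation, the sketch below outlines a much-simplified two-branch convolutional model in the spirit of the appearance-guided attention design discussed above, in which an appearance branch gates a motion branch computed from normalised frame differences; the layer widths, the assumed 36x36 input resolution, and the pooling schedule are illustrative assumptions and do not reproduce any published configuration. Such a model would typically be trained to regress a per-frame pulse value, and, as noted above, lighter temporal-shift variants make on-device inference feasible.
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionMask(nn.Module):
    # 1x1 convolution + sigmoid, rescaled so the mask has mean ~0.5 over space.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        mask = torch.sigmoid(self.conv(x))
        b, _, h, w = mask.shape
        norm = mask.reshape(b, -1).sum(dim=1).reshape(b, 1, 1, 1)
        return mask * (h * w) / (2 * norm + 1e-6)

class TwoBranchRPPG(nn.Module):
    # Simplified two-branch model: the appearance branch (raw frame) produces
    # soft-attention masks that gate the motion branch (frame difference).
    def __init__(self):
        super().__init__()
        self.motion1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.Tanh(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.Tanh())
        self.appear1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.Tanh(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.Tanh())
        self.mask1 = AttentionMask(32)
        self.motion2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.Tanh(),
                                     nn.Conv2d(64, 64, 3, padding=1), nn.Tanh())
        self.appear2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.Tanh(),
                                     nn.Conv2d(64, 64, 3, padding=1), nn.Tanh())
        self.mask2 = AttentionMask(64)
        self.pool = nn.AvgPool2d(2)
        # Assumes 36x36 input frames: 36 -> 18 -> 9 after two pooling stages.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 9 * 9, 128),
                                  nn.Tanh(), nn.Linear(128, 1))

    def forward(self, motion, appearance):
        m = self.motion1(motion)
        a = self.appear1(appearance)
        m = m * self.mask1(a)                    # first attention gating
        m, a = self.pool(m), self.pool(a)
        m = self.motion2(m)
        a = self.appear2(a)
        m = m * self.mask2(a)                    # second attention gating
        m = self.pool(m)
        return self.head(m)                      # one pulse value per frame pair
\end{verbatim}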
However, neural networks still face various challenges. Individual differences in appearance (e.g., skin type, with glasses or not, pulse dynamics), environmental variations (e.g., light intensity, spectral composition) and motion (e.g., talking, head movement) make it hard to train an algorithm that generalizes well to unseen data. Camera settings and hardware sensitivity also vary significantly among devices, and prior research has shown that video compression also affects physiological measurement result <|cite_start|> (Reference: The Impact of Video Compression on Remote Cardiac Pulse Measurement Using Imaging Photoplethysmography: Remote physiological measurement has great potential in healthcare and affective computing applications. Imaging photoplethysmography (iPPG) leverages digital cameras to recover the blood volume pulse from the human body. While the impact of video parameters such as resolution and frame rate on iPPG accuracy have been studied, there has not been a systematic analysis of video compression algorithms. We compared a set of commonly used video compression algorithms (x264 and x265) and varied the Constant Rate Factor (CRF) to measure pulse rate recovery for a range of bit rates (file sizes) and video qualities. We found that compression, even at a low CRF, degrades the blood volume pulse (BVP) signal-tonoise ratio considerably. However, the bit rate of a video can be substantially decreased (by a factor of over 1000) without destroying the BVP signal entirely. We found an approximately linear relationship between bit rate and BVP signal-to-noise ratio up to a CRF of 36. A faster decrease in SNR was observed for videos of the task involving larger head motions and the x265 algorithm appeared to work more effectively in these cases.) <|cite_end|>. As a result, supervised models often perform significantly worse on cross-dataset evaluation than within-dataset evaluation.
\subsection{Meta-Learning and Personalized Physiological Sensing}
Learning from a small number of samples or observations is a hallmark of an intelligent agent. However, traditional machine learning systems do not perform well under such constraints. Meta-learning approaches tackle this problem, creating a general learner that is able to adapt to a new task with a small number of samples <|cite_start|> (Reference: Meta-Learning in Neural Networks: A Survey: The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where tasks are solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple learning episodes. This paradigm provides an opportunity to tackle many conventional challenges of deep learning, including data and computation bottlenecks, as well as generalization. This survey describes the contemporary meta-learning landscape. We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning and hyperparameter optimization. We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods today. We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning. Finally, we discuss outstanding challenges and promising areas for future research.) <|cite_end|>. Previous work in meta-learning had focused on supervised computer vision tasks <|cite_start|> (Reference: Learning Transferable Architectures for Scalable Image Recognition: Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, named "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.) 
<|cite_end|> <|cite_start|> (Reference: Prototypical Networks for Few-shot Learning: We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.) <|cite_end|> and applied these methods to image analysis <|cite_start|> (Reference: Matching Networks for One Shot Learning: Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.) <|cite_end|> <|cite_start|> (Reference: Meta-SGD: Learning to Learn Quickly for Few-Shot Learning: Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.) <|cite_end|>. In the video domain, meta-learning has been successfully applied in object and face tracking <|cite_start|> (Reference: Deep Meta Learning for Real-Time Target-Aware Visual Tracking: In this paper, we propose a novel on-line visual tracking framework based on the Siamese matching network and meta-learner network, which run at real-time speeds. 
Conventional deep convolutional feature-based discriminative visual tracking algorithms require continuous re-training of classifiers or correlation filters, which involve solving complex optimization tasks to adapt to the new appearance of a target object. To alleviate this complex process, our proposed algorithm incorporates and utilizes a meta-learner network to provide the matching network with new appearance information of the target objects by adding target-aware feature space. The parameters for the target-specific feature space are provided instantly from a single forward-pass of the meta-learner network. By eliminating the necessity of continuously solving complex optimization tasks in the course of tracking, experimental results demonstrate that our algorithm performs at a real-time speed while maintaining competitive performance among other state-of-the-art tracking algorithms.) <|cite_end|> <|cite_start|> (Reference: Meta-Tracker: Fast and Robust Online Adaptation for Visual Object Trackers: This paper improves state-of-the-art visual object trackers that use online adaptation. Our core contribution is an offline meta-learning-based method to adjust the initial deep networks used in online adaptation-based tracking. The meta learning is driven by the goal of deep networks that can quickly be adapted to robustly model a particular target in future frames. Ideally the resulting models focus on features that are useful for future frames, and avoid overfitting to background clutter, small parts of the target, or noise. By enforcing a small number of update iterations during meta-learning, the resulting networks train significantly faster. We demonstrate this approach on top of the high performance tracking approaches: tracking-by-detection based MDNet and the correlation based CREST. Experimental results on standard benchmarks, OTB2015 and VOT2016, show that our meta-learned versions of both trackers improve speed, accuracy, and robustness.) <|cite_end|>. In these tasks, the learner needs to adapt to the individual differences in the appearance of the target and then track it across frames, even if the appearance changes considerably over time in the video. Choi et al. <|cite_start|> (Reference: Deep Meta Learning for Real-Time Target-Aware Visual Tracking: In this paper, we propose a novel on-line visual tracking framework based on the Siamese matching network and meta-learner network, which run at real-time speeds. Conventional deep convolutional feature-based discriminative visual tracking algorithms require continuous re-training of classifiers or correlation filters, which involve solving complex optimization tasks to adapt to the new appearance of a target object. To alleviate this complex process, our proposed algorithm incorporates and utilizes a meta-learner network to provide the matching network with new appearance information of the target objects by adding target-aware feature space. The parameters for the target-specific feature space are provided instantly from a single forward-pass of the meta-learner network. By eliminating the necessity of continuously solving complex optimization tasks in the course of tracking, experimental results demonstrate that our algorithm performs at a real-time speed while maintaining competitive performance among other state-of-the-art tracking algorithms.) <|cite_end|> present a matching network architecture providing the meta-learner with information in the form of loss gradients obtained using the training samples.
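As a toy illustration of the few-shot classification rule popularized by prototypical networks (hypothetical embeddings; the actual method learns the embedding function end-to-end), consider the following sketch:
\begin{verbatim}
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Class prototype = mean embedding of that class's support examples.
    return np.stack([support_emb[support_labels == k].mean(axis=0)
                     for k in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to its nearest prototype (squared Euclidean distance).
    dist = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dist.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support = rng.normal(size=(10, 8))   # 2 classes x 5 shots, 8-dim embeddings
    labels = np.repeat(np.arange(2), 5)
    queries = rng.normal(size=(4, 8))
    print(classify(queries, prototypes(support, labels, 2)))
\end{verbatim}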
\begin{table}[ht]
\begin{center}
\caption{Comparison of State-of-the-Art Methods in Camera-Based Contactless Physiological Sensing}
\begin{tabular}{c c c c}
\toprule
\textbf{Method} & \textbf{On-Device} & \textbf{Adaptation} & \textbf{Reliable Pseudo Label} \\
\midrule
TS-CAN <|cite_start|> (Reference: Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement: Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic and it is widely expected that this will have a lasting impact on healthcare practices. These tools can help reduce the risk of exposing patients and medical staff to infection, make healthcare services more accessible, and allow providers to see more patients. However, objective measurement of vital signs is challenging without direct contact with a patient. We present a video-based and on-device optical cardiopulmonary vital sign measurement approach. It leverages a novel multi-task temporal shift convolutional attention network (MTTS-CAN) and enables real-time cardiovascular and respiratory measurements on mobile platforms. We evaluate our system on an Advanced RISC Machine (ARM) CPU and achieve state-of-the-art accuracy while running at over 150 frames per second which enables real-time applications. Systematic experimentation on large benchmark datasets reveals that our approach leads to substantial (20%-50%) reductions in error and generalizes well across datasets.) <|cite_end|> & \cmark & \xmark & \xmark \\
Meta-rPPG <|cite_start|> (Reference: Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner: Remote heart rate estimation is the measurement of heart rate without any physical contact with the subject and is accomplished using remote photoplethysmography (rPPG) in this work. rPPG signals are usually collected using a video camera with a limitation of being sensitive to multiple contributing factors, e.g. variation in skin tone, lighting condition and facial structure. End-to-end supervised learning approach performs well when training data is abundant, covering a distribution that doesn't deviate too much from the distribution of testing data or during deployment. To cope with the unforeseeable distributional changes during deployment, we propose a transductive meta-learner that takes unlabeled samples during testing (deployment) for a self-supervised weight adjustment (also known as transductive inference), providing fast adaptation to the distributional changes. Using this approach, we achieve state-of-the-art performance on MAHNOB-HCI and UBFC-rPPG.) <|cite_end|> & \xmark & \cmark & \xmark \\
MetaPhys <|cite_start|> (Reference: MetaPhys: Few-Shot Adaptation for Non-Contact Physiological Measurement: There are large individual differences in physiological processes, making designing personalized health sensing algorithms challenging. Existing machine learning systems struggle to generalize well to unseen subjects or contexts and can often contain problematic biases. Video-based physiological measurement is not an exception. Therefore, learning personalized or customized models from a small number of unlabeled samples is very attractive as it would allow fast calibrations to improve generalization and help correct biases. In this paper, we present a novel meta-learning approach called MetaPhys for personalized video-based cardiac measurement for contactless pulse and heart rate monitoring. Our method uses only 18-seconds of video for customization and works effectively in both supervised and unsupervised manners. We evaluate our proposed approach on two benchmark datasets and demonstrate superior performance in cross-dataset evaluation with substantial reductions (42% to 44%) in errors compared with state-of-the-art approaches. We have also demonstrated our proposed method significantly helps reduce the bias in skin type.) <|cite_end|> & \cmark & \cmark & \xmark \\
MobilePhys (Ours) & \cmark & \cmark & \cmark \\
\bottomrule
\end{tabular}
\label{table:related_work}
\end{center}
\end{table}
As deep learning methods struggle to generalize to unseen tasks and data, developing a personalized physiological sensing model from only a few unlabeled samples is a promising direction. Encouraged by its success on the tasks above, we leverage meta-learning as a way of adapting our camera-based contactless PPG sensing algorithms to new users and conditions.
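For intuition, the sketch below shows a generic MAML-style inner adaptation loop on a toy linear model: starting from meta-learned weights, a handful of (pseudo-)labeled samples from a new subject is used for a few gradient steps of personalization. This is our simplified illustration under assumed shapes, not the training code of any of the systems discussed here.
\begin{verbatim}
import numpy as np

def adapt(theta, x_support, y_support, lr=0.1, steps=5):
    """Personalize meta-learned weights with a few (pseudo-)labeled samples."""
    theta = theta.copy()
    for _ in range(steps):
        residual = x_support @ theta - y_support
        grad = 2.0 * x_support.T @ residual / len(y_support)
        theta -= lr * grad          # MAML-style inner-loop gradient step
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_meta = rng.normal(size=4)                      # meta-learned init
    true_w = theta_meta + rng.normal(scale=0.3, size=4)  # an unseen "subject"
    x = rng.normal(size=(18, 4))                         # short calibration clip
    y = x @ true_w                                       # labels, e.g. contact PPG
    theta_personal = adapt(theta_meta, x, y)
    print("before:", np.abs(theta_meta - true_w).mean())
    print("after: ", np.abs(theta_personal - true_w).mean())  # should be smaller
\end{verbatim}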
This work builds upon two specific examples of meta-learning applied to PPG measurement. Meta-rPPG <|cite_start|> (Reference: Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner: Remote heart rate estimation is the measurement of heart rate without any physical contact with the subject and is accomplished using remote photoplethysmography (rPPG) in this work. rPPG signals are usually collected using a video camera with a limitation of being sensitive to multiple contributing factors, e.g. variation in skin tone, lighting condition and facial structure. End-to-end supervised learning approach performs well when training data is abundant, covering a distribution that doesn't deviate too much from the distribution of testing data or during deployment. To cope with the unforeseeable distributional changes during deployment, we propose a transductive meta-learner that takes unlabeled samples during testing (deployment) for a self-supervised weight adjustment (also known as transductive inference), providing fast adaptation to the distributional changes. Using this approach, we achieve state-of-the-art performance on MAHNOB-HCI and UBFC-rPPG.) <|cite_end|> first introduced meta-learning for heart rate estimation. It achieves self-supervised weight adjustment by generating synthetic gradients and minimizing a prototypical distance. MetaPhys <|cite_start|> (Reference: MetaPhys: Few-Shot Adaptation for Non-Contact Physiological Measurement: There are large individual differences in physiological processes, making designing personalized health sensing algorithms challenging. Existing machine learning systems struggle to generalize well to unseen subjects or contexts and can often contain problematic biases. Video-based physiological measurement is not an exception. Therefore, learning personalized or customized models from a small number of unlabeled samples is very attractive as it would allow fast calibrations to improve generalization and help correct biases. In this paper, we present a novel meta-learning approach called MetaPhys for personalized video-based cardiac measurement for contactless pulse and heart rate monitoring. Our method uses only 18-seconds of video for customization and works effectively in both supervised and unsupervised manners. We evaluate our proposed approach on two benchmark datasets and demonstrate superior performance in cross-dataset evaluation with substantial reductions (42% to 44%) in errors compared with state-of-the-art approaches. We have also demonstrated our proposed method significantly helps reduce the bias in skin type.) <|cite_end|> was then proposed, which is based on Model-Agnostic Meta-Learning (MAML) <|cite_start|> (Reference: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks: We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune.
We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.) <|cite_end|>. It took advantage of the advanced on-device network architecture <|cite_start|> (Reference: Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement: Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic and it is widely expected that this will have a lasting impact on healthcare practices. These tools can help reduce the risk of exposing patients and medical staff to infection, make healthcare services more accessible, and allow providers to see more patients. However, objective measurement of vital signs is challenging without direct contact with a patient. We present a video-based and on-device optical cardiopulmonary vital sign measurement approach. It leverages a novel multi-task temporal shift convolutional attention network (MTTS-CAN) and enables real-time cardiovascular and respiratory measurements on mobile platforms. We evaluate our system on an Advanced RISC Machine (ARM) CPU and achieve state-of-the-art accuracy while running at over 150 frames per second which enables real-time applications. Systematic experimentation on large benchmark datasets reveals that our approach leads to substantial (20%-50%) reductions in error and generalizes well across datasets.) <|cite_end|> and probed into both supervised and unsupervised training regimes, both of which yielded satisfactory results. For supervised learning, ground-truth signal comes from medical-grade contact sensors, while in the unsupervised version, pseudo labels are used instead in the meta-learner training process. Though effective, these prior works rely much on synchronized video and ground truth obtained from medical-grade devices. However, it is difficult and laborious to collect a large-scale physiological dataset. In this work, we propose a mobile sensing system that leverages both front and rear cameras to generate contact PPG labels and personalize a camera-based contactless physiological system to address this issue. We summarize the difference between popular recently published neural methods in camera-based contactless physiological sensing in Table \ref{table:related_work}. Since our goal is to develop an on-device mobile personalization system for camera-based contactless physiological sensing, \projectname has clear benefits over the state-of-the-art methods. It is also the only system that can help generate reliable pseudo labels under various contexts (e.g., motion, lighting, skin types). <|paper_end|> | [
"<|reference_start|> A medical mirror for non-contact health monitoring: Digital medical devices promise to transform the future of medicine because of their ability to produce exquisitely detailed individual physiological data. As ordinary people start to have access and control over their own physiological data, they can play a more active role in the management of their health. This revolution must take place in our everyday lives, not just in the doctor's office or research lab. However, current techniques for physiological monitoring typically require users to strap on bulky sensors, chest straps or sticky electrodes. This discourages regular use because the sensors can be uncomfortable or encumbering. In this work, we propose a new mirror interface for real-time, contact-free measurements of heart rate without the need for external sensors. Users can have the experience of remote health monitoring by simply looking into the Medical Mirror. <|reference_end|>",
"<|reference_start|> Robust pulse rate from chrominance-based rPPG: Remote photoplethysmography (rPPG) enables contactless monitoring of the blood volume pulse using a regular camera. Recent research focused on improved motion robustness, but the proposed blind source separation techniques (BSS) in RGB color space show limited success. We present an analysis of the motion problem, from which far superior chrominance-based methods emerge. For a population of 117 stationary subjects, we show our methods to perform in 92% good agreement (±1.96σ) with contact PPG, with RMSE and standard deviation both a factor of 2 better than BSS-based methods. In a fitness setting using a simple spectral peak detector, the obtained pulse-rate for modest motion (bike) improves from 79% to 98% correct, and for vigorous motion (stepping) from less than 11% to more than 48% correct. We expect the greatly improved robustness to considerably widen the application scope of the technology. <|reference_end|>",
"<|reference_start|> Robust pulse rate from chrominance-based rPPG: Remote photoplethysmography (rPPG) enables contactless monitoring of the blood volume pulse using a regular camera. Recent research focused on improved motion robustness, but the proposed blind source separation techniques (BSS) in RGB color space show limited success. We present an analysis of the motion problem, from which far superior chrominance-based methods emerge. For a population of 117 stationary subjects, we show our methods to perform in 92% good agreement (±1.96σ) with contact PPG, with RMSE and standard deviation both a factor of 2 better than BSS-based methods. In a fitness setting using a simple spectral peak detector, the obtained pulse-rate for modest motion (bike) improves from 79% to 98% correct, and for vigorous motion (stepping) from less than 11% to more than 48% correct. We expect the greatly improved robustness to considerably widen the application scope of the technology. <|reference_end|>",
"<|reference_start|> Heart rate estimation from facial videos using a spatiotemporal representation with convolutional neural networks: Remote photoplethysmography (rPPG) is a kind of noncontact technique to measure heart rate (HR) from facial videos. As the demand for long-term health monitoring grows, rPPG attracts much attention from researchers. However, the performance of conventional rPPG methods is easily degenerated due to noise interference. Recently, some deep learning-based rPPG methods have been introduced and they revealed good performance against noise. In this article, we propose a new rPPG method with convolutional neural networks (CNNs) to build a mapping between a spatiotemporal HR feature image to its corresponding HR value. The feature map is constructed in a time-delayed way with noise-contaminated pulse signals extracted from existing rPPG methods. The CNN model is trained using transfer learning where images built from synthetic rPPG signals are taken to train the model first in order to generate initials for the practical one. The synthetic rPPG signals are interpolated from blood volume pulses or electrocardiograms through a modified Akima cubic Hermite interpolation. The proposed method is tested in both within-database and cross-database configurations on public databases. The results demonstrate that our method achieves overall the best performance compared to some other typical rPPG methods. The mean absolute error reaches 5.98 beats per minute and the mean error rate percentage is 7.97% in the cross-database testing on MAHNOB-HCI data set. Besides, some key factors that affect the performance of our method are also discussed which indicates potential ways for further improvements. <|reference_end|>"
] | [
9,
15,
21,
30
] | {"<|cite_1|>": "ss-1134257", "<|cite_2|>": "ss-1353913", "<|cite_3|>": "ss-1353913", "<|multi_cite_4_1|>": "ss-1795098", "<|multi_cite_4_2|>": "ss-1795099", "<|cite_5|>": "ss-784978", "<|multi_cite_32_1|>": "arxiv-159205", "<|multi_cite_32_2|>": "arxiv-269863", "<|cite_6|>": "arxiv-258843", "<|cite_7|>": "ss-2484065", "<|cite_8|>": "ss-1976665", "<|cite_9|>": "ss-1968274", "<|cite_10|>": "ss-2492626", "<|cite_11|>": "ss-2555145", "<|multi_cite_12_1|>": "ss-1353913", "<|multi_cite_12_2|>": "ss-1522931", "<|multi_cite_12_3|>": "ss-1522929", "<|cite_13|>": "ss-905400", "<|multi_cite_14_1|>": "ss-1347310", "<|multi_cite_14_2|>": "ss-684158", "<|multi_cite_15_1|>": "ss-684161", "<|multi_cite_15_2|>": "ss-1522931", "<|cite_16|>": "ss-980871", "<|multi_cite_17_1|>": "ss-684159", "<|multi_cite_17_2|>": "ss-1353913", "<|multi_cite_18_1|>": "ss-1522931", "<|multi_cite_18_2|>": "ss-1522929", "<|cite_19|>": "arxiv-159205", "<|multi_cite_33_1|>": "ss-1110375", "<|multi_cite_33_2|>": "arxiv-203027", "<|multi_cite_33_3|>": "ss-766133", "<|multi_cite_33_4|>": "arxiv-230723", "<|cite_20|>": "arxiv-230723", "<|cite_21|>": "ss-766133", "<|cite_22|>": "arxiv-269863", "<|cite_23|>": "ss-1194267", "<|cite_34|>": "arxiv-258843", "<|multi_cite_35_1|>": "arxiv-129997", "<|multi_cite_35_2|>": "arxiv-119156", "<|multi_cite_36_1|>": "arxiv-100002", "<|multi_cite_36_2|>": "arxiv-130763", "<|multi_cite_37_1|>": "arxiv-144006", "<|multi_cite_37_2|>": "arxiv-145038", "<|cite_24|>": "arxiv-144006", "<|cite_25|>": "arxiv-269863", "<|cite_26|>": "arxiv-278243", "<|cite_27|>": "arxiv-293842", "<|cite_28|>": "arxiv-278243", "<|cite_29|>": "arxiv-293842", "<|cite_30|>": "arxiv-118717", "<|cite_31|>": "arxiv-269863"} |
1811.00625 | <|paper_start|> Title: Incorporating Structured Commonsense Knowledge in Story Completion
Abstract: Incorporating Structured Commonsense Knowledge in Story Completion: The ability to select an appropriate story ending is the first step towards perfect narrative comprehension. Story ending prediction requires not only the explicit clues within the context, but also the implicit knowledge (such as commonsense) to construct a reasonable and consistent story. However, most previous approaches do not explicitly use background commonsense knowledge. We present a neural story ending selection model that integrates three types of information: narrative sequence, sentiment evolution and commonsense knowledge. Experiments show that our model outperforms state-of-the-art approaches on a public dataset, the ROCStory Cloze Task, and the performance gain from adding commonsense knowledge is significant.
Introduction
Narrative is a fundamental form of representation in human language and culture. Stories connect individuals and deliver experience, emotions and knowledge. Narrative comprehension has attracted long-standing interests in natural language processing (NLP) <|cite_start|> (Reference: Episodic Logic Meets Little Red Riding Hood: A Comprehensive, Natural Representation for Language Un: We describe a comprehensive framework for narrative understanding based on Episodic Logic (EL). This situational logic was developed and implemented as a semantic representation and commonsense knowledge representation that would serve the full range of interpretive and inferential needs of general NLU. The most distinctive feature of EL is its natural language-like expressiveness. It allows for generalized quantifiers, lambda abstraction, sentence and predicate modifiers, sentence and predicate reification, intensional predicates (corresponding to wanting, believing, making, etc.), unreliable generalizations, and perhaps most importantly, explicit situational variables (denoting episodes, events, states of affairs, etc.) linked to arbitrary formulas that describe them. These allow episodes to be explicitly related in terms of part-whole, temporal and causal relations. Episodic logical form is easily computed from surface syntax and lends itself to effective inference. The Centrality of Representation in NLP Language understanding is an organic phenomenon, and the various stages or facets of the language understanding process — parsing, computing a representation, making inferences, etc. — should not be considered in isolation from each other. For instance, both during the computation of utterance meaning and upon its completion, a great deal of “spontaneous,” input-driven inferencing is presumed to occur, working out plausible interpretations and consequences based on the discourse interpreted so far, and on meaning postulates and world knowledge. This includes computing unique referents for referring expressions, predictions, and explanations which ultimately give a causally coherent elaboration of what has been said. Therefore, an essential requirement is that the representation support such inferences and the knowledge behind them. It should do so in a way that is both intuitively transparent and analyzable in terms of a formal notion of interpretation. The formal interpretability of the representation allows us to examine in detail whether it captures meanings as intended, and whether proposed inference rules are semantically justifiable. These considerations point to the centrality of the issue of representation. The ease of mapping from syntax to a semantic representation, “deindexing” (amalgamating the context information into the representation of an utterance so that the resulting representation becomes context-independent), and performing inferences all depend on the representation used. A basic methodological assumption of our work is that these multiple demands on the representation are best met by using a highly expressive logic closely related to NL itself. The possibility of handling tense, causes, facts, modifiers, propositions, beliefs, etc., simply and directly depends on the expressiveness of the representation. To see the importance of this issue, let us consider the following excerpt from the story of Little Red Riding Hood . In our later discussion of test scenarios, the wording is slightly different, as we were rather haphazardly using several children’s books. 
One source was (Perrault, 1961).) <|cite_end|>, and is widely applicable to areas such as content creation. Enabling machines to understand narrative is also an important first step towards real intelligence. Previous studies on narrative comprehension include character roles identification <|cite_start|> (Reference: Narrative hermeneutic circle: Improving character role identification from natural language text via feedback loops: While most natural language understanding systems rely on a pipeline-based architecture, certain human text interpretation methods are based on a cyclic process between the whole text and its parts: the hermeneutic circle. In the task of automatically identifying characters and their narrative roles, we propose a feedback-loop-based approach where the output of later modules of the pipeline is fed back to earlier ones. We analyze this approach using a corpus of 21 Russian folktales. Initial results show that feeding back high-level narrative information improves the performance of some NLP tasks.) <|cite_end|>, narratives schema construction <|cite_start|> (Reference: {Unsupervised Learning of Narrative Schemas and Their Participants: We describe an unsupervised system for learning narrative schemas, coherent sequences or sets of events (arrested(POLICE, SUSPECT), convicted(JUDGE, SUSPECT)) whose arguments are filled with participant semantic roles defined over words (Judge = {judge, jury, court}, Police = {police, agent, authorities}). Unlike most previous work in event structure or semantic role learning, our system does not use supervised techniques, hand-built knowledge, or predefined classes of events or roles. Our unsupervised learning algorithm uses coreferring arguments in chains of verbs to learn both rich narrative event structure and argument roles. By jointly addressing both tasks, we improve on previous results in narrative/frame learning and induce rich frame-specific semantic roles.) <|cite_end|>, and plot pattern identification <|cite_start|> (Reference: Macroanalysis: Digital Methods and Literary history: In this volume, Matthew L. Jockers introduces readers to large-scale literary computing and the revolutionary potential of macroanalysis--a new approach to the study of the literary record designed for probing the digital-textual world as it exists today, in digital form and in large quantities. Using computational analysis to retrieve key words, phrases, and linguistic patterns across thousands of texts in digital libraries, researchers can draw conclusions based on quantifiable evidence regarding how literary trends are employed over time, across periods, within regions, or within demographic groups, as well as how cultural, historical, and societal linkages may bind individual authors, texts, and genres into an aggregate literary culture. Moving beyond the limitations of literary interpretation based on the "close-reading" of individual works, Jockers describes how this new method of studying large collections of digital material can help us to better understand and contextualize the individual works within those collections.) <|cite_end|>. However, their main focus is on analyzing the stories themselves. In contrast, we concentrate on training machines to predict the end of the stories. Story completion tasks rely not only on the logic of the story itself, but also requires implicit \emph{commonsense knowledge} outside the story. 
To understand stories, humans can use information from both the story itself and other implicit sources such as commonsense knowledge and normative social behaviors <|cite_start|> (Reference: Story Understanding Through Multi-representation Model Construction: We present an implemented model of story understanding and apply it to the understanding of a children's story. We argue that understanding a story consists of building multi-representation models of the story and that story models are efficiently constructed using a satisfiability solver. We present a computer program that contains multiple representations of commonsense knowledge, takes a narrative as input, transforms the narrative and representations of commonsense knowledge into a satisfiability problem, runs a satisfiability solver, and produces models of the story as output. The narrative, models, and representations are expressed in the language of Shanahan's event calculus.) <|cite_end|>. In this paper, we propose to imitate this behavior by incorporating structured commonsense knowledge to aid story ending prediction.
Recently, <|cite_start|> (Reference: {LSDSem} 2017 shared task: The story cloze test: The LSDSem’17 shared task is the Story Cloze Test, a new evaluation for story understanding and script learning. This test provides a system with a four-sentence story and two possible endings, and the system must choose the correct ending to the story. Successful narrative understanding (getting closer to human performance of 100%) requires systems to link various levels of semantics to commonsense knowledge. A total of eight systems participated in the shared task, with a variety of approaches including.) <|cite_end|> introduced a ROCStories dataset as a benchmark for evaluating models' ability to understand the narrative structures of a story, where the model is asked to select the correct ending from two candidates for a given story. To solve this task, both traditional machine learning approaches <|cite_start|> (Reference: Story cloze task: Uw nlp system: This paper describes University of Washington NLP’s submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task—the Story Cloze Task. Our system is a linear classifier with a variety of features, including both the scores of a neural language model and style features. We report 75.2% accuracy on the task. A further discussion of our results can be found in Schwartz et al. (2017).) <|cite_end|> and neural network models <|cite_start|> (Reference: Pay attention to the ending:strong neural baselines for the roc story cloze task: We consider the ROC story cloze task (Mostafazadeh et al., 2016) and present several findings. We develop a model that uses hierarchical recurrent networks with attention to encode the sentences in the story and score candidate endings. By discarding the large training set and only training on the validation set, we achieve an accuracy of 74.7%. Even when we discard the story plots (sentences before the ending) and only train to choose the better of two endings, we can still reach 72.5%. We then analyze this “ending-only” task setting. We estimate human accuracy to be 78% and find several types of clues that lead to this high accuracy, including those related to sentiment, negation, and general ending likelihood regardless of the story context.) <|cite_end|> have been used. Some works also exploit information such as sentiment and topic words <|cite_start|> (Reference: Story Comprehension for Predicting What Happens Next: Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense. In this paper, we present a story comprehension model that explores three distinct semantic aspects: (i) the sequence of events described in the story, (ii) its emotional trajectory, and (iii) its plot consistency. We judge the model’s understanding of real-world stories by inquiring if, like humans, it can develop an expectation of what will happen next in a given story. Specifically, we use it to predict the correct ending of a given short story from possible alternatives. The model uses a hidden variable to weigh the semantic aspects in the context of the story. Our experiments demonstrate the potential of our approach to characterize these semantic aspects, and the strength of the hidden variable based approach. The model outperforms the state-of-the-art approaches and achieves best results on a publicly available dataset.) 
<|cite_end|> and event frames <|cite_start|> (Reference: A multi-attention based neural network with external knowledge for story ending predicting task: Enabling a mechanism to understand a temporal story and predict its ending is an interesting issue that has attracted considerable attention, as in case of the ROC Story Cloze Task (SCT). In this paper, we develop a multi-attention-based neural network (MANN) with well-designed optimizations, like Highway Network, and concatenated features with embedding representations into the hierarchical neural network model. Considering the particulars of the specific task, we thoughtfully extend MANN with external knowledge resources, exceeding state-of-the-art results obviously. Furthermore, we develop a thorough understanding of our model through a careful hand analysis on a subset of the stories. We identify what traits of MANN contribute to its outperformance and how external knowledge is obtained in such an ending prediction task.) <|cite_end|>. Recently, there has been work <|cite_start|> (Reference: Improving language understanding by
generative pre-training: Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).) <|cite_end|> that leverages large unlabeled corpus, like the BooksCorpus <|cite_start|> (Reference: Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books: Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.) <|cite_end|> dataset, to improve the performance. However, none of them explicitly uses structured commonsense knowledge, which humans would naturally incorporate to improve model performance.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Example.png}
\caption{An example story}\label{example_story}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{KG.png}
\caption{Clues in ConceptNet}\label{KG}
\end{subfigure}
\caption{(a) shows an example story from the ROCStories dataset; words in color are key-words. (b) shows the key-words and their relations in the ConceptNet knowledge graph.}\label{Example}
\end{figure}
\begin{figure*}[tp]
\centering
\includegraphics[width=0.7\textwidth]{Model.png}
\caption{Our proposed model architecture. The inputs $S_1$ through $S_4$ denote the story body, and $e_i$ ($i=1,2$) denote the two candidate endings. The bottom-left component encodes sentiment evolution information (green), the top-left component models the narrative sequence (yellow), and the top-right component integrates commonsense knowledge (blue). The combination gate in the bottom-right integrates all three types of information and outputs the probability that each candidate ending is correct.}\label{Models}
\end{figure*}
Figure \ref{Example}(a) shows a typical example in ROCStories dataset: a story about Dan and his parents. The blue words are key-words in the body of the story, and the red word is the key-word in the correct story ending. Figure \ref{Example}(b) shows the (implicit) relations among these key-words, which are obtained as a subgraph from ConceptNet <|cite_start|> (Reference: Conceptnet 5.5: An Open Multilingual Graph of General Knowledge: Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.) <|cite_end|>, a commonsense knowledge base. By incorporating such structured external commonsense knowledge, we are able to discover strong associations between these keywords and correctly predict the story ending. Note that these associations are not available from the story itself.
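The key-word association lookup sketched in Figure \ref{Example}(b) can be approximated with a handful of (head, relation, tail) triples; in practice these triples would come from a ConceptNet dump or its public API. The snippet below (toy triples and hypothetical key-words, chosen only for illustration) counts commonsense links between story-body key-words and candidate-ending key-words:
\begin{verbatim}
# Toy commonsense triples; in practice they would be read from a ConceptNet
# dump or queried from its public API (omitted here).
EDGES = [
    ("misbehave", "Causes", "punish"),
    ("punish", "MotivatedByGoal", "teach"),
    ("behave", "HasSubevent", "reward"),
]

def score_ending(context_keywords, ending_keywords, edges=EDGES):
    """Count commonsense links between story-body and ending key-words."""
    ctx, end = set(context_keywords), set(ending_keywords)
    hits = [(h, r, t) for h, r, t in edges
            if (h in ctx and t in end) or (h in end and t in ctx)]
    return len(hits), hits

if __name__ == "__main__":
    body = ["misbehave", "punish"]       # hypothetical key-words from the body
    ending_a, ending_b = ["teach"], ["celebrate"]
    print(score_ending(body, ending_a))  # (1, [('punish', 'MotivatedByGoal', 'teach')])
    print(score_ending(body, ending_b))  # (0, [])
\end{verbatim}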
To solve the story completion task, we propose a neural network model that integrates three types of information: (i) narrative sequence, (ii) sentiment evolution, and (iii) commonsense knowledge. The clues in the narrative chain are captured by a transformer decoder constructed from a pretrained language model. The sentiment prediction is obtained with an LSTM model. Additionally, the commonsense knowledge is extracted from an existing structured knowledge base, ConceptNet. Finally, we use a combination gate to integrate all three sources of information and train the model in an end-to-end manner. Experiments demonstrate the improved performance of our model on the task.
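As a minimal sketch of how such a combination gate might weigh the three information sources when scoring a candidate ending (the feature dimensions and gating form are our assumptions, not the exact architecture), consider:
\begin{verbatim}
import torch
import torch.nn as nn

class CombinationGate(nn.Module):
    """Weigh narrative, sentiment and commonsense evidence for one ending."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(3 * dim, 3)   # one mixing weight per source
        self.score = nn.Linear(dim, 1)      # fused feature -> scalar score

    def forward(self, narrative, sentiment, commonsense):
        # Each input: (batch, dim) features for a single candidate ending.
        g = torch.softmax(
            self.gate(torch.cat([narrative, sentiment, commonsense], dim=-1)),
            dim=-1)
        fused = (g[:, 0:1] * narrative + g[:, 1:2] * sentiment
                 + g[:, 2:3] * commonsense)
        return self.score(fused).squeeze(-1)

if __name__ == "__main__":
    gate = CombinationGate(dim=16)
    n1, s1, c1 = (torch.randn(2, 16) for _ in range(3))  # features of ending 1
    n2, s2, c2 = (torch.randn(2, 16) for _ in range(3))  # features of ending 2
    scores = torch.stack([gate(n1, s1, c1), gate(n2, s2, c2)], dim=-1)
    print(scores.argmax(dim=-1))  # predicted ending index per story
\end{verbatim}
In this sketch each ending receives its own narrative, sentiment, and commonsense features, and the ending with the higher gated score is selected.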
Related Work
Our work on story completion is closely related to several research areas, such as reading comprehension, sentiment analysis, and commonsense knowledge integration, which are briefly reviewed below.
\textbf{Reading Comprehension} is the ability to process text, understand its meaning, and to integrate it with what the readers already know. It has been an important field in NLP for a long time. The SQuAD dataset <|cite_start|> (Reference: Know What You Don't Know: Unanswerable Questions for SQuAD: Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuAD 2.0, the latest version of the Stanford Question Answering Dataset (SQuAD). SQuAD 2.0 combines existing SQuAD data with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuAD 2.0 is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD 1.1 achieves only 66% F1 on SQuAD 2.0.) <|cite_end|> presents a task to locate the correct answer to a question in a context document and recognizes unanswerable questions. The RACE dataset <|cite_start|> (Reference: RACE: Large-scale ReAding Comprehension Dataset From Examinations: We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines.) <|cite_end|>, which is constructed from Chinese Students English Examination, introduces another task that requires not only retrieval but also reasoning. Usually they are solved by match-based model like QANET <|cite_start|> (Reference: QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension: Current end-to-end machine reading and question answering (Q\&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q\&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. 
The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.) <|cite_end|>, hierarchical attention model like HAF <|cite_start|> (Reference: Hierarchical attention flow for multiple-choice reading comprehension.: In this paper, we focus on multiple-choice reading comprehension which aims to answer a question given a passage and multiple candidate options. We present the hierarchical attention flow to adequately leverage candidate options to model the interactions among passages, questions and candidate options. We observe that leveraging candidate options to boost evidence gathering from the passages play a vital role in this task, which is ignored in previous works. In addition, we explicitly model the option correlations with attention mechanism to obtain better option representations, which are further fed into a bilinear layer to obtain the ranking score for each option. On a large-scale multiple-choice reading comprehension dataset (i.e. the RACE dataset), the proposed model outperforms two previous neural network baselines on both RACE-M and RACE-H subsets and yields the state-of-the-art overall results.) <|cite_end|>, and dynamic fusion based model like DFN <|cite_start|> (Reference: Dynamic Fusion Networks for Machine Reading Comprehension: This paper presents a novel neural model - Dynamic Fusion Network (DFN), for machine reading comprehension (MRC). DFNs differ from most state-of-the-art models in their use of a dynamic multi-strategy attention process, in which passages, questions and answer candidates are jointly fused into attention vectors, along with a dynamic multi-step reasoning module for generating answers. With the use of reinforcement learning, for each input sample that consists of a question, a passage and a list of candidate answers, an instance of DFN with a sample-specific network architecture can be dynamically constructed by determining what attention strategy to apply and how many reasoning steps to take. Experiments show that DFNs achieve the best result reported on RACE, a challenging MRC dataset that contains real human reading questions in a wide variety of types. A detailed empirical analysis also demonstrates that DFNs can produce attention vectors that summarize information from questions, passages and answer candidates more effectively than other popular MRC models.) <|cite_end|>. Also there exists more relevant research on story comprehension such as event understanding of narrative plots <|cite_start|> (Reference: {Unsupervised Learning of Narrative Schemas and Their Participants: We describe an unsupervised system for learning narrative schemas, coherent sequences or sets of events (arrested(POLICE, SUSPECT), convicted(JUDGE, SUSPECT)) whose arguments are filled with participant semantic roles defined over words (Judge = {judge, jury, court}, Police = {police, agent, authorities}). Unlike most previous work in event structure or semantic role learning, our system does not use supervised techniques, hand-built knowledge, or predefined classes of events or roles. Our unsupervised learning algorithm uses coreferring arguments in chains of verbs to learn both rich narrative event structure and argument roles. 
By jointly addressing both tasks, we improve on previous results in narrative/frame learning and induce rich frame-specific semantic roles.) <|cite_end|>, character personas <|cite_start|> (Reference: Narrative hermeneutic circle: Improving character role identification from natural language text via feedback loops: While most natural language understanding systems rely on a pipeline-based architecture, certain human text interpretation methods are based on a cyclic process between the whole text and its parts: the hermeneutic circle. In the task of automatically identifying characters and their narrative roles, we propose a feedback-loop-based approach where the output of later modules of the pipeline is fed back to earlier ones. We analyze this approach using a corpus of 21 Russian folktales. Initial results show that feeding back high-level narrative information improves the performance of some NLP tasks.) <|cite_end|> and inter-character relationships <|cite_start|> (Reference: Feuding families and former friends: Unsupervised learning for dynamic fictional relationships: Understanding how a fictional relationship between two characters changes over time (e.g., from best friends to sworn enemies) is a key challenge in digital humanities scholarship. We present a novel unsupervised neural network for this task that incorporates dictionary learning to generate interpretable, accurate relationship trajectories. While previous work on characterizing literary relationships relies on plot summaries annotated with predefined labels, our model jointly learns a set of global relationship descriptors as well as a trajectory over these descriptors for each relationship in a dataset of raw text from novels. We find that our model learns descriptors of events (e.g., marriage or murder) as well as interpersonal states (love, sadness). Our model outperforms topic model baselines on two crowdsourced tasks, and we also find interesting correlations to annotations in an existing dataset.) <|cite_end|>.
\textbf{Sentiment Analysis} aims to determine the attitude of a speaker (or a writer) with respect to some topic, the overall contextual polarity, or emotional reaction to a document, interaction or event. There have been rich studies on this field, such as learning word vectors for sentiment analysis <|cite_start|> (Reference: {Learning Word Vectors for Sentiment Analysis: Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area.) <|cite_end|> and recognizing contextual polarity in a phrase-level <|cite_start|> (Reference: Recognizing Contextual Polarity in Phrase-level Sentiment Analysis: This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.) <|cite_end|>. Recently, researchers studied large-scale sentiment analysis across news and blogs <|cite_start|> (Reference: Large-Scale Sentiment Analysis for News and Blogs (system demonstration): Newspapers and blogs express opinion of news entities (people, places, things) while reporting on recent events. We present a system that assigns scores indicating positive or negative opinion to each distinct entity in the text corpus. Our system consists of a sentiment identication phase, which associates expressed opinions with each relevant entity, and a sentiment aggregation and scoring phase, which scores each entity relative to others in the same class. Finally, we evaluate the signicance of our scoring techniques over large corpus of news and blogs.) <|cite_end|>, and also studied opinion mining on twitter <|cite_start|> (Reference: {Twitter as a corpus for sentiment analysis and opinion mining: Microblogging today has become a very popular communication tool among Internet users. Millions of users share opinions on different aspects of life everyday. Therefore microblogging web-sites are rich sources of data for opinion mining and sentiment analysis. Because microblogging has appeared relatively recently, there are a few research works that were devoted to this topic. In our paper, we focus on using Twitter, the most popular microblogging platform, for the task of sentiment analysis. We show how to automatically collect a corpus for sentiment analysis and opinion mining purposes. We perform linguistic analysis of the collected corpus and explain discovered phenomena. 
Using the corpus, we build a sentiment classifier, that is able to determine positive, negative and neutral sentiments for a document. Experimental evaluations show that our proposed techniques are efficient and performs better than previously proposed methods. In our research, we worked with English, however, the proposed technique can be used with any other language.) <|cite_end|>. Additionally, there have been studies focused on joint learning for better performance, such as detecting sentiment and topic simultaneously from text <|cite_start|> (Reference: Joint Sentiment/Topic Model for Sentiment Analysis: Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.) <|cite_end|>.
\textbf{Commonsense Knowledge Integration} If machines receive information from a commonsense knowledge base, they become more powerful for many tasks like reasoning <|cite_start|> (Reference: Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects: Human vision greatly benefits from the information about sizes of objects. The role of size in several visual reasoning tasks has been thoroughly explored in human perception and cognition. However, the impact of the information about sizes of objects is yet to be determined in AI. We postulate that this is mainly attributed to the lack of a comprehensive repository of size information. In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from web. By maximizing the joint likelihood of textual and visual observations, our method learns reliable relative size estimates, with no explicit human supervision. We introduce the relative size dataset and show that our method outperforms competitive textual and visual baselines in reasoning about size comparisons.) <|cite_end|>, dialogue generation <|cite_start|> (Reference: Knowledge Diffusion for Neural Dialogue Generation: End-to-end neural dialogue generation has shown promising results recently, but it does not employ knowledge to guide the generation and hence tends to generate short, general, and meaningless responses. In this paper, we propose a neural knowledge diffusion (NKD) model to introduce knowledge into dialogue generation. This method can not only match the relevant facts for the input utterance but diffuse them to similar entities. With the help of facts matching and entity diffusion, the neural dialogue generation is augmented with the ability of convergent and divergent thinking over the knowledge base. Our empirical study on a real-world dataset prove that our model is capable of generating meaningful, diverse and natural responses for both factoid-questions and knowledge grounded chi-chats. The experiment results also show that our model outperforms competitive baseline models significantly.) <|cite_end|> and cloze style reading comprehension <|cite_start|> (Reference: Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge.: We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting. Instead of relying only on document-to-question interaction or discrete features as in prior work, our model attends to relevant external knowledge and combines this knowledge with the context representation before inferring the answer. This allows the model to attract and imply knowledge from an external knowledge source that is not explicitly stated in the text, but that is relevant for inferring the answer. Our model improves results over a very strong baseline on a hard Common Nouns dataset, making it a strong competitor of much more complex models. By including knowledge explicitly, our model can also provide evidence about the background knowledge used in the RC process.) <|cite_end|>. Related works include <|cite_start|> (Reference: Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects: Human vision greatly benefits from the information about sizes of objects. The role of size in several visual reasoning tasks has been thoroughly explored in human perception and cognition. 
However, the impact of the information about sizes of objects is yet to be determined in AI. We postulate that this is mainly attributed to the lack of a comprehensive repository of size information. In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from web. By maximizing the joint likelihood of textual and visual observations, our method learns reliable relative size estimates, with no explicit human supervision. We introduce the relative size dataset and show that our method outperforms competitive textual and visual baselines in reasoning about size comparisons.) <|cite_end|>, which builds a knowledge graph and uses it to deduce the size of objects <|cite_start|> (Reference: Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects: Human vision greatly benefits from the information about sizes of objects. The role of size in several visual reasoning tasks has been thoroughly explored in human perception and cognition. However, the impact of the information about sizes of objects is yet to be determined in AI. We postulate that this is mainly attributed to the lack of a comprehensive repository of size information. In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from web. By maximizing the joint likelihood of textual and visual observations, our method learns reliable relative size estimates, with no explicit human supervision. We introduce the relative size dataset and show that our method outperforms competitive textual and visual baselines in reasoning about size comparisons.) <|cite_end|>, in addiiton to <|cite_start|> (Reference: Flexible End-to-End Dialogue System for Knowledge Grounded Conversation: In knowledge grounded conversation, domain knowledge plays an important role in a special domain such as Music. The response of knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either have at most one entity in a response or cannot deal with out-of-vocabulary entities. We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses based on input message and related knowledge base (KB). To generate arbitrary number of answer entities even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response, according to different local context. It does not rely on the representations of entities, enabling our model deal with out-of-vocabulary entities. We collect a human-human conversation data (ConversMusic) with knowledge annotations. The proposed method is evaluated on CoversMusic and a public question answering dataset. Our proposed GenDS system outperforms baseline methods significantly in terms of the BLEU, entity accuracy, entity recall and human evaluation. Moreover,the experiments also demonstrate that GenDS works better even on small datasets.) <|cite_end|>, in which a music knowledge graph is built for a single round dialogue system. There are several ways to incorporate external knowledge base (e.g., ConceptNet). 
For example, <|cite_start|> (Reference: ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge: This paper describes Luminoso's participation in SemEval 2017 Task 2, "Multilingual and Cross-lingual Semantic Word Similarity", with a system based on ConceptNet. ConceptNet is an open, multilingual knowledge graph that focuses on general knowledge that relates the meanings of words and phrases. Our submission to SemEval was an update of previous work that builds high-quality, multilingual word embeddings from a combination of ConceptNet and distributional semantics. Our system took first place in both subtasks. It ranked first in 4 out of 5 of the separate languages, and also ranked first in all 10 of the cross-lingual language pairs.) <|cite_end|> uses a knowledge based word embedding, <|cite_start|> (Reference: Augmenting End-to-End Dialog Systems with Commonsense Knowledge: Building dialog agents that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human responses in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialog. Our model represents the first attempt to integrating a large commonsense knowledge base into end-to-end conversational models. In the retrieval-based scenario, we propose the Tri-LSTM model to jointly take into account message and commonsense for selecting an appropriate response. Our experiments suggest that the knowledge-augmented models are superior to their knowledge-free counterparts in automatic evaluation.) <|cite_end|> employs tri-LSTMs to encode the knowledge triple, and <|cite_start|> (Reference: Commonsense Knowledge Aware Conversation Generation with Graph Attention: Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs. Experiments show that the proposed model can generate more appropriate and informative responses than state-of-the-art baselines.) <|cite_end|> and <|cite_start|> (Reference: Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge.: We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting. 
Instead of relying only on document-to-question interaction or discrete features as in prior work, our model attends to relevant external knowledge and combines this knowledge with the context representation before inferring the answer. This allows the model to attract and imply knowledge from an external knowledge source that is not explicitly stated in the text, but that is relevant for inferring the answer. Our model improves results over a very strong baseline on a hard Common Nouns dataset, making it a strong competitor of much more complex models. By including knowledge explicitly, our model can also provide evidence about the background knowledge used in the RC process.) <|cite_end|> apply graph attention embedding to encode sub-graphs from a knowledge base. However, their work does not involve narrative completion.
\textbf{Story Completion}
Traditional machine learning methods have been used to solve ROCStory Cloze Task such as <|cite_start|> (Reference: Story cloze task: Uw nlp system: This paper describes University of Washington NLP’s submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task—the Story Cloze Task. Our system is a linear classifier with a variety of features, including both the scores of a neural language model and style features. We report 75.2% accuracy on the task. A further discussion of our results can be found in Schwartz et al. (2017).) <|cite_end|>. To improve the performance, features like topic words and sentiment score are also extracted and incorporated <|cite_start|> (Reference: Story Comprehension for Predicting What Happens Next: Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense. In this paper, we present a story comprehension model that explores three distinct semantic aspects: (i) the sequence of events described in the story, (ii) its emotional trajectory, and (iii) its plot consistency. We judge the model’s understanding of real-world stories by inquiring if, like humans, it can develop an expectation of what will happen next in a given story. Specifically, we use it to predict the correct ending of a given short story from possible alternatives. The model uses a hidden variable to weigh the semantic aspects in the context of the story. Our experiments demonstrate the potential of our approach to characterize these semantic aspects, and the strength of the hidden variable based approach. The model outperforms the state-of-the-art approaches and achieves best results on a publicly available dataset.) <|cite_end|>. Neural network models have also been applied to this task (e.g., <|cite_start|> (Reference: {Learning Deep Structured Semantic Models for Web Search using Clickthrough Data: Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.) <|cite_end|> and <|cite_start|> (Reference: Pay attention to the ending:strong neural baselines for the roc story cloze task: We consider the ROC story cloze task (Mostafazadeh et al., 2016) and present several findings. We develop a model that uses hierarchical recurrent networks with attention to encode the sentences in the story and score candidate endings. 
By discarding the large training set and only training on the validation set, we achieve an accuracy of 74.7%. Even when we discard the story plots (sentences before the ending) and only train to choose the better of two endings, we can still reach 72.5%. We then analyze this “ending-only” task setting. We estimate human accuracy to be 78% and find several types of clues that lead to this high accuracy, including those related to sentiment, negation, and general ending likelihood regardless of the story context.) <|cite_end|>), which use LSTM to encode different parts of the story and calculate their similarities. In addition, <|cite_start|> (Reference: A multi-attention based neural network with external knowledge for story ending predicting task: Enabling a mechanism to understand a temporal story and predict its ending is an interesting issue that has attracted considerable attention, as in case of the ROC Story Cloze Task (SCT). In this paper, we develop a multi-attention-based neural network (MANN) with well-designed optimizations, like Highway Network, and concatenated features with embedding representations into the hierarchical neural network model. Considering the particulars of the specific task, we thoughtfully extend MANN with external knowledge resources, exceeding state-of-the-art results obviously. Furthermore, we develop a thorough understanding of our model through a careful hand analysis on a subset of the stories. We identify what traits of MANN contribute to its outperformance and how external knowledge is obtained in such an ending prediction task.) <|cite_end|> introduces event frame to their model and leverages five different embeddings. Finally, <|cite_start|> (Reference: Improving language understanding by
generative pre-training: Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).) <|cite_end|> develops a transformer model and achieves state-of-the-art performance on ROCStories, where the transformer was pretrained on BooksCorpus (a large unlabeled corpus) and finetuned on ROCStories. <|paper_end|> | [
"<|reference_start|> {Unsupervised Learning of Narrative Schemas and Their Participants: We describe an unsupervised system for learning narrative schemas, coherent sequences or sets of events (arrested(POLICE, SUSPECT), convicted(JUDGE, SUSPECT)) whose arguments are filled with participant semantic roles defined over words (Judge = {judge, jury, court}, Police = {police, agent, authorities}). Unlike most previous work in event structure or semantic role learning, our system does not use supervised techniques, hand-built knowledge, or predefined classes of events or roles. Our unsupervised learning algorithm uses coreferring arguments in chains of verbs to learn both rich narrative event structure and argument roles. By jointly addressing both tasks, we improve on previous results in narrative/frame learning and induce rich frame-specific semantic roles. <|reference_end|>",
"<|reference_start|> Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge.: We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting. Instead of relying only on document-to-question interaction or discrete features as in prior work, our model attends to relevant external knowledge and combines this knowledge with the context representation before inferring the answer. This allows the model to attract and imply knowledge from an external knowledge source that is not explicitly stated in the text, but that is relevant for inferring the answer. Our model improves results over a very strong baseline on a hard Common Nouns dataset, making it a strong competitor of much more complex models. By including knowledge explicitly, our model can also provide evidence about the background knowledge used in the RC process. <|reference_end|>",
"<|reference_start|> Story cloze task: Uw nlp system: This paper describes University of Washington NLP’s submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task—the Story Cloze Task. Our system is a linear classifier with a variety of features, including both the scores of a neural language model and style features. We report 75.2% accuracy on the task. A further discussion of our results can be found in Schwartz et al. (2017). <|reference_end|>",
"<|reference_start|> Story Comprehension for Predicting What Happens Next: Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense. In this paper, we present a story comprehension model that explores three distinct semantic aspects: (i) the sequence of events described in the story, (ii) its emotional trajectory, and (iii) its plot consistency. We judge the model’s understanding of real-world stories by inquiring if, like humans, it can develop an expectation of what will happen next in a given story. Specifically, we use it to predict the correct ending of a given short story from possible alternatives. The model uses a hidden variable to weigh the semantic aspects in the context of the story. Our experiments demonstrate the potential of our approach to characterize these semantic aspects, and the strength of the hidden variable based approach. The model outperforms the state-of-the-art approaches and achieves best results on a publicly available dataset. <|reference_end|>"
] | [
2,
35,
36,
37
] | {"<|cite_1|>": "ss-1041179", "<|cite_2|>": "ss-1447780", "<|cite_3|>": "ss-677777", "<|cite_4|>": "ss-976679", "<|cite_5|>": "ss-800199", "<|cite_6|>": "ss-865259", "<|cite_7|>": "ss-766912", "<|cite_8|>": "ss-998390", "<|cite_9|>": "ss-1521774", "<|cite_10|>": "ss-860129", "<|cite_11|>": "ss-986248", "<|cite_12|>": "arxiv-79843", "<|cite_13|>": "ss-1250552", "<|cite_14|>": "arxiv-161988", "<|cite_15|>": "ss-1041180", "<|cite_16|>": "arxiv-156316", "<|cite_17|>": "ss-1041181", "<|cite_18|>": "arxiv-140001", "<|cite_19|>": "ss-677777", "<|cite_20|>": "ss-1447780", "<|cite_21|>": "ss-1515161", "<|cite_22|>": "ss-1923406", "<|cite_23|>": "ss-761964", "<|cite_24|>": "ss-1300303", "<|cite_25|>": "ss-1231296", "<|cite_26|>": "ss-1877685", "<|cite_27|>": "arxiv-91420", "<|cite_28|>": "ss-1267196", "<|cite_29|>": "ss-1041182", "<|cite_30|>": "arxiv-91420", "<|cite_31|>": "arxiv-91420", "<|cite_32|>": "arxiv-134432", "<|cite_33|>": "arxiv-121476", "<|cite_34|>": "arxiv-134772", "<|cite_35|>": "ss-1267193", "<|cite_36|>": "ss-1041182", "<|cite_37|>": "ss-766912", "<|cite_38|>": "ss-1521774", "<|cite_39|>": "ss-1077485", "<|cite_40|>": "ss-998390", "<|cite_41|>": "ss-860129", "<|cite_42|>": "ss-986248"} |
1603.08616 | <|paper_start|> Title: Submodular Variational Inference for Network Reconstruction
Abstract: Submodular Variational Inference for Network Reconstruction: In real-world and online social networks, individuals receive and transmit information in real time. Cascading information transmissions (e.g. phone calls, text messages, social media posts) may be understood as a realization of a diffusion process operating on the network, and its branching path can be represented by a directed tree. The process only traverses and thus reveals a limited portion of the edges. The network reconstruction/inference problem is to infer the unrevealed connections. Most existing approaches derive a likelihood and attempt to find the network topology maximizing the likelihood, a problem that is highly intractable. In this paper, we focus on the network reconstruction problem for a broad class of real-world diffusion processes, exemplified by a network diffusion scheme called respondent-driven sampling (RDS). We prove that under realistic and general models of network diffusion, the posterior distribution of an observed RDS realization is a Bayesian log-submodular model.We then propose VINE (Variational Inference for Network rEconstruction), a novel, accurate, and computationally efficient variational inference algorithm, for the network reconstruction problem under this model. Crucially, we do not assume any particular probabilistic model for the underlying network. VINE recovers any connected graph with high accuracy as shown by our experimental results on real-life networks.
Introduction
The network reconstruction problem, also known as the network inference
problem <|cite_start|> (Reference: Inferring Networks of Diffusion and Influence: Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.) <|cite_end|> <|cite_start|> (Reference: Estimating Diffusion Network Structures: Recovery Conditions, Sample Complexity & Soft-thresholding Algorithm: Information spreads across social and technological networks, but often the network structures are hidden from us and we only observe the traces left by the diffusion processes, called cascades. Can we recover the hidden network structures from these observed cascades? What kind of cascades and how many cascades do we need? Are there some network structures which are more difficult than others to recover? Can we design efficient inference algorithms with provable guarantees? Despite the increasing availability of cascade data and methods for inferring networks from these data, a thorough theoretical understanding of the above questions remains largely unexplored in the literature. In this paper, we investigate the network structure inference problem for a general family of continuous-time diffusion models using an $l_1$-regularized likelihood maximization framework. We show that, as long as the cascade sampling process satisfies a natural incoherence condition, our framework can recover the correct network structure with high probability if we observe $O(d^3 \log N)$ cascades, where $d$ is the maximum number of parents of a node and $N$ is the total number of nodes. Moreover, we develop a simple and efficient soft-thresholding inference algorithm, which we use to illustrate the consequences of our theoretical results, and show that our framework outperforms other alternatives in practice.) <|cite_end|> <|cite_start|> (Reference: Network inference with confidence from multivariate time series: Networks--collections of interacting elements or nodes--abound in the natural and manmade worlds. For many networks, complex spatiotemporal dynamics stem from patterns of physical interactions unknown to us. 
To infer these interactions, it is common to include edges between those nodes whose time series exhibit sufficient functional connectivity, typically defined as a measure of coupling exceeding a predetermined threshold. However, when uncertainty exists in the original network measurements, uncertainty in the inferred network is likely, and hence a statistical propagation of error is needed. In this manuscript, we describe a principled and systematic procedure for the inference of functional connectivity networks from multivariate time series data. Our procedure yields as output both the inferred network and a quantification of uncertainty of the most fundamental interest: uncertainty in the number of edges. To illustrate this approach, we apply a measure of linear coupling to simulated data and electrocorticogram data recorded from a human subject during an epileptic seizure. We demonstrate that the procedure is accurate and robust in both the determination of edges and the reporting of uncertainty associated with that determination.) <|cite_end|> <|cite_start|> (Reference: Back to the Past: Source Identification in Diffusion Networks from Partially Observed Cascades: When a piece of malicious information becomes rampant in an information diffusion network, can we identify the source node that originally introduced the piece into the network and infer the time when it initiated this? Being able to do so is critical for curtailing the spread of malicious information, and reducing the potential losses incurred. This is a very challenging problem since typically only incomplete traces are observed and we need to unroll the incomplete traces into the past in order to pinpoint the source. In this paper, we tackle this problem by developing a two-stage framework, which first learns a continuous-time diffusion network model based on historical diffusion traces and then identifies the source of an incomplete diffusion trace by maximizing the likelihood of the trace under the learned model. Experiments on both large synthetic and real-world data show that our framework can effectively go back to the past, and pinpoint the source node and its initiation time significantly more accurately than previous state-of-the-arts.) <|cite_end|> <|cite_start|> (Reference: {The Link-prediction Problem for Social Networks: Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the "proximity" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.) <|cite_end|> <|cite_start|> (Reference: Discovering Latent Network Structure in Point Process Data: Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts. Most often in network analysis, the edges are given. However, in many systems it is difficult or impossible to measure the network directly. Examples of latent networks include economic interactions linking financial instruments and patterns of reciprocity in gang violence. In these cases, we are limited to noisy observations of events associated with each node. 
To enable analysis of these implicit networks, we develop a probabilistic model that combines mutually-exciting point processes with random graph models. We show how the Poisson superposition principle enables an elegant auxiliary variable formulation and a fully-Bayesian, parallel inference algorithm. We evaluate this new model empirically on several datasets.) <|cite_end|> <|cite_start|> (Reference: Inferring network topology from complex dynamics: Inferring the network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method for inferring the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required and the type of dynamics is hardly restricted in any way. In particular, the observed dynamics may be arbitrarily complex; stationary, invariant or transient; synchronous or asynchronous and chaotic or periodic. Presupposing a knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology from observing a time series of state variables only. Robust reconstruction is achieved in any sufficiently long generic observation of the system. We extend our method to simultaneously reconstructing both the entire network topology and all parameters appearing linear in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of external noise that distorts the original dynamics substantially. The method provides a conceptually new step towards reconstructing a variety of real-world networks, including gene and protein interaction networks and neuronal circuits.) <|cite_end|> <|cite_start|> (Reference: Topology Discovery of Sparse Random Graphs With Few Participants: We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants.) 
<|cite_end|> <|cite_start|> (Reference: Network completion and survey sampling: We study the problem of learning the topology of an undirected network by observing a random subsample. Specifically, the sample is chosen by randomly selecting a fixed number of vertices, and for each we are allowed to observe all edges it is incident with. We analyze a general formalization of learning from such samples, and derive confidence bounds on the number of differences between the true and learned topologies, as a function of the number of observed mistakes and the algorithm’s bias. In addition to this general analysis, we also analyze a variant of the problem under a stochastic block model assumption.) <|cite_end|> <|cite_start|> (Reference: The network completion problem: inferring missing nodes and edges in networks: Network structures, such as social networks, web graphs and networks from systems biology, play important roles in many areas of science and our everyday lives. In order to study the networks one needs to first collect reliable large scale network data. While the social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Many times the collected network data is incomplete with nodes and edges missing. Commonly, only a part of the network can be observed and we would like to infer the unobserved part of the network. We address this issue by studying the Network Completion Problem: Given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework where we use the observed part of the network to fit a model of network structure, and then we estimate the missing part of the network using the model, re-estimate the parameters and so on. We combine the EM with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows for the estimation of the model parameters as well as the inference about missing nodes and edges of the network. Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes in the network are missing. Our algorithm outperforms not only classical link-prediction approaches but also the state of the art Stochastic block modeling approach. Furthermore, our algorithm easily scales to networks with tens of thousands of nodes.) <|cite_end|>,
arises naturally in a variety of scenarios and has been the focus of great
research interest. In the most general setting, we assume
there is an underlying unknown graph structure that represents
the connections between network subjects, and that
we can only observe single or multiple diffusion processes over the
graph. Usually propagation of the diffusion process can only occur over network edges;
however, there exist many hidden ties untraversed or unrevealed by the
diffusion processes, and the goal is to infer such hidden ties.
This network reconstruction problem arises in several empirical topic areas:
\textbf{Blogosphere. }Millions of authors in the worldwide blogosphere
write articles every day, each triggering a diffusion process of reposts over
the underlying blog network structure.
The diffusion process initiated
by an article can be represented by a directed tree. The observed
data consist of multiple directed trees and it is of great interest
to understand the underlying structure of information flow <|cite_start|> (Reference: Uncovering the structure and temporal dynamics of information propagation: Abstract Time plays an essential role in the diffusion of information, influence, and disease over networks. In many cases we can only observe when a node is activated by a contagion—when a node learns about a piece of information, makes a decision, adopts a new behavior, or becomes infected with a disease. However, the underlying network connectivity and transmission rates between nodes are unknown. Inferring the underlying diffusion dynamics is important because it leads to new insights and enables forecasting, as well as influencing or containing information propagation. In this paper we model diffusion as a continuous temporal process occurring at different rates over a latent, unobserved network that may change over time. Given information diffusion data, we infer the edges and dynamics of the underlying network. Our model naturally imposes sparse solutions and requires no parameter tuning. We develop an efficient inference algorithm that uses stochastic convex optimization to compute online estimates of the edges and transmission rates. We evaluate our method by tracking information diffusion among 3.3 million mainstream media sites and blogs, and experiment with more than 179 million different instances of information spreading over the network in a one-year period. We apply our network inference algorithm to the top 5,000 media sites and blogs and report several interesting observations. First, information pathways for general recurrent topics are more stable across time than for on-going news events. Second, clusters of news media sites and blogs often emerge and vanish in a matter of days for on-going news events. Finally, major events, for example, large scale civil unrest as in the Libyan civil war or Syrian uprising, increase the number of information pathways among blogs, and also increase the network centrality of blogs and social media sites.) <|cite_end|>. Following inference of the network,
researchers may apply community detection algorithms to,
e.g.,
aggregate and further analyze blog sites of different political views.
\textbf{Online social networks. }Weibo is a Twitter-like microblogging
service in China <|cite_start|> (Reference: A comparative study of users' microblogging behavior on sina weibo and twitter: ) <|cite_end|> where users
post
microblogs and
repost those from other users they follow.
An explicit repost chain, which indicates the
sequence of users that a post passes through, is attached to each
repost on Weibo.
Similarly, each
post initiates a diffusion process. By observing several realizations
of diffusion processes, researchers seek to understand the underlying
social and information network structure.
\textbf{Respondent-driven sampling. }
Respondent-driven sampling (RDS)
is a chain-referral peer recruitment procedure that is widely used in epidemiology
for studying hidden and hard-to-reach human populations when random sampling
is impossible <|cite_start|> (Reference: Respondent-driven sampling: a new approach to the study of hidden populations.: A population is “hidden” when no sampling frame exists and public acknowledgment of membership in the population is potentially threatening. Accessing such populations is difficult because standard probability sampling methods produce low response rates and responses that lack candor. Existing procedures for sampling these populations, including snowball and other chain-referral samples, the key-informant approach, and targeted sampling, introduce well-documented biases into their samples. This paper introduces a new variant of chain-referral sampling, respondent-driven sampling, that employs a dual system of structured incentives to overcome some of the deficiencies of such samples. A theoretic analysis, drawing on both Markov-chain theory and the theory of biased networks, shows that this procedure can reduce the biases generally associated with chain-referral methods. The analysis includes a proof showing that even though sampling begins with an arbitrarily chosen set of initial subjects, as do most chain-referral samples, the composition of the ultimate sample is wholly independent of those initial subjects. The analysis also includes a theoretic specification of the conditions under which the procedure yields unbiased samples. Empirical results, based on surveys of 277 active drug injectors in Connecticut, support these conclusions. Finally, the conclusion discusses how respondent- driven sampling can improve both network sampling and ethnographic 44 investigation.) <|cite_end|>.
RDS is commonly used in studies of
men who have sex with men, homeless people, sex workers,
drug users, and other groups at high risk for HIV infection <|cite_start|> (Reference: Mapping a social network of heterosexuals at high risk for HIV infection: Objective:To determine how heterosexuals at risk for HIV infection interconnect in social networks and how such relationships affect HIV transmission. Design:Cross-sectional study with face-to-face interviews to ascertain sociosexual connections; serologic testing. Participants:Prostitute women (n=133), their paying (n=129) and non-paying (n=47) male partners; injecting drug users (n= 200) and their sex partners (n=41). Participants were recruited in sexually transmitted disease and methadone clinics, an HIV-testing site, and through street outreach in Colorado Springs, Colorado, USA. Main outcome measures:Reported behaviors, risk perceptions, sociosexual linkages, and HIV prevalence. Results:Respondents were well informed, but reported engaging in high-risk behaviors frequently. Nevertheless, over 70% of respondents perceived themselves to be at low risk for HIV infection. The 595 respondents identified a social network of 5162 people to which they belonged. Network analytic methods indicated 147 separate connected components of this network; eight of the 19 HIV-positive individuals in the network were located in smaller components remote from the largest connected component. Conclusion:The isolated position of HIV-positive individuals may serve as a barrier to HIV transmission and may account for the lack of diffusion of HIV in heterosexual populations in this region. Network analysis appears useful for understanding the dynamics of disease transmission and warrants further development as a tool for intervention and control.) <|cite_end|>.
An RDS recruitment process is also a diffusion process over
an unknown social network structure, and the diffusion tree (who recruited whom)
is revealed by the observed
process. In addition, when a subject enters the survey, she reports her total number of acquaintances in the population, or graph-theoretically speaking, her degree in the underlying network.
Understanding the
underlying network structure is a topic of great interest to epidemiologists and sociologists
who wish to study the transmission of infectious diseases, and the propagation of health-related
behaviors in the networks of high-risk groups <|cite_start|> (Reference: The graphical structure of respondent-driven sampling: Respondent-driven sampling (RDS) is a chain-referral method for sampling members of hidden or hard-to-reach populations, such as sex workers, homeless people, or drug users, via their social networks. Most methodological work on RDS has focused on inference of population means under the assumption that subjects’ network degree determines their probability of being sampled. Criticism of existing estimators is usually focused on missing data: the underlying network is only partially observed, so it is difficult to determine correct sampling probabilities. In this article, the author shows that data collected in ordinary RDS studies contain information about the structure of the respondents’ social network. The author constructs a continuous-time model of RDS recruitment that incorporates the time series of recruitment events, the pattern of coupon use, and the network degrees of sampled subjects. Together, the observed data and the recruitment model place a well-defined probability distribution on the recruitment-induced subgraph of respondents. The author shows that this distribution can be interpreted as an exponential random graph model and develops a computationally efficient method for estimating the hidden graph. The author validates the method using simulated data and applies the technique to an RDS study of injection drug users in St. Petersburg, Russia.) <|cite_end|>. However, in contrast to the aforementioned
scenarios where multiple diffusion realizations are available over the same
network, in RDS we can only observe a single realization
due to limited financial, temporal and human resources to conduct the experiments. As a result, network reconstruction from RDS data is particularly challenging and only heuristic algorithms are known.
Crawford <|cite_start|> (Reference: The graphical structure of respondent-driven sampling: Respondent-driven sampling (RDS) is a chain-referral method for sampling members of hidden or hard-to-reach populations, such as sex workers, homeless people, or drug users, via their social networks. Most methodological work on RDS has focused on inference of population means under the assumption that subjects’ network degree determines their probability of being sampled. Criticism of existing estimators is usually focused on missing data: the underlying network is only partially observed, so it is difficult to determine correct sampling probabilities. In this article, the author shows that data collected in ordinary RDS studies contain information about the structure of the respondents’ social network. The author constructs a continuous-time model of RDS recruitment that incorporates the time series of recruitment events, the pattern of coupon use, and the network degrees of sampled subjects. Together, the observed data and the recruitment model place a well-defined probability distribution on the recruitment-induced subgraph of respondents. The author shows that this distribution can be interpreted as an exponential random graph model and develops a computationally efficient method for estimating the hidden graph. The author validates the method using simulated data and applies the technique to an RDS study of injection drug users in St. Petersburg, Russia.) <|cite_end|> assumes that the recruitment time along any recruitment link is exponentially distributed and thus models RDS as a continuous-time diffusion process. Chen et al. <|cite_start|> (Reference: Seeing the Unseen Network: Inferring Hidden Social Ties from Respondent-Driven Sampling: Learning about the social structure of hidden and hard-to-reach populations --- such as drug users and sex workers --- is a major goal of epidemiological and public health research on risk behaviors and disease prevention. Respondent-driven sampling (RDS) is a peer-referral process widely used by many health organizations, where research subjects recruit other subjects from their social network. In such surveys, researchers observe who recruited whom, along with the time of recruitment and the total number of acquaintances (network degree) of respondents. However, due to privacy concerns, the identities of acquaintances are not disclosed. In this work, we show how to reconstruct the underlying network structure through which the subjects are recruited. We formulate the dynamics of RDS as a continuous-time diffusion process over the underlying graph and derive the likelihood for the recruitment time series under an arbitrary recruitment time distribution. We develop an efficient stochastic optimization algorithm called RENDER (REspoNdent-Driven nEtwork Reconstruction) that finds the network that best explains the collected data. We support our analytical results through an exhaustive set of experiments on both synthetic and real data.) <|cite_end|> relaxes the requirement of exponentially distributed recruitment times and extends it to any distribution. Both works use a simulated-annealing-based heuristic in order to find the most likely configuration.
As a general strategy, for a particular diffusion
model, a likelihood function can be derived that measures the probability
of a diffusion realization.
In this way, the network inference problem can be cast
as an optimization problem, in which the researcher seeks the topology
that maximizes the likelihood.
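Schematically (in our own notation, for a generic diffusion model), one seeks
\[
\hat{G} \;=\; \arg\max_{G \in \mathcal{G}} \; \log p\left(\mathcal{D} \mid G\right),
\]
where $\mathcal{D}$ denotes the observed diffusion data (e.g., infection or recruitment times) and $\mathcal{G}$ is the set of candidate graphs.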
Unfortunately,
the derived likelihood functions
are
usually intractable for
efficient maximization with respect to the graph, and can be computationally prohibitive to evaluate. To address this challenge, approximate
solutions have been proposed as an efficient alternative <|cite_start|> (Reference: Inferring Networks of Diffusion and Influence: Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.) <|cite_end|>. For instance, Gomez-Rodriguez et al. <|cite_start|> (Reference: Inferring Networks of Diffusion and Influence: Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.) <|cite_end|>, instead of maximizing the likelihood,
derived an alternative heuristic formulation by considering only the
most likely tree (still an NP-hard problem) rather than all possible propagation trees, and showed that a greedy algorithm can find a near-optimal solution.
It enjoys good empirical results when many realizations of the diffusion process can be observed.
In this paper, we consider the challenging instance of network inference where only one realization of the diffusion process is observed. As a motivating empirical example, we study the network reconstruction problem for RDS data
and propose \Alg (Variational Inference for Network rEconstruction), a computationally efficient variational inference algorithm.
Our major contributions are summarized as follows.
\textbf{Proof of log-submodularity and a variational inference algorithm.}
We show that under a realistic model of RDS diffusion, the likelihood function is
log-submodular. Using variational inference methods, we approximate
the submodular function
with affine
functions and obtain tight upper and lower bounds for the partition function.
We then estimate the most probable network configuration, which is
the maximizer of the likelihood,
as well as the marginal probability of each edge.
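For completeness, recall the standard diminishing-returns definition (we state it in generic form; the paper's exact formalization over graphs may differ in details): a set function $F$ over subsets of a ground set $E$ of potential edges is submodular if, for all $A \subseteq B \subseteq E$ and every $e \in E \setminus B$,
\[
F(A \cup \{e\}) - F(A) \;\ge\; F(B \cup \{e\}) - F(B).
\]
Log-submodularity of the likelihood means that this property holds for $F(\cdot) = \log p(\mathcal{D} \mid \cdot)$, which is precisely the structure that the affine (modular) upper and lower bounds exploit.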
\textbf{Relaxation of constraints.}
The optimization problem of the RDS likelihood (as shown later) is constrained. First, the observed diffusion results in a directed subgraph, and the inferred network must be a supergraph of this observed subgraph. Second, for each subject, their degree in the reconstructed subgraph cannot exceed their total network degree.
The first constraint is easy to incorporate, while the second precludes efficient computation of partition functions of the likelihood (or of any linear approximations).
We address this challenge by introducing penalty terms in the objective function. This way, the constrained reconstruction problem becomes unconstrained and admits the use of variational methods.
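As an illustration of this idea (the notation and the exact penalty used by \Alg may differ), the hard degree constraints can be replaced by a penalized objective of the generic form
\[
\max_{G \,\supseteq\, G_{\mathrm{obs}}} \;\; \log p\left(\mathcal{D} \mid G\right) \;-\; \lambda \sum_{v} \rho\!\left(\deg_G(v) - d_v\right),
\]
where $G_{\mathrm{obs}}$ is the observed recruitment subgraph, $d_v$ is the degree reported by subject $v$, $\rho$ is a penalty function, and $\lambda > 0$ trades off the likelihood against degree violations.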
\textbf{Flexibility for possibly inexact reported degrees.}
One cannot assume that the degrees reported by recruited subjects are exact, because subjects may not be able to accurately recall the number of people they know who are members of the target population.
We note that the aforementioned relaxation of the second constraint allows for such a possible mismatch between the reported and true degrees: an additional term penalizes the deviation between them, seeking to preserve the relative accuracy of the reported degrees.
\textbf{High reconstruction performance and time efficiency using a single realization of RDS diffusion.} As shown by our experiments, \Alg achieves significantly higher inference performance than existing approaches while running orders of magnitude faster. We emphasize that this highly accurate inference is achieved from the observation of a single diffusion realization, in sharp contrast to previous work that assumes multiple diffusion realizations are available.
The rest of the paper is organized as follows.
In \cref{sec:Network-Reconstruction-for},
we focus on network reconstruction for RDS data and formulate
it
as an optimization problem. We present our method in \cref{sec:proposed_method}. Experimental results
are presented in \cref{sec:Experiment}. All proofs are presented in \cref{sub:likeli,sub:logl,sub:proofthm,sub:lower,sub:sup}.
Additionally,
we discuss the connection between RDS and other diffusion processes in \cref{sec:discuss}. <|paper_end|> | [
"<|reference_start|> {The Link-prediction Problem for Social Networks: Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. <|reference_end|>",
"<|reference_start|> Network completion and survey sampling: We study the problem of learning the topology of an undirected network by observing a random subsample. Specifically, the sample is chosen by randomly selecting a fixed number of vertices, and for each we are allowed to observe all edges it is incident with. We analyze a general formalization of learning from such samples, and derive confidence bounds on the number of differences between the true and learned topologies, as a function of the number of observed mistakes and the algorithm’s bias. In addition to this general analysis, we also analyze a variant of the problem under a stochastic block model assumption. <|reference_end|>",
"<|reference_start|> Inferring Networks of Diffusion and Influence: Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them. <|reference_end|>",
"<|reference_start|> Inferring Networks of Diffusion and Influence: Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks. We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them. <|reference_end|>"
] | [
4,
8,
17,
18
] | {"<|multi_cite_1_1|>": "arxiv-13897", "<|multi_cite_1_2|>": "arxiv-60773", "<|multi_cite_1_3|>": "ss-1663940", "<|multi_cite_1_4|>": "arxiv-72103", "<|multi_cite_1_5|>": "ss-1188243", "<|multi_cite_1_6|>": "arxiv-56455", "<|multi_cite_1_8|>": "ss-1294269", "<|multi_cite_1_9|>": "arxiv-19588", "<|multi_cite_1_10|>": "ss-2389259", "<|multi_cite_1_11|>": "ss-1953061", "<|cite_2|>": "ss-1968051", "<|cite_3|>": "ss-1263966", "<|cite_4|>": "ss-1016007", "<|cite_5|>": "ss-2238093", "<|cite_6|>": "ss-1162082", "<|cite_7|>": "ss-1162082", "<|cite_8|>": "arxiv-87098", "<|multi_cite_9_1|>": "arxiv-13897", "<|cite_10|>": "arxiv-13897"} |
2012.14922 | <|paper_start|> Title: A Number Theoretic Approach for Fast Discovery of Single-Hop Wireless Networks
Abstract: A Number Theoretic Approach for Fast Discovery of Single-Hop Wireless Networks: Interference management has become a key factor in regulating transmissions in wireless communication networks. To support effective interference management schemes, it can be essential to have prior knowledge about the network topology. In this paper, we build on existing results in the literature on the simulation of the message passing model, and present an efficient strategy for fast discovery of the network topology during a pilot communication phase. More precisely, we investigate the minimum number of communication rounds that is needed to discover an arbitrary network topology with a maximum number of links per receiver, while assuming a single-hop network that is restricted to interference-avoidance-based schemes in its pilot phase. We first ignore any interference cancellation strategy such that no receiver can recognize, and cancel transmissions of, previously discovered transmitters, and then capture the gains obtained through interference cancellation during the pilot phase. Our results show that the required number of rounds scales approximately logarithmically with practical values of the total number of users in the network, with a slope proportional to the number of interfering transmitters per receiver.
Introduction
With the increasing demands placed on wireless networks by strict Quality of Service (QoS) requirements, as well as the rapidly increasing network sizes, especially with the emergence of Internet of Things (IoT) applications, managing interference to deliver satisfactory performance has become more challenging. However, most effective interference management schemes that deliver significant and scalable performance gains require at least knowledge of the network topology, if not also the channel state information. At the same time, the feasibility of cloud-based centralized control of large network nodes offers unprecedented interference management opportunities <|cite_start|> (Reference: Interference Management in Wireless Networks: Fundamental Bounds and the Role of Cooperation: A toner composition comprised of binder, colorant, and a surface additive of a coated silica and wherein said silica possesses a BET surface area, in m2/g of from about 35 to about 65, a bulk density, in grams/liter, of from about 40 to about 60, and and wherein the size diameter determined from the BET measurement is from about 20 to about 100 nanometers.) <|cite_end|>. In this work, we capitalize on this new paradigm to introduce a novel algorithm for fast discovery of the network topology, as well as for making the channel state information available at the receivers. The proposed scheme relies on a well-founded number-theoretic approach that enables interference-avoidance-based network discovery to complete in a number of communication rounds that scales logarithmically with the network size.
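To convey the flavor of such number-theoretic interference-avoidance scheduling, the following Python sketch shows one classical prime-residue (CRT-style) construction of a pilot-phase schedule; it is our illustration only, and the paper's exact strategy, prime choices, and constants may differ. For each chosen prime $p$, transmitter $i$ is active only in sub-round $i \bmod p$; with sufficiently many primes, a pigeonhole argument guarantees that every receiver with at most $L$ connected transmitters hears each of them collision-free at least once.
\begin{verbatim}
# Illustrative sketch only: a classical CRT-style prime-residue schedule,
# not necessarily the exact construction proposed in this paper.
import math

def primes_from(start, count):
    """Return `count` consecutive primes >= start (simple trial division)."""
    primes, p = [], max(2, start)
    while len(primes) < count:
        if all(p % q for q in range(2, math.isqrt(p) + 1)):
            primes.append(p)
        p += 1
    return primes

def prime_residue_schedule(K, L):
    """Pilot-phase schedule: a list of rounds, each being the set of
    transmitter ids (0..K-1) allowed to transmit in that round.

    Two distinct ids below K can agree modulo at most log_p(K) primes p,
    so with more than (L-1)*log_p(K) primes every transmitter is, for some
    prime block, the only one of its receiver's (at most L) connected
    transmitters active in its sub-round, i.e. it is heard collision-free.
    """
    p_min = max(2, L)                      # smallest prime size considered
    num_primes = (L - 1) * math.ceil(math.log(K, p_min)) + 1
    schedule = []
    for p in primes_from(p_min, num_primes):
        for r in range(p):                 # p sub-rounds for this prime
            schedule.append({i for i in range(K) if i % p == r})
    return schedule

# Example: 1000 users, at most 3 interfering transmitters per receiver.
rounds = prime_residue_schedule(K=1000, L=3)
print(len(rounds))   # total number of pilot rounds in this construction
\end{verbatim}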
It is important to note that even though network discovery is primarily useful for interference management, it can also be essential for core network tasks such as fault management <|cite_start|> (Reference: Topology discovery for network fault management using mobile agents in ad-hoc networks: Managing today's complex and increasingly heterogeneous networks requires in-depth knowledge and extensive training as well as collection of very large amount of data. Fault management is one of the functional areas of network management that entails detection, identification and correction of anomalies that disrupt services of a network. The task of fault management is even harder in ad-hoc networks where the topology of the network changes frequently. It is very inefficient if not impossible to discover the ad-hoc network topology using traditional practices of network discovery. We propose a mobile multi agent system for topology discovery that will allow fault management functions in ad-hoc network. Comparison to current mobile agent based topology discovery systems is also presented) <|cite_end|>. Further, it can be particularly useful in Multiple Radio Access Technology (Multi RAT) infrastructural networks, as is the case with the cellular/WIFI network considered in <|cite_start|> (Reference: Fast Detection of Compact Topology Representation for Wireless Networks: This paper considers a hybrid cellular architecture in which mobiles can communicate with others in their vicinity, e.g. using 802.11 interface, in addition to the base stations of the cellular network. Such an architecture can aid device-to-device communication as well as assist critical tasks of cellular networks such as mobility management, content caching and relaying. In order to enable these capabilities, base stations need to have sufficient knowledge of the underlying network topology induced by the 802.11 links of the mobiles. Due to the dynamic nature of this network, a compressed snapshot of its topology should be collected within a very short time duration and with minimal communication among mobiles. Addressing this need, we propose a compact topology representation that is suitable for a number of applications. We utilize the broadcast nature of wireless channels to design an efficient topology detection algorithm that acquires a compact representation of the underlying network (at most 3N links and a low `stretch' factor for the N mobiles) within a short duration (10s of ms). Our scheme does not have collision resolution, backoffs or any of the other MAC layer inefficiencies.) <|cite_end|>.
The network discovery problem has been typically studied in a Device-to-Device (D2D) setting that allows devices to communicate directly without using infrastructural base stations. The infrastructure has been mainly considered to assist D2D network discovery, as in <|cite_start|> (Reference: Design and Evaluation of a Hybrid D2D Discovery Mechanism in 5G Cellular Networks: Device-to-device (D2D) communications allow devices to communicate directly without using base stations for relaying, yielding several merits such as increasing the spectral efficiency and system capacity, reducing network latency and power consumption of UEs, offloading the 5G cellular networks, and extending the network coverage. However, to realize D2D communications, the first challenging issue is how to efficiently find target devices in proximity for communications. Therefore, in this paper we intend to design a hybrid D2D discovery mechanism and evaluate its performance. Simulation results demonstrate that the proposed D2D discovery mechanism achieves better performance in terms of power consumption and discovery time, compared with the direct discovery scheme that purely utilizes the contention-based transmission scheme and operates in the unlicensed band.) <|cite_end|>. In <|cite_start|> (Reference: Learning the Interference Graph of a Wireless Network: A key challenge in wireless networking is the management of interference between transmissions. Identifying which transmitters interfere with each other is a crucial first step. In this paper we cast the task of estimating the a wireless interference environment as a graph learning problem. Nodes represent transmitters and edges represent the presence of interference between pairs of transmitters. We passively observe network traffic transmission patterns and collect information on transmission successes and failures. We establish bounds on the number of observations (each a snapshot of a network traffic pattern) required to identify the interference graph reliably with high probability. Our main results are scaling laws that tell us how the number of observations must grow in terms of the total number of nodes $n$ in the network and the maximum number of interfering transmitters $d$ per node (maximum node degree). The effects of hidden terminal interference (i.e., interference not detectable via carrier sensing) on the observation requirements are also quantified. We show that to identify the graph it is necessary and sufficient that the observation period grows like $d^2 \log n$, and we propose a practical algorithm that reliably identifies the graph from this length of observation. The observation requirements scale quite mildly with network size, and networks with sparse interference (small $d$) can be identified more rapidly. Computational experiments based on a realistic simulations of the traffic and protocol lend additional support to these conclusions.) <|cite_end|>, the network discovery problem was considered from a graph-theoretic perspective. A protocol was then introduced, capitalizing on channel sensing, random backoff, and tackling the hidden terminal interference issue, and probabilistic guarantees were provided for complete discovery of the interference graph.
In this work, we restrict our attention to a single-hop network with $K$ transmitter/receiver pairs, where a coordinated interference-avoidance scheduling strategy is orchestrated by a cloud-based controller to enable each receiver to discover up to $L$ transmitters that could be connected to it. The considered model is rooted in the information-theoretic $K$-user binary interference channel model with path loss constraints, as well as recent advances that have made cloud-based wireless communication feasible. We rely on a well-investigated property of prime number residuals to design the proposed scheduling strategy, such that after a number of communication rounds with asymptotic complexity of $O\left(\frac{L^2 \log^2 K}{\log L \log K}\right)$, network discovery is guaranteed to complete successfully, regardless of the particular connectivity pattern. Further, the empirical evidence we provide suggests that for practical values of $K$ and $L$, the needed number of communication rounds scales linearly with $\log K$ with a slope proportional to $L$. We believe that this is a promising result that opens the door for supporting new applications via next-generation cloud-based wireless networks. <|paper_end|>
"<|reference_start|> Topology discovery for network fault management using mobile agents in ad-hoc networks: Managing today's complex and increasingly heterogeneous networks requires in-depth knowledge and extensive training as well as collection of very large amount of data. Fault management is one of the functional areas of network management that entails detection, identification and correction of anomalies that disrupt services of a network. The task of fault management is even harder in ad-hoc networks where the topology of the network changes frequently. It is very inefficient if not impossible to discover the ad-hoc network topology using traditional practices of network discovery. We propose a mobile multi agent system for topology discovery that will allow fault management functions in ad-hoc network. Comparison to current mobile agent based topology discovery systems is also presented <|reference_end|>",
"<|reference_start|> Fast Detection of Compact Topology Representation for Wireless Networks: This paper considers a hybrid cellular architecture in which mobiles can communicate with others in their vicinity, e.g. using 802.11 interface, in addition to the base stations of the cellular network. Such an architecture can aid device-to-device communication as well as assist critical tasks of cellular networks such as mobility management, content caching and relaying. In order to enable these capabilities, base stations need to have sufficient knowledge of the underlying network topology induced by the 802.11 links of the mobiles. Due to the dynamic nature of this network, a compressed snapshot of its topology should be collected within a very short time duration and with minimal communication among mobiles. Addressing this need, we propose a compact topology representation that is suitable for a number of applications. We utilize the broadcast nature of wireless channels to design an efficient topology detection algorithm that acquires a compact representation of the underlying network (at most 3N links and a low `stretch' factor for the N mobiles) within a short duration (10s of ms). Our scheme does not have collision resolution, backoffs or any of the other MAC layer inefficiencies. <|reference_end|>",
"<|reference_start|> Design and Evaluation of a Hybrid D2D Discovery Mechanism in 5G Cellular Networks: Device-to-device (D2D) communications allow devices to communicate directly without using base stations for relaying, yielding several merits such as increasing the spectral efficiency and system capacity, reducing network latency and power consumption of UEs, offloading the 5G cellular networks, and extending the network coverage. However, to realize D2D communications, the first challenging issue is how to efficiently find target devices in proximity for communications. Therefore, in this paper we intend to design a hybrid D2D discovery mechanism and evaluate its performance. Simulation results demonstrate that the proposed D2D discovery mechanism achieves better performance in terms of power consumption and discovery time, compared with the direct discovery scheme that purely utilizes the contention-based transmission scheme and operates in the unlicensed band. <|reference_end|>",
"<|reference_start|> Learning the Interference Graph of a Wireless Network: A key challenge in wireless networking is the management of interference between transmissions. Identifying which transmitters interfere with each other is a crucial first step. In this paper we cast the task of estimating the a wireless interference environment as a graph learning problem. Nodes represent transmitters and edges represent the presence of interference between pairs of transmitters. We passively observe network traffic transmission patterns and collect information on transmission successes and failures. We establish bounds on the number of observations (each a snapshot of a network traffic pattern) required to identify the interference graph reliably with high probability. Our main results are scaling laws that tell us how the number of observations must grow in terms of the total number of nodes $n$ in the network and the maximum number of interfering transmitters $d$ per node (maximum node degree). The effects of hidden terminal interference (i.e., interference not detectable via carrier sensing) on the observation requirements are also quantified. We show that to identify the graph it is necessary and sufficient that the observation period grows like $d^2 \\log n$, and we propose a practical algorithm that reliably identifies the graph from this length of observation. The observation requirements scale quite mildly with network size, and networks with sparse interference (small $d$) can be identified more rapidly. Computational experiments based on a realistic simulations of the traffic and protocol lend additional support to these conclusions. <|reference_end|>"
] | [
1,
2,
3,
4
] | {"<|cite_1|>": "ss-1059986", "<|cite_2|>": "ss-1059987", "<|cite_3|>": "ss-1059988", "<|cite_4|>": "ss-1059989", "<|cite_5|>": "arxiv-34867"} |
2107.12220 | <|paper_start|> Title: Thought Flow Nets: From Single Predictions to Trains of Model Thought
Abstract: Thought Flow Nets: From Single Predictions to Trains of Model Thought: When humans solve complex problems, they typically create a sequence of ideas (involving an intuitive decision, reflection, error correction, etc.) in order to reach a conclusive decision. Contrary to this, today's models are mostly trained to map an input to a single, fixed output. In this paper, we investigate how we can give models the opportunity of a second, third, and $k$-th thought. Taking inspiration from Hegel's dialectics, we propose the concept of a thought flow, which creates a sequence of predictions. We present a self-correction mechanism that is trained to estimate the model's correctness and performs iterative prediction updates based on the correctness prediction's gradient. We introduce our method using the example of question answering and conduct extensive experiments that demonstrate (i) our method's ability to correct its own predictions and (ii) its potential to notably improve model performance. In addition, we conduct a qualitative analysis of thought flow correction patterns and explore how thought flow predictions affect human users within a crowdsourcing study. We find that (iii) thought flows enable improved user performance and are perceived as more natural, correct, and intelligent than single and/or top-3 predictions.
Introduction
\label{sec:intro}
One direction of machine learning, a subfield of artificial intelligence, is the development of models
that reflect human thinking <|cite_start|> (Reference: An Introduction: Global civilization is undergoing great change. This process of rethinking and rebirth, driven by the bankruptcy of modern culture, will eventually recast every endeavor from business, education, and politics to health, spirituality, and science. Yet, although the reforms needed to make this transition are already being developed all over the world and in every field imaginable, its magnitude is invisible because the solutions that could save us are disjoint. Integral science creates a rigorous, yet commonsense framework capable of uniting Integral Civilization. The key to this synthesis lies in its more harmonious alignment of head, heart, and hands.) <|cite_end|>.
However, the aspect of forming decisions is still fundamentally different in humans and machine learning models:
Classification models, on the one hand, are defined to map a specific input $\vec{x}$ to an output label $\hat{y}$ <|cite_start|> (Reference: Pattern Recognition and Machine learning: Artificial intelligence, robotics, and machine learning are not futuristic dreams anymore. The early consequences of these technologies are upon us already. Industrial robots, self-driving cars, an...) <|cite_end|>.
This mapping $\vec{x} \rightarrow \hat{y}$ might involve various modulations and abstractions of $\vec{x}$ in a latent space, e.g., hidden layers of a neural network, but typically does not allow variations or trajectories of $\hat{y}$.
We argue that humans, on the other hand, rarely come to a decision right away but rather follow a complex thought process that involves reflecting on initial decisions, comparing different hypotheses, resolving contradictions, etc.
While humans' trains of thought are extensively studied in cognitive sciences and philosophy,
such theories are rarely explored in machine learning.
An example of a philosophical model is Hegel's dialectics <|cite_start|> (Reference: Hegel’s {Dialectics: “Dialectics” is a term used to describe a method of philosophical argument that involves some sort of contradictory process between opposing sides. In what is perhaps the most classic version of “dialectics”, the ancient Greek philosopher, Plato (see entry on Plato), for instance, presented his philosophical argument as a back-and-forth dialogue or debate, generally between the character of Socrates, on one side, and some person or group of people to whom Socrates was talking (his interlocutors), on the other. In the course of the dialogues, Socrates’ interlocutors propose definitions of philosophical concepts or express views that Socrates challenges or opposes. The back-and-forth debate between opposing sides produces a kind of linear progression or evolution in philosophical views or positions: as the dialogues go along, Socrates’ interlocutors change or refine their views in response to Socrates’ challenges and come to adopt more sophisticated views. The back-and-forth dialectic between Socrates and his interlocutors thus becomes Plato’s way of arguing against the earlier, less sophisticated views or positions and for the more sophisticated ones later.) <|cite_end|> from which we take inspiration in this paper.
In particular, we present a method that turns the probability distribution for the output classes into a sequence of inter-dependent probability distributions --- we call it the \textit{thought flow}.
To be more specific, we formalize the three \textit{moments} of Hegel's dialectics in terms of forward and backward passes in a novel, yet simple \textit{correction module} that can be used on top of
an existing classifier and is trained to judge whether the predicted class distribution corresponds to a correct prediction.
This architecture yields a dynamic system over probability distributions that starts with the original classifier's class prediction and iteratively updates the probability distribution in the direction of higher self-estimated correctness, using the gradient of the correctness probability with respect to the class distribution.
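One natural way to write such an update in symbols (our notation; the step size $\alpha$, the stopping criterion, and how the iterate is kept a valid probability distribution are deliberately left abstract here) is
\[
\hat{y}^{(t+1)} \;=\; \hat{y}^{(t)} \;+\; \alpha \, \nabla_{\hat{y}^{(t)}} \, p_{\mathrm{correct}}\!\left(\hat{y}^{(t)}, \vec{x}\right),
\]
where $p_{\mathrm{correct}}$ denotes the correction module's estimated probability that the current prediction is correct and $\hat{y}^{(0)}$ is the original classifier's class distribution.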
We demonstrate our method's ability to correct misclassifications and find improvements in classification accuracy across various tasks and datasets from computer vision (CIFAR-10 and CIFAR-100 <|cite_start|> (Reference: Learning Multiple Languages: ) <|cite_end|>) and natural language processing (MNLI <|cite_start|> (Reference: A broad scope, a broad readership, and a broad purpose: ) <|cite_end|> and SST-5 <|cite_start|> (Reference: Cortex: A Compiler for Recursive Deep Learning Models: Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations such as kernel fusion and (ii) low level kernel optimizations such as those found in vendor libraries. This approach often leaves significant performance on the table, especially for the case of recursive deep learning models. In this paper, we present Cortex, a compiler-based approach to generate highly-efficient code for recursive models for low latency inference. Our compiler approach and low reliance on vendor libraries enables us to perform end-to-end optimizations, leading to up to 14X lower inference latencies over past work, across different backends.) <|cite_end|>).
Further, we assess our approach's sensitivity to adversarial attacks and find that its correction performance is improved under strong Fast Gradient Sign Attacks <|cite_start|> (Reference: Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.) <|cite_end|>.
Additionally, we evaluate our method in a simulated label-distribution-shift setting and observe mean accuracy improvements over 2\% and single-model improvements over 4\% accuracy.
Besides effects on model performance, we observe surprisingly complex dynamics in a qualitative analysis.
Finally, we discuss our method as a tool for model interpretability and elaborate on the additional information it provides over single-prediction classifiers.
To sum up, our contributions are (i) a formalization of self-reflection and self-correction inspired by Hegel's dialectics, (ii) a novel correction module and a corresponding update scheme to generate a thought flow, (iii) extensive experiments that demonstrate its positive effects on classification performance as well as its robustness to adversarial attacks and label-distribution shifts, and (iv) an exploration of
patterns of the thought flow's dynamics and its utility as a tool for model interpretability.
Related Work
\label{sec:background}
In this section, we discuss preliminaries and notation and provide background on Hegel's dialectics.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/overview.pdf}
\caption{Architecture overview. After encoding the input, the initial class prediction is obtained with the label module (upper part, orange). The prediction as well as the encoding are then passed to the correction module (lower part, blue), which assesses whether the class prediction is correct.}
\label{fig:overview}
\end{figure}
\subsection{Preliminaries and notation}\label{sec:base_model_and_notation}
Throughout the paper, we denote $\vec{x}$ as the input vector to a neural network, $\phi(\vec{x}) \in \mathbb{R}^d$
(with $d$ being a hyperparameter)
as an intermediate feature representation (e.g., the penultimate or antepenultimate layer activations of a multi-layer classification model), and $\vec{\hat{y}}$ as the output class probabilities. In general, those probabilities are obtained by mapping the input encoding $\phi(\vec{x})$ to label logits $\vec{\hat{z}} \in \mathbb{R}^c$, with $c$ being the number of classes, and applying a softmax function: $\vec{\hat{y}} := \text{softmax}(\vec{\hat{z}})$.
We refer to the network part that takes the input $\vec{x}$ and computes the feature representation $\phi(\vec{x})$ as \textit{encoder} and the network part that obtains the class probabilities as \textit{label module}. We denote the mapping function of the label module as $f_{\text{label}}: \mathbb{R}^d \rightarrow \mathbb{R}^c$.
In our experiments, we compose the label module of two blocks, each consisting of a scaled exponential linear unit activation (SELU) <|cite_start|> (Reference: Self-Normalizing Path Integrals: ) <|cite_end|> followed by a fully-connected layer (FC) as shown in the orange block in \Cref{fig:overview}.
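For concreteness, a minimal PyTorch sketch of such a label module is given below; the hidden width \texttt{h} is our assumption (it is not specified here), and the sketch only illustrates the SELU--FC block structure and the final softmax.
\begin{verbatim}
import torch
import torch.nn as nn

class LabelModule(nn.Module):
    """Two blocks, each a SELU activation followed by a fully-connected
    layer, mapping an encoding phi(x) in R^d to class probabilities in R^c."""

    def __init__(self, d, c, h=128):   # h: assumed hidden width
        super().__init__()
        self.f_label = nn.Sequential(
            nn.SELU(), nn.Linear(d, h),   # block 1: SELU -> FC
            nn.SELU(), nn.Linear(h, c),   # block 2: SELU -> FC
        )

    def forward(self, phi_x):
        z = self.f_label(phi_x)             # label logits z_hat in R^c
        return torch.softmax(z, dim=-1)     # class probabilities y_hat

# Example: 512-dimensional encodings, 5 classes (e.g., SST-5).
y_hat = LabelModule(d=512, c=5)(torch.randn(8, 512))   # shape (8, 5)
\end{verbatim}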
\subsection{Hegel’s dialectics}\label{sec:hegel}
In order to design a method that generates sequences of consecutive, interdependent predictions, we take inspiration from philosophy, specifically from Hegel's dialectics, which describes how views or definitions of concepts evolve through a process of contradictions.
Besides its philosophical relevance, Hegel's dialectics has been related to various fields before, such as cognitive sciences (e.g., <|cite_start|> (Reference: Dialectic operations: the final period of cognitive development.: Arguments for an extension of Piaget's theory of cognitive development have been derived from philosophical and historical consideration of modern natural sciences. Implicit contradictions, which characterize these sciences as well as common thought, can be systematically apprehended only through a dialectic reinterpretation. The dialectic basis of Piaget's theory is expressed in his assimilation-accommodation paradigm. But development is interpreted as a continuing alienation from this basis culminating in the noncontradictory thinking of formal operations. Although Piaget's interpretations capture a rich variety of performances during childhood they fail to represent adequately the thought and emotions of mature and creative persons. For an interpretation of adulthood and aging, a return to the dialectic basis is necessary. Such a reorganization can proceed from any of the four major levels of development. It introduces intra- and interindividual variations into Piaget's theory. Individuals may operate simultaneously or in short succession at different cognitive levels. The ceaseless striving toward formal operations becomes inappropriate and ineffective for the level of dialectic maturity.) <|cite_end|>), neuroscience (e.g., <|cite_start|> (Reference: The dialectics of free energy minimization: Karl Friston’s free energy minimization has been received with great enthusiasm. With good reason: it not only makes the bold claim to a unifying theory of the brain, but it is presented as an a priori principle applicable to living systems in general. In this article, we set out to show how the breadth of scope of Friston’s framework converges with the dialectics of Georg Hegel. Through an appeal to the work of Catherine Malabou, we aim to demonstrate how Friston not only reinvigorates Hegelian dialectics from the perspective of neuroscience, but that the implicit alignment with Hegel necessitates a reading of free energy minimization from the perspective of Hegel’s speculative philosophy. It is this reading that moves beyond the discussion between cognitivism and enactivism surrounding Friston’s framework; beyond the question whether the organism is a secluded entity separated from its surroundings, or whether it is a dynamical system characterized by perpetual openness and mutual exchange. From a Hegelian perspective, it is the tension between both positions itself that is operative at the level of the organism; as a contradiction the organism sustains over the course of its life. Not only does the organism’s secluded existence depend on a perpetual relation with its surroundings, but the condition for there to be such a relation is the existence of a secluded entity. We intend to show how this contradiction—tension internalized—is at the center of Friston’s anticipatory organism; how it is this contradiction that grounds the perpetual process of free energy minimization.) <|cite_end|>) or optimization (e.g., <|cite_start|> (Reference: Hyper-parameterized Dialectic Search for Non-linear Box-Constrained Optimization with Heterogenous Variable Types: ) <|cite_end|>).
\paragraph{Three moments of dialectics}
In his dialectical method, Hegel distinguishes three moments:
the \textit{moment of understanding}, the \textit{dialectical moment} and the \textit{speculative moment}.\footnote{These moments are often compared to the thesis-antithesis-synthesis triad, which was popularized by Heinrich Moritz Chalybäus, but cannot necessarily be equated to it as argued by, e.g., Mueller in <|cite_start|> (Reference: The Hegel Dictionary: ) <|cite_end|>.}
The moment of understanding refers to the initial, ``seemingly stable'' view or definition.
In the second moment, this supposed stability is lost due to the definition's one-sidedness or restrictedness, and the initial determination \textit{sublates} itself into its own negation.
The speculative moment unifies the first two determinations by negating the contradiction <|cite_start|> (Reference: Hegel’s {Dialectics: “Dialectics” is a term used to describe a method of philosophical argument that involves some sort of contradictory process between opposing sides. In what is perhaps the most classic version of “dialectics”, the ancient Greek philosopher, Plato (see entry on Plato), for instance, presented his philosophical argument as a back-and-forth dialogue or debate, generally between the character of Socrates, on one side, and some person or group of people to whom Socrates was talking (his interlocutors), on the other. In the course of the dialogues, Socrates’ interlocutors propose definitions of philosophical concepts or express views that Socrates challenges or opposes. The back-and-forth debate between opposing sides produces a kind of linear progression or evolution in philosophical views or positions: as the dialogues go along, Socrates’ interlocutors change or refine their views in response to Socrates’ challenges and come to adopt more sophisticated views. The back-and-forth dialectic between Socrates and his interlocutors thus becomes Plato’s way of arguing against the earlier, less sophisticated views or positions and for the more sophisticated ones later.) <|cite_end|>.
In the following, we present our proposed formalization of these three moments with respect to classification models. <|paper_end|> | [
"<|reference_start|> Hegel’s {Dialectics: “Dialectics” is a term used to describe a method of philosophical argument that involves some sort of contradictory process between opposing sides. In what is perhaps the most classic version of “dialectics”, the ancient Greek philosopher, Plato (see entry on Plato), for instance, presented his philosophical argument as a back-and-forth dialogue or debate, generally between the character of Socrates, on one side, and some person or group of people to whom Socrates was talking (his interlocutors), on the other. In the course of the dialogues, Socrates’ interlocutors propose definitions of philosophical concepts or express views that Socrates challenges or opposes. The back-and-forth debate between opposing sides produces a kind of linear progression or evolution in philosophical views or positions: as the dialogues go along, Socrates’ interlocutors change or refine their views in response to Socrates’ challenges and come to adopt more sophisticated views. The back-and-forth dialectic between Socrates and his interlocutors thus becomes Plato’s way of arguing against the earlier, less sophisticated views or positions and for the more sophisticated ones later. <|reference_end|>",
"<|reference_start|> Cortex: A Compiler for Recursive Deep Learning Models: Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations such as kernel fusion and (ii) low level kernel optimizations such as those found in vendor libraries. This approach often leaves significant performance on the table, especially for the case of recursive deep learning models. In this paper, we present Cortex, a compiler-based approach to generate highly-efficient code for recursive models for low latency inference. Our compiler approach and low reliance on vendor libraries enables us to perform end-to-end optimizations, leading to up to 14X lower inference latencies over past work, across different backends. <|reference_end|>",
"<|reference_start|> Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset. <|reference_end|>",
"<|reference_start|> Hyper-parameterized Dialectic Search for Non-linear Box-Constrained Optimization with Heterogenous Variable Types: <|reference_end|>"
] | [
2,
5,
6,
10
] | {"<|cite_1|>": "ss-1133770", "<|cite_2|>": "ss-793106", "<|cite_3|>": "ss-900794", "<|cite_4|>": "ss-1295421", "<|cite_5|>": "ss-1931218", "<|cite_6|>": "ss-900795", "<|cite_7|>": "ss-700939", "<|cite_8|>": "ss-900796", "<|cite_9|>": "ss-900797", "<|cite_10|>": "ss-900798", "<|cite_11|>": "ss-2519142", "<|cite_12|>": "ss-900799", "<|cite_13|>": "ss-900794"} |
2308.12329-1 | use a combination of examples and natural language to speed inference of
a constrained set of regular expressions. Internally, their system
generates an ``h-sketch'' as an intermediate result.
These h-sketches are partially-defined regular expressions that may include holes for
unknown regular expressions. Such h-sketches play a similar role to our
metagrammars: they denote sets of possible regular expressions and constrain
the search space for grammatical inference. However, our language is an
extension of YACC and is designed for humans, rather than being an
intermediate language. Furthermore, our metagrammars may be
reused, like libraries, across data sets. In contrast, each h-sketch is generated and used only once inside a compiler pipeline.
Related to the notion of grammatical inference is that of expression \emph{repair}. \RFixer <|cite_start|> (Reference: {Automatic repair of regular expressions: We introduce RFixer, a tool for repairing complex regular expressions using examples and only consider regular expressions without non-regular operators (e.g., negative lookahead). Given an incorrect regular expression and sets of positive and negative examples, RFixer synthesizes the closest regular expression to the original one that is consistent with the examples. Automatically repairing regular expressions requires exploring a large search space because practical regular expressions: i) are large, ii) operate over very large alphabets---e.g., UTF-16 and ASCII---and iii) employ complex constructs---e.g., character classes and numerical quantifiers. RFixer's repair algorithm achieves scalability by taking advantage of structural properties of regular expressions to effectively prune the search space, and it employs satisfiability modulo theory solvers to efficiently and symbolically explore the sets of possible character classes and numerical quantifiers. RFixer could successfully compute minimal repairs for regular expressions collected from a variety of sources, whereas existing tools either failed to produce any repair or produced overly complex repairs.) <|cite_end|> uses positive and negative examples to fix erroneous regular expressions. Both \RFixer{} and Saggitarius use similar algorithms to ensure positive examples are in the generated language, and negative examples are not. Both of these tools encode these constraints as MaxSMT formulas to ensure the generated grammars are optimal. Because RFixer does not have a metagrammar to orient the search, its constraints can only help find character sets that distinguish between the grammars. Saggitarius permits any constraints that are expressible in propositional logic, and the constraints can be over arbitrary productions, not merely character sets. One could see the RFixer algorithm as an instance of our algorithm, where the metagrammar constrains sets of allowed characters.
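To give a feel for the MaxSMT flavour of such encodings, the toy Python/Z3 sketch below (our illustration only; neither tool's actual constraint system, productions, nor objective) switches candidate productions on or off with Boolean variables, uses hard constraints so that every positive example is covered and no negative example is, and uses soft constraints to prefer smaller grammars.
\begin{verbatim}
# Toy illustration only: the general MaxSMT pattern, not RFixer's or
# Saggitarius' actual encoding. Productions and examples are made up.
from z3 import Bool, Or, Not, Optimize, is_true, sat

productions = {name: Bool(name) for name in ["digits", "word", "dash"]}
opt = Optimize()

# Hard: each positive example must be derivable by at least one enabled
# production (here, pretend "2023" is matched by `digits` or `word`).
opt.add(Or(productions["digits"], productions["word"]))

# Hard: no enabled production may derive a negative example
# (here, pretend `word` would also match the negative example "abc!").
opt.add(Not(productions["word"]))

# Soft: prefer grammars that enable fewer productions (optimality).
for b in productions.values():
    opt.add_soft(Not(b), weight=1)

if opt.check() == sat:
    m = opt.model()
    chosen = [n for n, b in productions.items()
              if is_true(m.evaluate(b, model_completion=True))]
    print("enabled productions:", chosen)   # -> ['digits']
\end{verbatim}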
\paragraph*{Syntax-guided Program Synthesis} Our work was inspired by the progress on
syntax-guided program synthesis over the past decade or so <|cite_start|> (Reference: Programming by Sketching for Bit-streaming Programs: This paper introduces the concept of programming with sketches, an approach for the rapid development of high-performance applications. This approach allows a programmer to write clean and portable reference code, and then obtain a high-quality implementation by simply sketching the outlines of the desired implementation. Subsequently, a compiler automatically fills in the missing details while also ensuring that a completed sketch is faithful to the input reference code. In this paper, we develop StreamBit as a sketching methodology for the important class of bit-streaming programs (e.g., coding and cryptography).A sketch is a partial specification of the implementation, and as such, it affords several benefits to programmer in terms of productivity and code robustness. First, a sketch is easier to write compared to a complete implementation. Second, sketching allows the programmer to focus on exploiting algorithmic properties rather than on orchestrating low-level details. Third, a sketch-aware compiler rejects "buggy" sketches, thus improving reliability while allowing the programmer to quickly evaluate sophisticated implementation ideas.We evaluated the productivity and performance benefits of our programming methodology in a user-study, where a group of novice StreamBit programmers competed with a group of experienced C programmers on implementing a cipher. We learned that, given the same time budget, the ciphers developed in StreamBit ran 2.5x faster than ciphers coded in C. We also produced implementations of DES and Serpent that were competitive with hand optimized implementations available in the public domain.) <|cite_end|> <|cite_start|> (Reference: {Combinatorial sketching for finite programs: Sketching is a software synthesis approach where the programmer develops a partial implementation - a sketch - and a separate specification of the desired functionality. The synthesizer then comple...) <|cite_end|> <|cite_start|> (Reference: {Search-Based Program Synthesis: A promising, useful tool for future programming development environments.) <|cite_end|>. Much of that
work has focused on data transformations, including
spreadsheet manipulation <|cite_start|> (Reference: Synthesis of Data Completion Scripts using Finite Tree Automata: In application domains that store data in a tabular format, a common task is to fill the values of some cells using values stored in other cells. For instance, such data completion tasks arise in the context of missing value imputation in data science and derived data computation in spreadsheets and relational databases. Unfortunately, end-users and data scientists typically struggle with many data completion tasks that require non-trivial programming expertise. This paper presents a synthesis technique for automating data completion tasks using programming-by-example (PBE) and a very lightweight sketching approach. Given a formula sketch (e.g., AVG($?_1$, $?_2$)) and a few input-output examples for each hole, our technique synthesizes a program to automate the desired data completion task. Towards this goal, we propose a domain-specific language (DSL) that combines spatial and relational reasoning over tabular data and a novel synthesis algorithm that can generate DSL programs that are consistent with the input-output examples. The key technical novelty of our approach is a new version space learning algorithm that is based on finite tree automata (FTA). The use of FTAs in the learning algorithm leads to a more compact representation that allows more sharing between programs that are consistent with the examples. We have implemented the proposed approach in a tool called DACE and evaluate it on 84 benchmarks taken from online help forums. We also illustrate the advantages of our approach by comparing our technique against two existing synthesizers, namely PROSE and SKETCH.) <|cite_end|> <|cite_start|> (Reference: FlashRelate: extracting relational data from semi-structured spreadsheets using examples: With hundreds of millions of users, spreadsheets are one of the most important end-user applications. Spreadsheets are easy to use and allow users great flexibility in storing data. This flexibility comes at a price: users often treat spreadsheets as a poor man's database, leading to creative solutions for storing high-dimensional data. The trouble arises when users need to answer queries with their data. Data manipulation tools make strong assumptions about data layouts and cannot read these ad-hoc databases. Converting data into the appropriate layout requires programming skills or a major investment in manual reformatting. The effect is that a vast amount of real-world data is "locked-in" to a proliferation of one-off formats. We introduce FlashRelate, a synthesis engine that lets ordinary users extract structured relational data from spreadsheets without programming. Instead, users extract data by supplying examples of output relational tuples. FlashRelate uses these examples to synthesize a program in Flare. Flare is a novel extraction language that extends regular expressions with geometric constructs. An interactive user interface on top of FlashRelate lets end users extract data by point-and-click. We demonstrate that correct Flare programs can be synthesized in seconds from a small set of examples for 43 real-world scenarios. Finally, our case study demonstrates FlashRelate's usefulness addressing the widespread problem of data trapped in corporate and government formats.) <|cite_end|>,
string transformations <|cite_start|> (Reference: Synthesizing Symmetric Lenses: Lenses are programs that can be run both "front to back" and "back to front," allowing updates to either their source or their target data to be transferred in both directions. Lenses have been extensively studied, extended, and applied. Recent work has demonstrated how techniques from type-directed program synthesis can be used to efficiently synthesize a simple class of lenses---bijective lenses over string data---given a pair of types (regular expressions) and examples. We extend this synthesis algorithm to a broader class of lenses, called simple symmetric lenses, including all bijective lenses, all of the popular category of "asymmetric" lenses, and a subset of the "symmetric lenses" proposed by Hofmann et al. Intuitively, simple symmetric lenses allow some information to be present on one side but not the other and vice versa. They are of independent theoretical interest, being the largest class of symmetric lenses that do not use persistent internal state. Synthesizing simple symmetric lenses is more challenging than synthesizing bijective lenses: Since some of the information on each side can be "disconnected" from the other side, there will typically be many lenses that agree with a given example. To guide the search process, we use stochastic regular expressions and information theory to estimate the amount of information propagated by a candidate lens, preferring lenses that propagate more information, as well as user annotations marking parts of the source and target formats as either irrelevant or essential. We describe an implementation of simple symmetric lenses and our synthesis procedure as extensions to the Boomerang language. We evaluate its performance on 48 benchmark examples drawn from Flash Fill, Augeas, and the bidirectional programming literature. Our implementation can synthesize each of these lenses in under 30 seconds.) <|cite_end|> <|cite_start|> (Reference: Synthesizing Bijective Lenses: Bidirectional transformations between different data representations occur frequently in modern software systems. They appear as serializers and deserializers, as database views and view updaters, and more. Manually building bidirectional transformations---by writing two separate functions that are intended to be inverses---is tedious and error prone. A better approach is to use a domain-specific language in which both directions can be written as a single expression. However, these domain-specific languages can be difficult to program in, requiring programmers to manage fiddly details while working in a complex type system. To solve this, we present Optician, a tool for type-directed synthesis of bijective string transformers. The inputs to Optician are two ordinary regular expressions representing two data formats and a few concrete examples for disambiguation. The output is a well-typed program in Boomerang (a bidirectional language based on the theory of lenses). The main technical challenge involves navigating the vast program search space efficiently enough. Unlike most prior work on type-directed synthesis, our system operates in the context of a language with a rich equivalence relation on types (the theory of regular expressions). We synthesize terms of a equivalent language and convert those generated terms into our lens language. We prove the correctness of our synthesis algorithm. 
We also demonstrate empirically that our new language changes the synthesis problem from one that admits intractable solutions to one that admits highly efficient solutions. We evaluate Optician on a benchmark suite of 39 examples including both microbenchmarks and realistic examples derived from other data management systems including Flash Fill, a tool for synthesizing string transformations in spreadsheets, and Augeas, a tool for bidirectional processing of Linux system configuration files.) <|cite_end|> <|cite_start|> (Reference: Program Synthesis using Abstraction Refinement: We present a new approach to example-guided program synthesis based on counterexample-guided abstraction refinement. Our method uses the abstract semantics of the underlying DSL to find a program $P$ whose abstract behavior satisfies the examples. However, since program $P$ may be spurious with respect to the concrete semantics, our approach iteratively refines the abstraction until we either find a program that satisfies the examples or prove that no such DSL program exists. Because many programs have the same input-output behavior in terms of their abstract semantics, this synthesis methodology significantly reduces the search space compared to existing techniques that use purely concrete semantics. While synthesis using abstraction refinement (SYNGAR) could be implemented in different settings, we propose a refinement-based synthesis algorithm that uses abstract finite tree automata (AFTA). Our technique uses a coarse initial program abstraction to construct an initial AFTA, which is iteratively refined by constructing a proof of incorrectness of any spurious program. In addition to ruling out the spurious program accepted by the previous AFTA, proofs of incorrectness are also useful for ruling out many other spurious programs. We implement these ideas in a framework called \tool. We have used the BLAZE framework to build synthesizers for string and matrix transformations, and we compare BLAZE with existing techniques. Our results for the string domain show that BLAZE compares favorably with FlashFill, a domain-specific synthesizer that is now deployed in Microsoft PowerShell. In the context of matrix manipulations, we compare BLAZE against Prose, a state-of-the-art general-purpose VSA-based synthesizer, and show that BLAZE results in a 90x speed-up over Prose.) <|cite_end|>, and information extraction <|cite_start|> (Reference: Flashextract: a framework for data extraction by examples: Various document types that combine model and view (e.g., text files, webpages, spreadsheets) make it easy to organize (possibly hierarchical) data, but make it difficult to extract raw data for any further manipulation or querying. We present a general framework FlashExtract to extract relevant data from semi-structured documents using examples. It includes: (a) an interaction model that allows end-users to give examples to extract various fields and to relate them in a hierarchical organization using structure and sequence constructs. (b) an inductive synthesis algorithm to synthesize the intended program from few examples in any underlying domain-specific language for data extraction that has been built using our specified algebra of few core operators (map, filter, merge, and pair). We describe instantiation of our framework to three different domains: text files, webpages, and spreadsheets. 
On our benchmark comprising 75 documents, FlashExtract is able to extract intended data using an average of 2.36 examples in 0.84 seconds per field.) <|cite_end|>. Such problems have much in common with our work, but they
have typically been set up as searches over a space of program transformation operations
rather than searches over collections of context-free grammar rules. Particularly
inspiring for our work was the development of FlashMeta <|cite_start|> (Reference: FlashMeta: a framework for inductive program synthesis: Inductive synthesis, or programming-by-examples (PBE) is gaining prominence with disruptive applications for automating repetitive tasks in end-user programming. However, designing, developing, and maintaining an effective industrial-quality inductive synthesizer is an intellectual and engineering challenge, requiring 1-2 man-years of effort. Our novel observation is that many PBE algorithms are a natural fall-out of one generic meta-algorithm and the domain-specific properties of the operators in the underlying domain-specific language (DSL). The meta-algorithm propagates example-based constraints on an expression to its subexpressions by leveraging associated witness functions, which essentially capture the inverse semantics of the underlying operator. This observation enables a novel program synthesis methodology called data-driven domain-specific deduction (D4), where domain-specific insight, provided by the DSL designer, is separated from the synthesis algorithm. Our FlashMeta framework implements this methodology, allowing synthesizer developers to generate an efficient synthesizer from the mere DSL definition (if properties of the DSL operators have been modeled). In our case studies, we found that 10+ existing industrial-quality mass-market applications based on PBE can be cast as instances of D4. Our evaluation includes reimplementation of some prior works, which in FlashMeta become more efficient, maintainable, and extensible. As a result, FlashMeta-based PBE tools are deployed in several industrial products, including Microsoft PowerShell 3.0 for Windows 10, Azure Operational Management Suite, and Microsoft Cortana digital assistant.) <|cite_end|>and Prose <|cite_start|> (Reference: Prose: Prose is a fabrication, not a linguistic axiom. It has a complex history well before its intricate literary genealogy. Made, not given, prose comes down to modern use with the form, formally determined, of a world-historical invention. As culturally significant in its evolutionary advent as in its ramified means of reporting event, prose thus bears with it a biography as telling as the fictional narratives it eventually serves to recount. Born of empiricism and print culture, prose is neither neutered poetry nor transcribed speech. Only its immediate ancestry is oratorical. Nonetheless, when “modern prose” is launched by leaving embellished declamatory models behind for the reign, first of epistemological lucidity, later of verisimilitude in narrative fiction, the oral is not thereby cancelled entirely. For prose, not unlike poetry, makes—and shapes—its way by incorporating the subvocal underlay of alphabetic (hence phonemic) language into the rhythms of its evoked readerly enunciation. It is in this fashion, by tapping its own linguistic platform or substrate, that prose comes to seem, more than otherwise, a medium rather than just one among several contested rhetorical means. Long after the modified or overthrown “plain style” taken up by early fiction like that of Daniel Defoe or Jonathan Swift, prose’s developing tendency to recover language’s silent phonetic resonance anticipates, in turn, one major Victorian inheritance from the complexities of Romantic verse sonority: a legacy that renders, ever afterward, the idea of “prose poetics” anything but an oxymoron. 
Here, too, is where the idea of “style” persists as an ongoing flashpoint for literary response. From Charles Dickens and Herman Melville to Joseph Conrad, for instance, we hear the potential sounding of theme in the depth charges of fictional prose. At the same time, from Jane Austen to Virginia Woolf, we can track an alternate mode of deflected orality in the “free indirect discourse” of surfaced inner speech—not overheard talk, these elicited mental monologues, but their own kind of artificial and subliminal eavesdropping—as they channel the cadences of represented psychology. Channel: in precisely that sense of a medium by which prose can best be understood and studied, both in the ecology of modern literary communication and in its reframing by media theory.) <|cite_end|>. These systems are ``meta'' program synthesis engines---they help
engineers design program synthesis tools for different domain-specific languages. Similarly, \Sys is a
``meta'' framework for syntax-guided grammar induction, helping users perform grammar induction in their own domain-specific contexts. Of course, \Sys, FlashMeta, and Prose differ greatly in the specifics of their language and system designs and of the underlying search algorithms they implement.
\paragraph*{Logic Program Synthesis} We were also inspired by work on Inductive Logic Programming <|cite_start|> (Reference: Logical and relational learning: ) <|cite_end|>, and Logic Program Synthesis <|cite_start|> (Reference: Provenance-Guided Synthesis of Datalog Programs: We propose a new approach to synthesize Datalog programs from input-output specifications. Our approach leverages query provenance to scale the counterexample-guided inductive synthesis (CEGIS) procedure for program synthesis. In each iteration of the procedure, a SAT solver proposes a candidate Datalog program, and a Datalog solver evaluates the proposed program to determine whether it meets the desired specification. Failure to satisfy the specification results in additional constraints to the SAT solver. We propose efficient algorithms to learn these constraints based on “why” and “why not” provenance information obtained from the Datalog solver. We have implemented our approach in a tool called ProSynth and present experimental results that demonstrate significant improvements over the state-of-the-art, including in synthesizing invented predicates, reducing running times, and in decreasing variances in synthesis performance. On a suite of 40 synthesis tasks from three different domains, ProSynth is able to synthesize the desired program in 10 seconds on average per task—an order of magnitude faster than baseline approaches—and takes only under a second each for 28 of them.) <|cite_end|> <|cite_start|> (Reference: Synthesizing Datalog Programs Using Numerical Relaxation: The problem of learning logical rules from examples arises in diverse fields, including program synthesis, logic programming, and machine learning. Existing approaches either involve solving computationally difficult combinatorial problems, or performing parameter estimation in complex statistical models. In this paper, we present Difflog, a technique to extend the logic programming language Datalog to the continuous setting. By attaching real-valued weights to individual rules of a Datalog program, we naturally associate numerical values with individual conclusions of the program. Analogous to the strategy of numerical relaxation in optimization problems, we can now first determine the rule weights which cause the best agreement between the training labels and the induced values of output tuples, and subsequently recover the classical discrete-valued target program from the continuous optimum. We evaluate Difflog on a suite of 34 benchmark problems from recent literature in knowledge discovery, formal verification, and database query-by-example, and demonstrate significant improvements in learning complex programs with recursive rules, invented predicates, and relations of arbitrary arity.) <|cite_end|>. Parsing with context-free grammars is a special case of logic programming so it was natural to investigate whether inductive logic programming algorithms would work well here. ProSynth <|cite_start|> (Reference: Provenance-Guided Synthesis of Datalog Programs: We propose a new approach to synthesize Datalog programs from input-output specifications. Our approach leverages query provenance to scale the counterexample-guided inductive synthesis (CEGIS) procedure for program synthesis. In each iteration of the procedure, a SAT solver proposes a candidate Datalog program, and a Datalog solver evaluates the proposed program to determine whether it meets the desired specification. 
Failure to satisfy the specification results in additional constraints to the SAT solver. We propose efficient algorithms to learn these constraints based on “why” and “why not” provenance information obtained from the Datalog solver. We have implemented our approach in a tool called ProSynth and present experimental results that demonstrate significant improvements over the state-of-the-art, including in synthesizing invented predicates, reducing running times, and in decreasing variances in synthesis performance. On a suite of 40 synthesis tasks from three different domains, ProSynth is able to synthesize the desired program in 10 seconds on average per task—an order of magnitude faster than baseline approaches—and takes only under a second each for 28 of them.) <|cite_end|>is a state-of-the-art algorithm in this field so we experimented with it as a tool for
grammatical inference. However, we found our custom algorithm almost always
outperformed ProSynth on grammatical inference tasks. <|paper_end|> | [
"<|reference_start|> {Search-Based Program Synthesis: A promising, useful tool for future programming development environments. <|reference_end|>",
"<|reference_start|> Synthesis of Data Completion Scripts using Finite Tree Automata: In application domains that store data in a tabular format, a common task is to fill the values of some cells using values stored in other cells. For instance, such data completion tasks arise in the context of missing value imputation in data science and derived data computation in spreadsheets and relational databases. Unfortunately, end-users and data scientists typically struggle with many data completion tasks that require non-trivial programming expertise. This paper presents a synthesis technique for automating data completion tasks using programming-by-example (PBE) and a very lightweight sketching approach. Given a formula sketch (e.g., AVG($?_1$, $?_2$)) and a few input-output examples for each hole, our technique synthesizes a program to automate the desired data completion task. Towards this goal, we propose a domain-specific language (DSL) that combines spatial and relational reasoning over tabular data and a novel synthesis algorithm that can generate DSL programs that are consistent with the input-output examples. The key technical novelty of our approach is a new version space learning algorithm that is based on finite tree automata (FTA). The use of FTAs in the learning algorithm leads to a more compact representation that allows more sharing between programs that are consistent with the examples. We have implemented the proposed approach in a tool called DACE and evaluate it on 84 benchmarks taken from online help forums. We also illustrate the advantages of our approach by comparing our technique against two existing synthesizers, namely PROSE and SKETCH. <|reference_end|>",
"<|reference_start|> FlashRelate: extracting relational data from semi-structured spreadsheets using examples: With hundreds of millions of users, spreadsheets are one of the most important end-user applications. Spreadsheets are easy to use and allow users great flexibility in storing data. This flexibility comes at a price: users often treat spreadsheets as a poor man's database, leading to creative solutions for storing high-dimensional data. The trouble arises when users need to answer queries with their data. Data manipulation tools make strong assumptions about data layouts and cannot read these ad-hoc databases. Converting data into the appropriate layout requires programming skills or a major investment in manual reformatting. The effect is that a vast amount of real-world data is \"locked-in\" to a proliferation of one-off formats. We introduce FlashRelate, a synthesis engine that lets ordinary users extract structured relational data from spreadsheets without programming. Instead, users extract data by supplying examples of output relational tuples. FlashRelate uses these examples to synthesize a program in Flare. Flare is a novel extraction language that extends regular expressions with geometric constructs. An interactive user interface on top of FlashRelate lets end users extract data by point-and-click. We demonstrate that correct Flare programs can be synthesized in seconds from a small set of examples for 43 real-world scenarios. Finally, our case study demonstrates FlashRelate's usefulness addressing the widespread problem of data trapped in corporate and government formats. <|reference_end|>",
"<|reference_start|> Synthesizing Bijective Lenses: Bidirectional transformations between different data representations occur frequently in modern software systems. They appear as serializers and deserializers, as database views and view updaters, and more. Manually building bidirectional transformations---by writing two separate functions that are intended to be inverses---is tedious and error prone. A better approach is to use a domain-specific language in which both directions can be written as a single expression. However, these domain-specific languages can be difficult to program in, requiring programmers to manage fiddly details while working in a complex type system. To solve this, we present Optician, a tool for type-directed synthesis of bijective string transformers. The inputs to Optician are two ordinary regular expressions representing two data formats and a few concrete examples for disambiguation. The output is a well-typed program in Boomerang (a bidirectional language based on the theory of lenses). The main technical challenge involves navigating the vast program search space efficiently enough. Unlike most prior work on type-directed synthesis, our system operates in the context of a language with a rich equivalence relation on types (the theory of regular expressions). We synthesize terms of a equivalent language and convert those generated terms into our lens language. We prove the correctness of our synthesis algorithm. We also demonstrate empirically that our new language changes the synthesis problem from one that admits intractable solutions to one that admits highly efficient solutions. We evaluate Optician on a benchmark suite of 39 examples including both microbenchmarks and realistic examples derived from other data management systems including Flash Fill, a tool for synthesizing string transformations in spreadsheets, and Augeas, a tool for bidirectional processing of Linux system configuration files. <|reference_end|>"
] | [
3,
4,
5,
7
] | {"<|cite_2|>": "ss-1191553", "<|cite_5|>": "ss-1010551", "<|cite_6|>": "ss-1191553", "<|cite_9|>": "arxiv-182119", "<|multi_cite_10_1|>": "ss-1725233", "<|multi_cite_10_2|>": "ss-1725234", "<|multi_cite_12_1|>": "ss-1725235", "<|multi_cite_12_2|>": "ss-713328", "<|multi_cite_12_3|>": "ss-1017108", "<|multi_cite_12_4|>": "ss-988699", "<|multi_cite_12_5|>": "ss-1725236", "<|multi_cite_12_6|>": "ss-1695323", "<|cite_13|>": "arxiv-218103", "<|cite_14|>": "ss-1055164", "<|multi_cite_15_1|>": "ss-1055164", "<|multi_cite_15_2|>": "arxiv-218103", "<|multi_cite_19_1|>": "ss-1725235", "<|multi_cite_19_2|>": "ss-713328", "<|multi_cite_19_3|>": "ss-1017108", "<|multi_cite_19_4|>": "ss-988699", "<|multi_cite_19_5|>": "ss-1725236", "<|multi_cite_19_6|>": "ss-1695323", "<|cite_20|>": "arxiv-218103", "<|cite_21|>": "ss-1055164", "<|multi_cite_22_1|>": "ss-1055164", "<|multi_cite_22_2|>": "arxiv-218103", "<|cite_24|>": "arxiv-199420", "<|cite_25|>": "ss-897564", "<|multi_cite_26_1|>": "arxiv-136828", "<|multi_cite_26_2|>": "arxiv-177772", "<|multi_cite_26_3|>": "ss-1142845", "<|cite_27|>": "ss-1725237", "<|cite_28|>": "arxiv-182119", "<|cite_29|>": "arxiv-182119", "<|cite_30|>": "arxiv-182119", "<|multi_cite_32_1|>": "ss-1725233", "<|multi_cite_32_2|>": "ss-1725234", "<|multi_cite_33_1|>": "ss-911062", "<|multi_cite_33_2|>": "ss-1104209", "<|multi_cite_33_3|>": "ss-1388508", "<|cite_34|>": "ss-713328", "<|multi_cite_35_1|>": "ss-1017108", "<|multi_cite_35_2|>": "ss-988699", "<|cite_36|>": "arxiv-134848", "<|cite_37|>": "arxiv-668432", "<|cite_38|>": "ss-1010551", "<|cite_39|>": "ss-1191553", "<|cite_40|>": "ss-1055164", "<|cite_41|>": "arxiv-103434", "<|cite_42|>": "arxiv-218103", "<|cite_43|>": "ss-1064959", "<|multi_cite_44_1|>": "ss-911062", "<|multi_cite_44_2|>": "ss-1104209", "<|multi_cite_44_4|>": "ss-1388508", "<|multi_cite_45_2|>": "arxiv-128538", "<|multi_cite_45_3|>": "ss-891729", "<|multi_cite_46_1|>": "arxiv-177772", "<|multi_cite_46_2|>": "arxiv-136828", "<|multi_cite_46_3|>": "arxiv-137808", "<|multi_cite_47_1|>": "ss-1018109", "<|cite_48|>": "ss-1288219", "<|cite_49|>": "ss-1725237", "<|cite_50|>": "ss-1699901", "<|multi_cite_51_1|>": "ss-1231258", "<|multi_cite_51_2|>": "arxiv-207340", "<|cite_52|>": "ss-1231258"} |
2312.13927 | <|paper_start|> Title: On the Convergence of Loss and Uncertainty-based Active Learning Algorithms
Abstract: On the Convergence of Loss and Uncertainty-based Active Learning Algorithms: We investigate the convergence rates and data sample sizes required for training a machine learning model using a stochastic gradient descent (SGD) algorithm, where data points are sampled based on either their loss value or uncertainty value. These training methods are particularly relevant for active learning and data subset selection problems. For SGD with a constant step size update, we present convergence results for linear classifiers and linearly separable datasets using squared hinge loss and similar training loss functions. Additionally, we extend our analysis to more general classifiers and datasets, considering a wide range of loss-based sampling strategies and smooth convex training loss functions. We propose a novel algorithm called Adaptive-Weight Sampling (AWS) that utilizes SGD with an adaptive step size that achieves stochastic Polyak's step size in expectation. We establish convergence rate results for AWS for smooth convex training loss functions. Our numerical experiments demonstrate the efficiency of AWS on various datasets by using either exact or estimated loss values.
Introduction
In many practical settings, such as computer vision, natural language processing, and speech recognition, machine learning models must be trained for prediction tasks (classification or regression) in situations where unlabelled data are abundant but access to the corresponding labels is costly. Active learning algorithms aim to learn a prediction model efficiently by using a label acquisition strategy, with the goal of minimizing the number of labels used to train the model.
Different label acquisition strategies have been proposed which - in one way or another - aim at selecting informative points for the underlying model training task. A popular label acquisition strategy is based on estimating uncertainty, which may be seen as a self-disagreement about prediction by given prediction model. We refer to algorithms using an uncertainty acquisition strategy as \emph{uncertainty-based} active learning algorithms. An approach that prioritizes querying of labels for points with high estimated loss has been discussed by <|cite_start|> (Reference: Learning Loss for Active Learning: The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks human to annotate data that it perceived as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks but most of them are either designed specific for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with the deep networks. We attach a small parametric module, named "loss prediction module," to a target network, and learn it to predict target losses of unlabeled inputs. Then, this module can suggest data that the target model is likely to produce a wrong prediction. This method is task-agnostic as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation, with the recent network architectures. The results demonstrate that our method consistently outperforms the previous methods over the tasks.) <|cite_end|> <|cite_start|> (Reference: DEUP: Direct Epistemic Uncertainty Prediction: Epistemic Uncertainty is a measure of the lack of knowledge of a learner which diminishes with more evidence. While existing work focuses on using the variance of the Bayesian posterior due to parameter uncertainty as a measure of epistemic uncertainty, we argue that this does not capture the part of lack of knowledge induced by model misspecification. We discuss how the excess risk, which is the gap between the generalization error of a predictor and the Bayes predictor, is a sound measure of epistemic uncertainty which captures the effect of model misspecification. We thus propose a principled framework for directly estimating the excess risk by learning a secondary predictor for the generalization error and subtracting an estimate of aleatoric uncertainty, i.e., intrinsic unpredictability. We discuss the merits of this novel measure of epistemic uncertainty, and highlight how it differs from variance-based measures of epistemic uncertainty and addresses its major pitfall. Our framework, Direct Epistemic Uncertainty Prediction (DEUP) is particularly interesting in interactive learning environments, where the learner is allowed to acquire novel examples in each round. Through a wide set of experiments, we illustrate how existing methods in sequential model optimization can be improved with epistemic uncertainty estimates from DEUP, and how DEUP can be used to drive exploration in reinforcement learning. We also evaluate the quality of uncertainty estimates from DEUP for probabilistic image classification and predicting synergies of drug combinations.) 
<|cite_end|> <|cite_start|> (Reference: Loss-based active learning for named entity recognition: This paper addresses the practical issue of lacking training data when building named entity recognition (NER) systems. To this aim, we introduce a new active learning method for reducing the number of training samples required by the underlying NER system. Different from prior work that only focuses on training data, we define a new loss function that when estimating loss and uncertainty scores of training samples for selection, it takes also into account the uncertainty of the $K$ unlabelled test instances most similar to the unlabelled training instances. Experimental results on both general domain and clinical benchmark datasets show that the proposed active learning method allows to train the NER system with between 5% to 7% less training data compared to state of the art uncertainty sampling methods, while retaining high NER effectiveness.) <|cite_end|> <|cite_start|> (Reference: Loss prediction: End-to-end active learning approach for speech recognition: End-to-end speech recognition systems usually require huge amounts of labeling resource, while annotating the speech data is complicated and expensive. Active learning is the solution by selecting the most valuable samples for annotation. In this paper, we proposed to use a predicted loss that estimates the uncertainty of the sample. The CTC (Connectionist Temporal Classification) and attention loss are informative for speech recognition since they are computed based on all decoding paths and alignments. We defined an end-to-end active learning pipeline, training an ASR/LP (Automatic Speech Recognition/Loss Prediction) joint model. The proposed approach was validated on an English and a Chinese speech recognition task. The experiments show that our approach achieves competitive results, outperforming random selection, least confidence, and estimated loss method.) <|cite_end|>. This approach may be seen as selecting points for which there is a high disagreement between the prediction model and an oracle, measured by a loss function. We refer to algorithms using an acquisition function depending on a loss as \emph{loss-based} active learning algorithms.
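To make the loss-based acquisition rule concrete, the following Python sketch scores a streamed point by its expected loss under a hypothetical oracle posterior over labels and turns that score into a query probability. This is only an illustration, not a reproduction of any of the cited methods: the logistic loss, the oracle posterior \texttt{p\_pos}, and the map from loss to probability are all assumptions made for the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def expected_logistic_loss(w, x, p_pos):
    """Expected logistic loss of a linear scorer w at point x, where p_pos is a
    (hypothetical) oracle estimate of P(y = +1 | x)."""
    margin = x @ w
    loss_pos = np.log1p(np.exp(-margin))   # loss incurred if y = +1
    loss_neg = np.log1p(np.exp(margin))    # loss incurred if y = -1
    return p_pos * loss_pos + (1.0 - p_pos) * loss_neg

def query_probability(expected_loss, scale=1.0):
    """One illustrative increasing map from expected loss to a query probability."""
    return 1.0 - np.exp(-scale * expected_loss)

# Decide whether to request the label of a single streamed point.
w = np.zeros(5)                      # current model parameter
x = rng.normal(size=5)               # feature vector of the streamed point
p_pos = 0.9                          # hypothetical oracle posterior for this point
q = query_probability(expected_logistic_loss(w, x, p_pos))
request_label = rng.random() < q
\end{verbatim}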
Convergence guarantees for some common uncertainty sampling strategies have been established only recently, e.g. for margin of confidence sampling <|cite_start|> (Reference: Convergence of Uncertainty Sampling for Active Learning: Uncertainty sampling in active learning is heavily used in practice to reduce the annotation cost. However, there has been no wide consensus on the function to be used for uncertainty estimation in binary classification tasks and convergence guarantees of the corresponding active learning algorithms are not well understood. The situation is even more challenging for multi-category classification. In this work, we propose an efficient uncertainty estimator for binary classification which we also extend to multiple classes, and provide a non-asymptotic rate of convergence for our uncertainty sampling-based active learning algorithm in both cases under no-noise conditions (i.e., linearly separable data). We also extend our analysis to the noisy case and provide theoretical guarantees for our algorithm under the influence of noise in the task of binary and multi-class classification.) <|cite_end|>. For loss-based active learning algorithms, there are limited results on their convergence properties. Other active learning strategies include query-by-committee <|cite_start|> (Reference: Query by committee: We propose an algorithm called query by commitee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.) <|cite_end|>, expected model change <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>, expected error reduction <|cite_start|> (Reference: Toward Optimal Active Learning through Sampling Estimation of Error
Reduction: This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. These other meth-ods are popular because for many learning models, closed form calculation of the expected future error is intractable. Our approach is made feasible by taking a sampling approach to estimating the expected reduction in error due to the labeling of a query. In experimental results on two real-world data sets we reach high accuracy very quickly, sometimes with four times fewer labeled examples than competing methods.) <|cite_end|>, expected variance reduction <|cite_start|> (Reference: Ambiguity-based multiclass active learning: Most existing works on active learning (AL) focus on binary classification problems, which limit their applications in various real-world scenarios. One solution to multiclass AL (MAL) is evaluating the informativeness of unlabeled samples by an uncertainty model and selecting the most uncertain one for query. In this paper, an ambiguity-based strategy is proposed to tackle this problem by applying a possibility approach. First, the possibilistic memberships of unlabeled samples in the multiple classes are calculated from the one-against-all-based support vector machine model. Then, by employing fuzzy logic operators, these memberships are aggregated into a new concept named k-order ambiguity, which estimates the risk of labeling a sample among k classes. Afterward, the k-order ambiguities are used to form an overall ambiguity measure to evaluate the uncertainty of the unlabeled samples. Finally, the sample with the maximum ambiguity is selected for query, and a new MAL strategy is developed. Experiments demonstrate the feasibility and effectiveness of the proposed method.) <|cite_end|>, and mutual information maximization <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: Unifying Approaches in Active Learning and Active Sampling via Fisher Information and Information-Theoretic Quantities: Recently proposed methods in data subset selection, that is active learning and active sampling, use Fisher information, Hessians, similarity matrices based on gradients, and gradient lengths to estimate how informative data is for a model's training. Are these different approaches connected, and if so, how? We revisit the fundamentals of Bayesian optimal experiment design and show that these recently proposed methods can be understood as approximations to information-theoretic quantities: among them, the mutual information between predictions and model parameters, known as expected information gain or BALD in machine learning, and the mutual information between predictions of acquisition candidates and test samples, known as expected predictive information gain. We develop a comprehensive set of approximations using Fisher information and observed information and derive a unified framework that connects seemingly disparate literature. Although Bayesian methods are often seen as separate from non-Bayesian ones, the sometimes fuzzy notion of"informativeness"expressed in various non-Bayesian objectives leads to the same couple of information quantities, which were, in principle, already known by Lindley (1956) and MacKay (1992).) <|cite_end|>.
The focus of this paper is on establishing convergence guarantees for loss-based and uncertainty-based active learning. We study stream-based active learning algorithms in which model training is performed by a stochastic gradient descent algorithm combined with a loss-based or uncertainty-based label acquisition function. For loss-based sampling, we assume that the active learner has access to an oracle providing an unbiased estimate of the expected loss of a point, conditional on the feature vector of the point and the current model parameter. Our results can be seen as a first step towards understanding loss-based sampling strategies under an oracle that knows the conditional distribution of a point's label. In our experiments, we evaluate the effect of noise in the loss estimates.
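As an illustration of this stream-based setting, the sketch below runs constant-step-size SGD on a synthetic linearly separable stream and queries a label with a probability that increases with the squared hinge loss of the point. The data-generating process, the loss-to-probability map, and the step size are assumptions made only for this toy example; the loss used for acquisition is computed with the true label purely to drive the simulation, standing in for the oracle estimate assumed in the analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linearly separable stream (an illustrative setup, not the paper's data).
d, n = 10, 2000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

def sq_hinge_loss(w, x, y):
    return max(0.0, 1.0 - y * (x @ w)) ** 2

def sq_hinge_grad(w, x, y):
    return -2.0 * max(0.0, 1.0 - y * (x @ w)) * y * x

w = np.zeros(d)
eta = 0.1                 # constant SGD step size (a hyperparameter assumption)
labels_used = 0
for t in range(n):
    x_t, y_t = X[t], y[t]
    ell = sq_hinge_loss(w, x_t, y_t)    # stands in for the oracle loss estimate
    p_query = min(1.0, ell)             # one possible increasing map from loss to probability
    if rng.random() < p_query:
        labels_used += 1
        w -= eta * sq_hinge_grad(w, x_t, y_t)

print(labels_used, np.mean(np.sign(X @ w) == y))
\end{verbatim}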
Our contributions can be summarised as follows:
$\bullet$ We provide a set of conditions under which a non-asymptotic rate of convergence of order $O(1/n)$ holds, where $n$ is the number of iterations of the algorithm, i.e., the number of unlabeled points presented to the algorithm. This set of conditions allows us to show convergence rate results for loss-based sampling on linearly separable datasets when the loss function is the squared hinge loss, a generalised hinge loss, or satisfies certain other conditions. We show bounds for both the expected loss and the number of sampled points. We provide a key lemma that allows us to obtain convergence rate results under various assumptions, for both loss-based and uncertainty-based sampling. This lemma generalizes the proof technique used by <|cite_start|> (Reference: Convergence of Uncertainty Sampling for Active Learning: Uncertainty sampling in active learning is heavily used in practice to reduce the annotation cost. However, there has been no wide consensus on the function to be used for uncertainty estimation in binary classification tasks and convergence guarantees of the corresponding active learning algorithms are not well understood. The situation is even more challenging for multi-category classification. In this work, we propose an efficient uncertainty estimator for binary classification which we also extend to multiple classes, and provide a non-asymptotic rate of convergence for our uncertainty sampling-based active learning algorithm in both cases under no-noise conditions (i.e., linearly separable data). We also extend our analysis to the noisy case and provide theoretical guarantees for our algorithm under the influence of noise in the task of binary and multi-class classification.) <|cite_end|> to prove a convergence rate for their family of uncertainty sampling strategies. The lemma may be of independent interest for further studies of convergence properties.
$\bullet$ We provide a framework for establishing convergence rate bounds when points are sampled according to an increasing function of their expected conditional loss. This is based on showing that, for such sampling strategies, the algorithm is a stochastic gradient descent algorithm with respect to an underlying objective function, and thus known convergence rate results for stochastic gradient descent can be deployed. This framework allows us to cover a larger set of loss functions than our first set of conditions.
$\bullet$ We propose an active learning algorithm that combines a label sampling strategy with an adaptive step size stochastic gradient descent update based on the stochastic Polyak step size <|cite_start|> (Reference: Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence: We propose a stochastic variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method. Although computing the Polyak step-size requires knowledge of the optimal function values, this information is readily available for typical modern machine learning applications. Consequently, the proposed stochastic Polyak step-size (SPS) is an attractive choice for setting the learning rate for stochastic gradient descent (SGD). We provide theoretical convergence guarantees for SGD equipped with SPS in different settings, including strongly convex, convex and non-convex functions. Furthermore, our analysis results in novel convergence guarantees for SGD with a constant step-size. We show that SPS is particularly effective when training over-parameterized models capable of interpolating the training data. In this setting, we prove that SPS enables SGD to converge to the true solution at a fast rate without requiring the knowledge of any problem-dependent constants or additional computational overhead. We experimentally validate our theoretical results via extensive experiments on synthetic and real datasets. We demonstrate the strong performance of SGD with SPS compared to state-of-the-art optimization methods when training over-parameterized models.) <|cite_end|> (a minimal illustrative sketch of this step-size rule is given after this list). We show a condition on the sampling strategy under which a non-asymptotic convergence rate of order $O(1/n)$ holds for smooth convex loss functions.
$\bullet$ Our conditions are expressed in general terms, allowing us to accommodate binary and multi-class classification tasks, as well as regression tasks. We focus on applying our conditions to classification tasks.
$\bullet$ We show numerical results that demonstrate the efficiency of sampling with the stochastic Polyak step size and its robustness to loss estimation noise.
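As referenced in the third contribution above, the sketch below shows one SGD step with the stochastic Polyak step size of <|cite_start|> (Reference: Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence) <|cite_end|>, $\gamma_t = (f_{i_t}(w_t) - f_{i_t}^*)/(c\,\|\nabla f_{i_t}(w_t)\|^2)$. Taking $f_{i_t}^* = 0$ reflects an interpolation assumption and $c$ is a hyperparameter; the coupling between this step size and the label-sampling probabilities that defines AWS is not reproduced here, so this is only a sketch of the underlying update.
\begin{verbatim}
import numpy as np

def sps_update(w, grad, loss_value, c=0.5, f_star=0.0, eps=1e-12):
    """One SGD step with the stochastic Polyak step size (SPS):
    gamma = (f_i(w) - f_i^*) / (c * ||grad f_i(w)||^2).
    f_star = 0 corresponds to an interpolation assumption; c is a hyperparameter."""
    gamma = max(loss_value - f_star, 0.0) / (c * float(grad @ grad) + eps)
    return w - gamma * grad

# Usage on a single logistic-loss example (illustrative only).
rng = np.random.default_rng(2)
w = np.zeros(4)
x, y = rng.normal(size=4), 1.0
loss = np.log1p(np.exp(-y * (x @ w)))
grad = -y * x / (1.0 + np.exp(y * (x @ w)))
w = sps_update(w, grad, loss)
\end{verbatim}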
\subsection{Related work}
Early proposal of the query-by-committee (QBC) algorithm <|cite_start|> (Reference: Query by committee: We propose an algorithm called query by commitee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.) <|cite_end|> demonstrated benefits of active learning, which was analyzed under the selective sampling model by <|cite_start|> (Reference: Selective Sampling Using the Query by Committee Algorithm: ) <|cite_end|> and <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>. <|cite_start|> (Reference: Analysis of Perceptron-Based Active Learning: ) <|cite_end|> showed that the performance of QBC can be achieved by a modified perceptron algorithm
whose complexity of an update does not increase with the number of updates. Efficient and label-optimal learning of halfspaces was studied by <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> and by <|cite_start|> (Reference: On the Power of Localized Perceptron for Label-Optimal Learning of Halfspaces with Adversarial Noise: We study {\em online} active learning of homogeneous halfspaces in $\mathbb{R}^d$ with adversarial noise where the overall probability of a noisy label is constrained to be at most $\nu$. Our main contribution is a Perceptron-like online active learning algorithm that runs in polynomial time, and under the conditions that the marginal distribution is isotropic log-concave and $\nu = \Omega(\epsilon)$, where $\epsilon \in (0, 1)$ is the target error rate, our algorithm PAC learns the underlying halfspace with near-optimal label complexity of $\tilde{O}\big(d \cdot polylog(\frac{1}{\epsilon})\big)$ and sample complexity of $\tilde{O}\big(\frac{d}{\epsilon} \big)$. Prior to this work, existing online algorithms designed for tolerating the adversarial noise are subject to either label complexity polynomial in $\frac{1}{\epsilon}$, or suboptimal noise tolerance, or restrictive marginal distributions. With the additional prior knowledge that the underlying halfspace is $s$-sparse, we obtain attribute-efficient label complexity of $\tilde{O}\big( s \cdot polylog(d, \frac{1}{\epsilon}) \big)$ and sample complexity of $\tilde{O}\big(\frac{s}{\epsilon} \cdot polylog(d) \big)$. As an immediate corollary, we show that under the agnostic model where no assumption is made on the noise rate $\nu$, our active learner achieves an error rate of $O(OPT) + \epsilon$ with the same running time and label and sample complexity, where $OPT$ is the best possible error rate achievable by any homogeneous halfspace.) <|cite_end|>. Online active learning algorithms have been studied under the name of selective sampling, e.g. <|cite_start|> (Reference: Worst-Case Analysis of Selective Sampling for Linear Classification: A selective sampling algorithm is a learning algorithm for classification that, based on the past observed data, decides whether to ask the label of each new instance to be classified. In this paper, we introduce a general technique for turning linear-threshold classification algorithms from the general additive family into randomized selective sampling algorithms. For the most popular algorithms in this family we derive mistake bounds that hold for individual sequences of examples. These bounds show that our semi-supervised algorithms can achieve, on average, the same accuracy as that of their fully supervised counterparts, but using fewer labels. Our theoretical results are corroborated by a number of experiments on real-world textual data. The outcome of these experiments is essentially predicted by our theoretical results: Our selective sampling algorithms tend to perform as well as the algorithms receiving the true label after each classification, while observing in practice substantially fewer labels.) <|cite_end|> <|cite_start|> (Reference: Robust bounds for classification via selective sampling: We introduce a new algorithm for binary classification in the selective sampling protocol. Our algorithm uses Regularized Least Squares (RLS) as base classifier, and for this reason it can be efficiently run in any RKHS. 
Unlike previous margin-based semi-supervised algorithms, our sampling condition hinges on a simultaneous upper bound on bias and variance of the RLS estimate under a simple linear label noise model. This fact allows us to prove performance bounds that hold for an arbitrary sequence of instances. In particular, we show that our sampling strategy approximates the margin of the Bayes optimal classifier to any desired accuracy ε by asking Õ (d/ε2) queries (in the RKHS case d is replaced by a suitable spectral quantity). While these are the standard rates in the fully supervised i.i.d. case, the best previously known result in our harder setting was Õ (d3/ε4). Preliminary experiments show that some of our algorithms also exhibit a good practical performance.) <|cite_end|> <|cite_start|> (Reference: Selective sampling and active learning from single and multiple teachers: We present a new online learning algorithm in the selective sampling framework, where labels must be actively queried before they are revealed. We prove bounds on the regret of our algorithm and on the number of labels it queries when faced with an adaptive adversarial strategy of generating the instances. Our bounds both generalize and strictly improve over previous bounds in similar settings. Additionally, our selective sampling algorithm can be converted into an efficient statistical active learning algorithm. We extend our algorithm and analysis to the multiple-teacher setting, where the algorithm can choose which subset of teachers to query for each label. Finally, we demonstrate the effectiveness of our techniques on a real-world Internet search problem.) <|cite_end|> <|cite_start|> (Reference: Better algorithms for selective sampling: We study online algorithms for selective sampling that use regularized least squares (RLS) as base classifier. These algorithms typically perform well in practice, and some of them have formal guarantees on their mistake and query rates. We refine and extend these guarantees in various ways, proposing algorithmic variants that exhibit better empirical behavior while enjoying performance guarantees under much more general conditions. We also show a simple way of coupling a generic gradient-based classifier with a specific RLS-based selective sampler, obtaining hybrid algorithms with combined performance guarantees.) <|cite_end|> <|cite_start|> (Reference: Learning noisy linear classifiers via adaptive and selective sampling: ) <|cite_end|> <|cite_start|> (Reference: Selective sampling algorithms for cost-sensitive multiclass prediction: In this paper, we study the problem of active learning for cost-sensitive multiclass classification. We propose selective sampling algorithms, which process the data in a streaming fashion, querying only a subset of the labels. For these algorithms, we analyze the regret and label complexity when the labels are generated according to a generalized linear model. We establish that the gains of active learning over passive learning can range from none to exponentially large, based on a natural notion of margin. We also present a safety guarantee to guard against model mismatch. Numerical simulations show that our algorithms indeed obtain a low regret with a small number of queries.) <|cite_end|>. 
See <|cite_start|> (Reference: Algorithms For Reinforcement Learning Synthesis Lectures On Artificial Intelligence And Machine Learning: The articles presented here were selected from preliminary versions presented at the International Conference on Genetic Algorithms in June 1991, as well as at a special Workshop on Genetic Algorithms for Machine Learning at the same Conference. Genetic algorithms are general-purpose search algorithms that use principles inspired by natural population genetics to evolve solutions to problems. The basic idea is to maintain a population of knowledge structure that represent candidate solutions to the problem of interest. The population evolves over time through a process of competition (i.e. survival of the fittest) and controlled variation (i.e. recombination and mutation). Genetic Algorithms for Machine Learning contains articles on three topics that have not been the focus of many previous articles on GAs, namely concept learning from examples, reinforcement learning for control, and theoretical analysis of GAs. It is hoped that this sample will serve to broaden the acquaintance of the general machine learning community with the major areas of work on GAs. The articles in this book address a number of central issues in applying GAs to machine learning problems. For example, the choice of appropriate representation and the corresponding set of genetic learning operators is an important set of decisions facing a user of a genetic algorithm. The study of genetic algorithms is proceeding at a robust pace. If experimental progress and theoretical understanding continue to evolve as expected, genetic algorithms will continue to provide a distinctive approach to machine learning. Genetic Algorithms for Machine Learning is an edited volume of original research made up of invited contributions by leading researchers. This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic) <|cite_end|> for a survey.
Uncertainty sampling was used for classification tasks as early as in <|cite_start|> (Reference: A Sequential Algorithm for Training Text Classifiers: The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.) <|cite_end|>, and subsequently in many other works, e.g. <|cite_start|> (Reference: Less is more: {{Active}} learning with support vector machines: We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. We observe a number of benefits, the most surprising of which is that a SVM trained on a well-chosen subset of the available corpus frequently performs better than one trained on all available data. The heuristic for choosing this subset is simple to compute, and makes no use of information about the test set. Given that the training time of SVMs depends heavily on the training set size, our heuristic not only offers better performance with fewer data, it frequently does so in less time than the naive approach of training on all available data.) <|cite_end|> <|cite_start|> (Reference: Active Learning With Sampling by Uncertainty and Density for Data Annotations: To solve the knowledge bottleneck problem, active learning has been widely used for its ability to automatically select the most informative unlabeled examples for human annotation. One of the key enabling techniques of active learning is uncertainty sampling, which uses one classifier to identify unlabeled examples with the least confidence. Uncertainty sampling often presents problems when outliers are selected. To solve the outlier problem, this paper presents two techniques, sampling by uncertainty and density (SUD) and density-based re-ranking. Both techniques prefer not only the most informative example in terms of uncertainty criterion, but also the most representative example in terms of density criterion. Experimental results of active learning for word sense disambiguation and text classification tasks using six real-world evaluation data sets demonstrate the effectiveness of the proposed methods.) <|cite_end|> <|cite_start|> (Reference: Multi-Class Active Learning by Uncertainty Sampling with Diversity Maximization: ) <|cite_end|> <|cite_start|> (Reference: Active Learning Using Uncertainty Information: Many active learning methods belong to the retraining-based approaches, which select one unlabeled instance, add it to the training set with its possible labels, retrain the classification model, and evaluate the criteria that we base our selection on. However, since the true label of the selected instance is unknown, these methods resort to calculating the average-case or worse-case performance with respect to the unknown label. In this paper, we propose a different method to solve this problem. In particular, our method aims to make use of the uncertainty information to enhance the performance of retraining-based models. 
We apply our method to two state-of-the-art algorithms and carry out extensive experiments on a wide variety of real-world datasets. The results clearly demonstrate the effectiveness of the proposed method and indicate it can reduce human labeling efforts in many real-life applications.) <|cite_end|> <|cite_start|> (Reference: Online active learning in data stream regression using uncertainty sampling based on evolving generalized Fuzzy Models: In this paper, we propose three criteria for efficient sample selection in case of data stream regression problems within an online active learning context. The selection becomes important whenever the target values, which guide the update of the regressors as well as the implicit model structures, are costly or time-consuming to measure and also in case when very fast models updates are required to cope with stream mining real-time demands. Reducing the selected samples as much as possible while keeping the predictive accuracy of the models on a high level is, thus, a central challenge. This should be ideally achieved in unsupervised and single-pass manner. Our selection criteria rely on three aspects: 1) the extrapolation degree combined with the model's nonlinearity degree , which is measured in terms of a new specific homogeneity criterion among adjacent local approximators; 2) the uncertainty in model outputs, which can be measured in terms of confidence intervals using so-called adaptive local error bars — we integrate a weighted localization of an incremental noise level estimator and propose formulas for online merging of local error bars; 3) the uncertainty in model parameters, which is estimated by the so-called A-optimality criterion, which relies on the Fisher information matrix. The selection criteria are developed in combination with evolving generalized Takagi–Sugeno (TS) fuzzy models (containing rules in arbitrarily rotated position), as it could be shown in previous publications that these outperform conventional evolving TS models (containing axis-parallel rules). The results based on three high-dimensional real-world streaming problems show that a model update based on only 10%–20% selected samples can still achieve similar accumulated model errors over time to the case when performing a full model update on all samples. This can be achieved with a negligible sensitivity on the size of the active learning latency buffer. Random sampling with the same percentages of samples selected, however, achieved much higher error rates. Hence, the intelligence in our sample selection concept leads to an economic balance between model accuracy and measurement as well computational costs for model updates.) <|cite_end|>. <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> showed that threshold-based uncertainty sampling on a convex loss can be interpreted as performing a pre-conditioned stochastic gradient step on the population zero-one loss. None of these works have provided theoretical convergence guarantees. Different variants of uncertainty sampling include margin of confidence sampling, least confidence sampling, and entropy-based sampling <|cite_start|> (Reference: How to measure uncertainty in uncertainty sampling for active learning: ) <|cite_end|>. 
Convergence of margin of confidence sampling was recently studied by <|cite_start|> (Reference: Convergence of Uncertainty Sampling for Active Learning: Uncertainty sampling in active learning is heavily used in practice to reduce the annotation cost. However, there has been no wide consensus on the function to be used for uncertainty estimation in binary classification tasks and convergence guarantees of the corresponding active learning algorithms are not well understood. The situation is even more challenging for multi-category classification. In this work, we propose an efficient uncertainty estimator for binary classification which we also extend to multiple classes, and provide a non-asymptotic rate of convergence for our uncertainty sampling-based active learning algorithm in both cases under no-noise conditions (i.e., linearly separable data). We also extend our analysis to the noisy case and provide theoretical guarantees for our algorithm under the influence of noise in the task of binary and multi-class classification.) <|cite_end|>. They showed linear convergence for the hinge loss under a family of selection probability functions, using an algorithm that performs a stochastic gradient descent update with respect to the squared hinge loss.
A loss-based active learning algorithm was proposed by <|cite_start|> (Reference: Learning Loss for Active Learning: The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks human to annotate data that it perceived as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks but most of them are either designed specific for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with the deep networks. We attach a small parametric module, named "loss prediction module," to a target network, and learn it to predict target losses of unlabeled inputs. Then, this module can suggest data that the target model is likely to produce a wrong prediction. This method is task-agnostic as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation, with the recent network architectures. The results demonstrate that our method consistently outperforms the previous methods over the tasks.) <|cite_end|>, which consists of a loss prediction module and a target prediction model. The algorithm uses the loss prediction module for computing a loss estimate and prioritizes sampling points with high estimated loss under current prediction model. <|cite_start|> (Reference: DEUP: Direct Epistemic Uncertainty Prediction: Epistemic Uncertainty is a measure of the lack of knowledge of a learner which diminishes with more evidence. While existing work focuses on using the variance of the Bayesian posterior due to parameter uncertainty as a measure of epistemic uncertainty, we argue that this does not capture the part of lack of knowledge induced by model misspecification. We discuss how the excess risk, which is the gap between the generalization error of a predictor and the Bayes predictor, is a sound measure of epistemic uncertainty which captures the effect of model misspecification. We thus propose a principled framework for directly estimating the excess risk by learning a secondary predictor for the generalization error and subtracting an estimate of aleatoric uncertainty, i.e., intrinsic unpredictability. We discuss the merits of this novel measure of epistemic uncertainty, and highlight how it differs from variance-based measures of epistemic uncertainty and addresses its major pitfall. Our framework, Direct Epistemic Uncertainty Prediction (DEUP) is particularly interesting in interactive learning environments, where the learner is allowed to acquire novel examples in each round. Through a wide set of experiments, we illustrate how existing methods in sequential model optimization can be improved with epistemic uncertainty estimates from DEUP, and how DEUP can be used to drive exploration in reinforcement learning. We also evaluate the quality of uncertainty estimates from DEUP for probabilistic image classification and predicting synergies of drug combinations.) <|cite_end|> generalize this idea in a framework for uncertainty prediction. 
Loss-based sampling can be seen to be in the spirit of perceptron algorithm <|cite_start|> (Reference: The perceptron: a probabilistic model for information storage and organization in the brain.: The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus) <|cite_end|>, which updates model only for falsely-classified points. <|cite_start|> (Reference: Learning Loss for Active Learning: The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks human to annotate data that it perceived as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks but most of them are either designed specific for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with the deep networks. We attach a small parametric module, named "loss prediction module," to a target network, and learn it to predict target losses of unlabeled inputs. Then, this module can suggest data that the target model is likely to produce a wrong prediction. This method is task-agnostic as networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation, with the recent network architectures. The results demonstrate that our method consistently outperforms the previous methods over the tasks.) <|cite_end|> and <|cite_start|> (Reference: DEUP: Direct Epistemic Uncertainty Prediction: Epistemic Uncertainty is a measure of the lack of knowledge of a learner which diminishes with more evidence. While existing work focuses on using the variance of the Bayesian posterior due to parameter uncertainty as a measure of epistemic uncertainty, we argue that this does not capture the part of lack of knowledge induced by model misspecification. We discuss how the excess risk, which is the gap between the generalization error of a predictor and the Bayes predictor, is a sound measure of epistemic uncertainty which captures the effect of model misspecification. We thus propose a principled framework for directly estimating the excess risk by learning a secondary predictor for the generalization error and subtracting an estimate of aleatoric uncertainty, i.e., intrinsic unpredictability. We discuss the merits of this novel measure of epistemic uncertainty, and highlight how it differs from variance-based measures of epistemic uncertainty and addresses its major pitfall. Our framework, Direct Epistemic Uncertainty Prediction (DEUP) is particularly interesting in interactive learning environments, where the learner is allowed to acquire novel examples in each round. 
Through a wide set of experiments, we illustrate how existing methods in sequential model optimization can be improved with epistemic uncertainty estimates from DEUP, and how DEUP can be used to drive exploration in reinforcement learning. We also evaluate the quality of uncertainty estimates from DEUP for probabilistic image classification and predicting synergies of drug combinations.) <|cite_end|> provided no theoretical guarantees for convergence rates.
Some analysis of convergence for loss and uncertainty-based active learning strategies were recently reported by <|cite_start|> (Reference: Understanding Uncertainty Sampling: Uncertainty sampling is a prevalent active learning algorithm that queries sequentially the annotations of data samples which the current prediction model is uncertain about. However, the usage of uncertainty sampling has been largely heuristic: (i) There is no consensus on the proper definition of "uncertainty" for a specific task under a specific loss; (ii) There is no theoretical guarantee that prescribes a standard protocol to implement the algorithm, for example, how to handle the sequentially arrived annotated data under the framework of optimization algorithms such as stochastic gradient descent. In this work, we systematically examine uncertainty sampling algorithms under both stream-based and pool-based active learning. We propose a notion of equivalent loss which depends on the used uncertainty measure and the original loss function and establish that an uncertainty sampling algorithm essentially optimizes against such an equivalent loss. The perspective verifies the properness of existing uncertainty measures from two aspects: surrogate property and loss convexity. Furthermore, we propose a new notion for designing uncertainty measures called \textit{loss as uncertainty}. The idea is to use the conditional expected loss given the features as the uncertainty measure. Such an uncertainty measure has nice analytical properties and generality to cover both classification and regression problems, which enable us to provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings, in the full generality of the underlying model and problem. Lastly, we establish connections between certain variants of the uncertainty sampling algorithms with risk-sensitive objectives and distributional robustness, which can partly explain the advantage of uncertainty sampling algorithms when the sample size is small.) <|cite_end|>. In particular, they showed convergence results for sampling proportional to conditional expected loss. Our framework generalizes to sampling according to arbitrary continuous increasing functions of expected conditional loss. <|cite_start|> (Reference: Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence: We propose a stochastic variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method. Although computing the Polyak step-size requires knowledge of the optimal function values, this information is readily available for typical modern machine learning applications. Consequently, the proposed stochastic Polyak step-size (SPS) is an attractive choice for setting the learning rate for stochastic gradient descent (SGD). We provide theoretical convergence guarantees for SGD equipped with SPS in different settings, including strongly convex, convex and non-convex functions. Furthermore, our analysis results in novel convergence guarantees for SGD with a constant step-size. We show that SPS is particularly effective when training over-parameterized models capable of interpolating the training data. In this setting, we prove that SPS enables SGD to converge to the true solution at a fast rate without requiring the knowledge of any problem-dependent constants or additional computational overhead. 
We experimentally validate our theoretical results via extensive experiments on synthetic and real datasets. We demonstrate the strong performance of SGD with SPS compared to state-of-the-art optimization methods when training over-parameterized models.) <|cite_end|> proposed a stochastic gradient descent algorithm with an adaptive stochastic Polyak step size. Theoretical convergence guarantees were obtained under different assumptions, and the algorithm was demonstrated to achieve strong performance compared to state-of-the-art optimization methods for training some over-parameterized models. Our work proposes a sampling method that performs the stochastic Polyak step size in expectation, and we show a convergence rate guarantee for smooth convex loss functions.
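For reference, the stochastic Polyak step size is commonly written in terms of the current sample loss and its gradient. In the notation of the next section, with per-sample loss $\ell(x_t,y_t,\theta_t)$, its minimal value $\ell^*_t$ (often $0$ for interpolating models), and a constant $c>0$, it takes the form
\begin{equation}
\zeta(x_t,y_t,\theta_t) = \frac{\ell(x_t,y_t,\theta_t) - \ell^*_t}{c\,||\nabla_\theta \ell(x_t,y_t,\theta_t)||^2}.
\end{equation}
This is the standard form from the stochastic Polyak step-size literature; the precise variant analyzed in this work may differ.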
Related Work
\label{sec:background}
\paragraph{Algorithm} We consider projected stochastic gradient descent algorithm defined as follows: given an initial value $\theta_1$, for $t\geq 1$,
\begin{equation}
\theta_{t+1} = \mathcal{P}_{\Theta_0}\left(\theta_t - z_t \nabla_\theta \ell(x_t,y_t,\theta_t)\right)
\label{equ:sgd}
\end{equation}
where $z_t$ is a stochastic step size with mean $\zeta(x_t,y_t,\theta_t)$ for some function $\zeta: \mathcal{X}\times \mathcal{Y}\times \Theta \mapsto \mathbb{R}_+$, $\Theta_0 \subseteq \Theta$, and $\mathcal{P}_{\Theta_0}$ is the projection function, i.e. $\mathcal{P}_{\Theta_0}(u) = \arg\min_{v\in \Theta\cap \Theta_0}||u-v||$. Unless specified otherwise, we consider the case $\Theta_0 = \Theta$, which requires no projection.
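As a minimal illustration (our own NumPy sketch; taking $\Theta_0$ to be an $\ell_2$ ball, and the radius value, are assumptions made only for this example), a single update of the rule above can be written as:
\begin{verbatim}
import numpy as np

def project_l2_ball(theta, radius):
    # example projection P_{Theta_0}: here Theta_0 is an l2 ball of the given radius
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def projected_sgd_step(theta, grad, z_t, radius=10.0):
    # one update of the recursion above: theta <- P(theta - z_t * grad)
    return project_l2_ball(theta - z_t * grad, radius)
\end{verbatim}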
For the choice of the stochastic step size, we consider two cases: (a) Bernoulli sampling with a constant step size and (b) the stochastic Polyak step size. For case (a), $z_t$ is the product of a constant step size $\gamma$ and a Bernoulli random variable with mean $\pi(x_t,y_t,\theta_t)$. For case (b), $\zeta(x,y,\theta)$ is the "stochastic" Polyak step size, and $z_t$ is equal to $\zeta(x_t,y_t,\theta_t)/\pi(x_t,y_t,\theta_t)$ with probability $\pi(x_t,y_t,\theta_t)$ and is equal to $0$ otherwise.
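The two step-size rules can be sketched as follows (again our own illustrative code; \texttt{pi} stands for the selection probability $\pi(x_t,y_t,\theta_t)$ and \texttt{zeta} for $\zeta(x_t,y_t,\theta_t)$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def step_size_bernoulli(pi, gamma):
    # case (a): z_t = gamma * Bernoulli(pi), so E[z_t] = gamma * pi
    return gamma if rng.random() < pi else 0.0

def step_size_polyak(pi, zeta):
    # case (b): z_t = zeta / pi with probability pi and 0 otherwise,
    # so E[z_t] = zeta; z_t = 0 corresponds to skipping the update
    return zeta / pi if rng.random() < pi else 0.0
\end{verbatim}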
\paragraph{Linearly separable data} For binary classification tasks, $\mathcal{Y} = \{-1,1\}$. We say that the data is separable if, for every $x\in \mathcal{X}$, the label is deterministic: either $y = 1$ with probability $1$ or $y = -1$ with probability $1$. The data is said to be linearly separable if there exists $\theta^*\in \Theta$ such that $y = \mathrm{sgn}(x^\top \theta^*)$ for every $x\in \mathcal{X}$. Linearly separable data has a $\rho^*$-margin if $|x^\top \theta^*|\geq \rho^*$ for every $x\in \mathcal{X}$, for some such $\theta^*\in \Theta$.
\paragraph{Linear classifiers} Some of our results are for linear classifiers, for which the predicted label of a point $x$ is a function of $x^\top \theta$. For example, a model with predicted label $\mathrm{sgn}(x^\top \theta)$ is a linear classifier. For logistic regression, the predicted label is $1$ with probability $\sigma(x^\top \theta)$ and $-1$ otherwise, where $\sigma$ is the logistic function $\sigma(z) = 1/(1+e^{-z})$.
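As a small, self-contained illustration of these definitions (our own example code; the function names are ours, and \texttt{theta\_star} and \texttt{rho} stand for $\theta^*$ and $\rho^*$):
\begin{verbatim}
import numpy as np

def predict_label(x, theta):
    # linear classifier: sign of the score x^T theta
    return np.sign(x @ theta)

def prob_positive(x, theta):
    # logistic regression: P(y = 1 | x) = sigma(x^T theta)
    return 1.0 / (1.0 + np.exp(-(x @ theta)))

def has_margin(X, theta_star, rho):
    # checks the rho-margin condition |x^T theta*| >= rho for every row of X
    return bool(np.all(np.abs(X @ theta_star) >= rho))
\end{verbatim}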
\paragraph{Smooth loss functions} For any given $(x,y)\in \mathcal{X}\times \mathcal{Y}$, loss function $\ell(x,y,\theta)$ is said to be smooth on $\Theta'\subseteq \Theta$ if it has Lipschitz continuous gradient on $\Theta'$, i.e. there exists $L_{x,y}$ such that $||\nabla_{\theta}\ell(x,y,\theta_1)-\nabla_\theta \ell (x,y,\theta_2)||\leq L_{x,y} ||\theta_1-\theta_2||$, for all $\theta_1,\theta_2\in \Theta'$. For any distribution $q$ over $\mathcal{X}\times \mathcal{Y}$, $\mathbb{E}_{(x,y)\sim q}[\ell(x,y,\theta)]$ is $\mathbb{E}_{(x,y)\sim q}[L_{x,y}]$-smooth. <|paper_end|> | [
"<|reference_start|> In Advances in Neural Information Processing Systems: <|reference_end|>",
"<|reference_start|> In Advances in Neural Information Processing Systems: <|reference_end|>",
"<|reference_start|> Algorithms For Reinforcement Learning Synthesis Lectures On Artificial Intelligence And Machine Learning: The articles presented here were selected from preliminary versions presented at the International Conference on Genetic Algorithms in June 1991, as well as at a special Workshop on Genetic Algorithms for Machine Learning at the same Conference. Genetic algorithms are general-purpose search algorithms that use principles inspired by natural population genetics to evolve solutions to problems. The basic idea is to maintain a population of knowledge structure that represent candidate solutions to the problem of interest. The population evolves over time through a process of competition (i.e. survival of the fittest) and controlled variation (i.e. recombination and mutation). Genetic Algorithms for Machine Learning contains articles on three topics that have not been the focus of many previous articles on GAs, namely concept learning from examples, reinforcement learning for control, and theoretical analysis of GAs. It is hoped that this sample will serve to broaden the acquaintance of the general machine learning community with the major areas of work on GAs. The articles in this book address a number of central issues in applying GAs to machine learning problems. For example, the choice of appropriate representation and the corresponding set of genetic learning operators is an important set of decisions facing a user of a genetic algorithm. The study of genetic algorithms is proceeding at a robust pace. If experimental progress and theoretical understanding continue to evolve as expected, genetic algorithms will continue to provide a distinctive approach to machine learning. Genetic Algorithms for Machine Learning is an edited volume of original research made up of invited contributions by leading researchers. This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic <|reference_end|>",
"<|reference_start|> DEUP: Direct Epistemic Uncertainty Prediction: Epistemic Uncertainty is a measure of the lack of knowledge of a learner which diminishes with more evidence. While existing work focuses on using the variance of the Bayesian posterior due to parameter uncertainty as a measure of epistemic uncertainty, we argue that this does not capture the part of lack of knowledge induced by model misspecification. We discuss how the excess risk, which is the gap between the generalization error of a predictor and the Bayes predictor, is a sound measure of epistemic uncertainty which captures the effect of model misspecification. We thus propose a principled framework for directly estimating the excess risk by learning a secondary predictor for the generalization error and subtracting an estimate of aleatoric uncertainty, i.e., intrinsic unpredictability. We discuss the merits of this novel measure of epistemic uncertainty, and highlight how it differs from variance-based measures of epistemic uncertainty and addresses its major pitfall. Our framework, Direct Epistemic Uncertainty Prediction (DEUP) is particularly interesting in interactive learning environments, where the learner is allowed to acquire novel examples in each round. Through a wide set of experiments, we illustrate how existing methods in sequential model optimization can be improved with epistemic uncertainty estimates from DEUP, and how DEUP can be used to drive exploration in reinforcement learning. We also evaluate the quality of uncertainty estimates from DEUP for probabilistic image classification and predicting synergies of drug combinations. <|reference_end|>"
] | [
6,
17,
25,
39
] | {"<|multi_cite_1_1|>": "arxiv-203424", "<|multi_cite_1_2|>": "arxiv-321702", "<|multi_cite_1_3|>": "ss-2249339", "<|multi_cite_1_4|>": "ss-2563152", "<|cite_17|>": "arxiv-377843", "<|cite_18|>": "ss-1511792", "<|cite_19|>": "ss-832115", "<|cite_20|>": "ss-1112426", "<|cite_21|>": "ss-1834521", "<|multi_cite_22_1|>": "ss-832115", "<|multi_cite_22_2|>": "ss-2249340", "<|cite_2|>": "arxiv-377843", "<|cite_23|>": "ss-740580", "<|cite_24|>": "ss-1511792", "<|cite_3|>": "ss-1100514", "<|cite_4|>": "ss-832115", "<|cite_5|>": "ss-1064609", "<|cite_6|>": "ss-832115", "<|cite_7|>": "arxiv-311106", "<|multi_cite_25_1|>": "ss-1184081", "<|multi_cite_25_2|>": "ss-1517417", "<|multi_cite_25_3|>": "ss-1021456", "<|multi_cite_25_4|>": "ss-1517418", "<|multi_cite_25_5|>": "ss-1517419", "<|multi_cite_25_6|>": "ss-1517420", "<|cite_8|>": "ss-687712", "<|cite_26|>": "arxiv-668391", "<|multi_cite_27_1|>": "ss-1491103", "<|multi_cite_27_2|>": "ss-755917", "<|multi_cite_27_3|>": "ss-1094860", "<|multi_cite_27_4|>": "arxiv-117707", "<|multi_cite_27_5|>": "ss-1550343", "<|cite_9|>": "ss-832115", "<|cite_28|>": "ss-1247100", "<|cite_10|>": "arxiv-377843", "<|cite_11|>": "arxiv-203424", "<|cite_12|>": "arxiv-321702", "<|cite_29|>": "ss-1276265", "<|cite_13|>": "arxiv-203424", "<|cite_14|>": "arxiv-321702", "<|cite_15|>": "arxiv-521193", "<|cite_16|>": "ss-740580"} |
2107.07110-0 | <|paper_start|> Title: Compact and Optimal Deep Learning with Recurrent Parameter Generators
Abstract: Compact and Optimal Deep Learning with Recurrent Parameter Generators: Deep learning has achieved tremendous success by training increasingly large models, which are then compressed for practical deployment. We propose a drastically different approach to compact and optimal deep learning: We decouple the Degrees of freedom (DoF) and the actual number of parameters of a model, optimize a small DoF with predefined random linear constraints for a large model of arbitrary architecture, in one-stage end-to-end learning. Specifically, we create a recurrent parameter generator (RPG), which repeatedly fetches parameters from a ring and unpacks them onto a large model with random permutation and sign flipping to promote parameter decorrelation. We show that gradient descent can automatically find the best model under constraints with faster convergence. Our extensive experimentation reveals a log-linear relationship between model DoF and accuracy. Our RPG demonstrates remarkable DoF reduction and can be further pruned and quantized for additional run-time performance gain. For example, in terms of top-1 accuracy on ImageNet, RPG achieves $96\%$ of ResNet18's performance with only $18\%$ DoF (the equivalent of one convolutional layer) and $52\%$ of ResNet34's performance with only $0.25\%$ DoF! Our work shows a significant potential of constrained neural optimization in compact and optimal deep learning.
Introduction
Deep learning has achieved great success with increasingly more training data and deeper \& larger neural networks: a recently developed NLP model, GPT-3 <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>, has an astonishing 175 billion parameters!
While the model performance generally scales with the number of parameters <|cite_start|> (Reference: Scaling Laws for Autoregressive Generative Modeling: We identify empirical scaling laws for the cross-entropy loss in four domains: generative image modeling, video modeling, multimodal image$\leftrightarrow$text models, and mathematical problem solving. In all cases autoregressive Transformers smoothly improve in performance as model size and compute budgets increase, following a power-law plus constant scaling law. The optimal model size also depends on the compute budget through a power-law, with exponents that are nearly universal across all data domains. The cross-entropy loss has an information theoretic interpretation as $S($True$) + D_{\mathrm{KL}}($True$||$Model$)$, and the empirical scaling laws suggest a prediction for both the true data distribution's entropy and the KL divergence between the true and model distributions. With this interpretation, billion-parameter Transformers are nearly perfect models of the YFCC100M image distribution downsampled to an $8\times 8$ resolution, and we can forecast the model size needed to achieve any given reducible loss (ie $D_{\mathrm{KL}}$) in nats/image for other resolutions. We find a number of additional scaling laws in specific domains: (a) we identify a scaling relation for the mutual information between captions and images in multimodal models, and show how to answer the question "Is a picture worth a thousand words?"; (b) in the case of mathematical problem solving, we identify scaling laws for model performance when extrapolating beyond the training distribution; (c) we finetune generative image models for ImageNet classification and find smooth scaling of the classification loss and error rate, even as the generative loss levels off. Taken together, these results strengthen the case that scaling laws have important implications for neural network performance, including on downstream tasks.) <|cite_end|>, with parameters outnumbering training data, the model is significantly over-parameterized. Tremendous effort has been made to reduce the parameter redundancy from different perspectives, including neural network pruning <|cite_start|> (Reference: Optimal {Brain} {Damage}: We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.) <|cite_end|> <|cite_start|> (Reference: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. 
After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.) <|cite_end|> <|cite_start|> (Reference: Rethinking the Value of Network Pruning: Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.) <|cite_end|> <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. 
Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|>, efficient network design spaces <|cite_start|> (Reference: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications: We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.) <|cite_end|> <|cite_start|> (Reference: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size: Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here: this https URL) <|cite_end|> <|cite_start|> (Reference: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.) <|cite_end|> <|cite_start|> (Reference: MobileNetV2: Inverted Residuals and Linear Bottlenecks: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters) <|cite_end|>, parameter regularization <|cite_start|> (Reference: Regularization of Neural Networks Using DropConnect: We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. 
Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.) <|cite_end|> <|cite_start|> (Reference: Orthogonal Convolutional Neural Networks: Deep convolutional neural networks are hindered by training instability and feature redundancy towards further performance improvement. A promising solution is to impose orthogonality on convolutional filters. We develop an efficient approach to impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel instead of using the common kernel orthogonality approach, which we show is only necessary but not sufficient for ensuring orthogonal convolutions. Our proposed orthogonal convolution requires no additional parameters and little computational overhead. This method consistently outperforms the kernel orthogonality alternative on a wide range of tasks such as image classification and inpainting under supervised, semi-supervised and unsupervised settings. Further, it learns more diverse and expressive features with better training stability, robustness, and generalization. Our code is publicly available at https://github.com/samaonline/Orthogonal-Convolutional-Neural-Networks.) <|cite_end|> <|cite_start|> (Reference: {Dropout: A Simple Way to Prevent Neural Networks from Overfitting: Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.) <|cite_end|>, model quantization <|cite_start|> (Reference: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. 
The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.) <|cite_end|> <|cite_start|> (Reference: XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks: We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.) <|cite_end|> <|cite_start|> (Reference: Relaxed Quantization for Discretized Neural Networks: Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.) <|cite_end|>, neural architecture search <|cite_start|> (Reference: Neural Architecture Search with Reinforcement Learning: Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. 
On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.) <|cite_end|> <|cite_start|> (Reference: ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware: Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. $10^4$ GPU hours) makes it difficult to \emph{directly} search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize~\emph{proxy} tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present \emph{ProxylessNAS} that can \emph{directly} learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08\% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6$\times$ fewer parameters. On ImageNet, our model achieves 3.1\% better top-1 accuracy than MobileNetV2, while being 1.2$\times$ faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.) <|cite_end|> <|cite_start|> (Reference: FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions: Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, DARTS-based DNAS's search space is small when compared to other search methods', since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory and computationally efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. 
We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance when compared with all previous architectures. With up to 421$\times$ less search cost, DMaskingNAS finds models with 0.9% higher accuracy, 15% fewer FLOPs than MobileNetV3-Small; and with similar accuracy but 20% fewer FLOPs than Efficient-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy, with equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision.) <|cite_end|>, recurrent models <|cite_start|> (Reference: Deep Equilibrium Models: We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq .) <|cite_end|> <|cite_start|> (Reference: Multiscale Deep Equilibrium Models: We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ), suited to large-scale and highly hierarchical pattern recognition domains. An MDEQ directly solves for and backpropagates through the equilibrium points of multiple feature resolutions simultaneously, using implicit differentiation to avoid storing intermediate states (and thus requiring only $O(1)$ memory consumption). These simultaneously-learned multi-resolution features allow us to train a single model on a diverse set of tasks and loss functions, such as using a single MDEQ to perform both image classification and semantic segmentation. We illustrate the effectiveness of this approach on two large-scale vision tasks: ImageNet classification and semantic segmentation on high-resolution images from the Cityscapes dataset. In both settings, MDEQs are able to match or exceed the performance of recent competitive computer vision models: the first time such performance and scale have been achieved by an implicit deep learning approach. The code and pre-trained models are at https://github.com/locuslab/mdeq .) <|cite_end|> <|cite_start|> (Reference: Convolutional Pose Machines: Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. 
In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.) <|cite_end|>, multi-task feature encoding <|cite_start|> (Reference: SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation: We introduce SharpNet, a method that predicts an accurate depth map for an input color image, with a particular attention to the reconstruction of occluding contours: Occluding contours are an important cue for object recognition, and for realistic integration of virtual objects in Augmented Reality, but they are also notoriously difficult to reconstruct accurately. For example, they are a challenge for stereo-based reconstruction methods, as points around an occluding contour are visible in only one image. Inspired by recent methods that introduce normal estimation to improve depth prediction, we introduce a novel term that constrains depth and occluding contours predictions. Since ground truth depth is difficult to obtain with pixel-perfect accuracy along occluding contours, we use synthetic images for training, followed by fine-tuning on real data. We demonstrate our approach on the challenging NYUv2-Depth dataset, and show that our method outperforms the state-of-the-art along occluding contours, while performing on par with the best recent methods for the rest of the images. Its accuracy along the occluding contours is actually better than the `ground truth' acquired by a depth camera based on structured light. We show this by introducing a new benchmark based on NYUv2-Depth for evaluating occluding contours in monocular reconstruction, which is our second contribution.) <|cite_end|> <|cite_start|> (Reference: Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation: Non-Autoregressive machine Translation (NAT) models have demonstrated significant inference speedup but suffer from inferior translation accuracy. The common practice to tackle the problem is transferring the Autoregressive machine Translation (AT) knowledge to NAT models, e.g., with knowledge distillation. In this work, we hypothesize and empirically verify that AT and NAT encoders capture different linguistic properties of source sentences. Therefore, we propose to adopt Multi-Task learning to transfer the AT knowledge to NAT models through encoder sharing. Specifically, we take the AT model as an auxiliary task to enhance NAT model performance. 
Experimental results on WMT14 English-German and WMT16 English-Romanian datasets show that the proposed Multi-Task NAT achieves significant improvements over the baseline NAT models. Furthermore, the performance on large-scale WMT19 and WMT20 English-German datasets confirm the consistency of our proposed method. In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.) <|cite_end|>, etc.
One of the most prominent approaches in this direction is the pruning-based model compression, which dates back to the late 80s or early 90s <|cite_start|> (Reference: {Using Relevance to Reduce Network Size Automatically: This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and imp...) <|cite_end|> <|cite_start|> (Reference: Optimal {Brain} {Damage}: We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.) <|cite_end|>and has enjoyed a resurgence <|cite_start|> (Reference: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.) <|cite_end|> <|cite_start|> (Reference: What is the State of Neural Network Pruning?: Neural network pruning---the task of reducing the size of a network by removing parameters---has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. 
To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.) <|cite_end|>recently. These pruning methods seek to remove the unimportant parameters from a pre-trained large neural network and can frequently achieve an enormous model-compression ratio.
\figModelRing{tp}
Though sharing a similar motivation of reducing parameter redundancy, we explore an entirely different territory from model compression: rather than compressing a large model, we define an arbitrarily large model based on a fixed set of parameters to maximize the model capacity. In this work, we propose to define many different layers in a deep neural network based on a fixed amount of parameters, which we call a \textit{recurrent parameter generator} (RPG). We show that the excess parameter capacity can be exploited with almost no assumptions about how the parameters are assigned to the neural network architecture. In other words, there is excess capacity in neural network models independent of how and where the parameters are used in the network. Even at the level of individual scalar values, parameters can be reused in another arbitrary location of the deep network architecture without significantly impacting model performance. Surprisingly, backpropagation training of a deep network copes with the same parameter being assigned to multiple random locations in the network. Through extensive experiments, we show that a large neural network does not need to be overparameterized to achieve competitive performance. In particular, a ResNet18 model can be implemented with the number of weights equivalent to one convolution layer in a conventional ResNet (a $4.72\times$ parameter reduction) and still achieve $67.2\%$ ImageNet top-1 accuracy. The proposed method is also extremely flexible in reducing the number of model parameters. In some sense, the proposed RPG method can be viewed as an automatic model pruning technique that explores the optimal accuracy-parameter trade-off. When we reduce the number of model parameters, RPG shows graceful performance degradation, and its compression results are frequently on par with or even better than state-of-the-art pruning methods, in addition to being more flexible. Even if we reduce the ResNet18 backbone parameters to $36$K, which is about a $300\times$ reduction, ResNet18 can still achieve $40.0\%$ ImageNet top-1 accuracy.
Notably, we choose a destructive parameter sharing method <|cite_start|> (Reference: Superposition of many models into one: We present a method for storing multiple models within a single set of parameters. Models can coexist in superposition and still be retrieved individually. In experiments with neural networks, we show that a surprisingly large number of models can be effectively stored within a single parameter instance. Furthermore, each of these models can undergo thousands of training steps without significantly interfering with other models within the superposition. This approach may be viewed as the online complement of compression: rather than reducing the size of a network after training, we make use of the unrealized capacity of a network during training.) <|cite_end|>for RPG in this work, which discourages any potential representation sharing from layer to layer. Compared to other recurrent weight-sharing methods, e.g., convolutional pose machine (CPM) or multi-scale deep equilibrium models (MDEQ), our method achieves competitive or even better performance on various benchmarks. This makes RPG a strong baseline for probing whether there is nontrivial representation sharing within any recurrent network.
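For illustration only, the following minimal PyTorch sketch shows one plausible way to realize such a generator (this is not the released implementation; the class names, the bank size, and the exact way indices and sign keys are drawn are assumptions made for the example). Every convolution kernel is assembled by gathering scalars from a single shared parameter bank at fixed, seeded random offsets and multiplying them by fixed $\pm 1$ signs, so one underlying parameter can serve many layers while the sign keys discourage layer-to-layer representation sharing.
\begin{verbatim}
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentParameterGenerator(nn.Module):
    # One shared bank of trainable scalars; each layer's weight tensor is
    # assembled from it via fixed (seeded) random indices and +/-1 signs.
    def __init__(self, bank_size):
        super().__init__()
        self.bank_size = bank_size
        self.bank = nn.Parameter(0.01 * torch.randn(bank_size))

    def make_weight(self, shape, seed):
        g = torch.Generator().manual_seed(seed)     # fixed per-layer assignment
        n = math.prod(shape)
        idx = torch.randint(0, self.bank_size, (n,), generator=g)
        sign = torch.randint(0, 2, (n,), generator=g).float() * 2 - 1
        return (self.bank[idx] * sign).view(shape)  # gradients flow into the bank

class SharedConvNet(nn.Module):
    # Toy CNN whose conv layers all draw their weights from the same bank.
    def __init__(self, bank_size=4096, num_classes=10):
        super().__init__()
        self.rpg = RecurrentParameterGenerator(bank_size)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        w1 = self.rpg.make_weight((16, 3, 3, 3), seed=0)
        x = F.relu(F.conv2d(x, w1, padding=1))
        w2 = self.rpg.make_weight((32, 16, 3, 3), seed=1)
        x = F.relu(F.conv2d(x, w2, padding=1))
        return self.head(x.mean(dim=(2, 3)))        # global average pooling

logits = SharedConvNet()(torch.randn(2, 3, 32, 32))
\end{verbatim}
Because the indices and signs are fixed, only the bank receives gradients, and the number of trainable parameters is decoupled from the depth and width of the architecture.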
To summarize, we make the following contributions:
\vspace{-0.8em}
\begin{enumerate}
\setlength{\itemsep}{-0em}
\item We propose the recurrent parameter generator (RPG), which decouples network architecture and the number of parameters. Given a certain neural network architecture, we can flexibly choose any number of parameters to construct the network.
\item Given a compression ratio, RPG achieves performance on par with or better than state-of-the-art model pruning methods. This may provide an entirely new perspective on model compression.
\item With destructive weight sharing, RPG achieves performance competitive with several recurrent weight-sharing models. This makes RPG a strong baseline for probing representation sharing in recurrent models.
\end{enumerate}
\figtasks{tp}
Related Work
There are many important efforts to reduce the redundancy in neural network parameters. We discuss each of the approaches and their relationship to our work.
\textbf{Model Pruning, Neural Architecture Search, and Quantization.}
As we discussed earlier, model pruning seeks to remove the unimportant parameters in a trained model. Recently, it's proposed to use neural architecture search as a coarse-grained model pruning <|cite_start|> (Reference: Slimmable Neural Networks: We present a simple and general method to train a single neural network executable at different widths (number of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of training individual networks with different width configurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the fly according to on-device benchmarks and resource constraints, rather than downloading and offloading different models. Our trained networks, named slimmable neural networks, achieve similar (and in many cases better) ImageNet classification accuracy than individually trained models of MobileNet v1, MobileNet v2, ShuffleNet and ResNet-50 at different widths respectively. We also demonstrate better performance of slimmable models compared with individual ones across a wide range of applications including COCO bounding-box object detection, instance segmentation and person keypoint detection without tuning hyper-parameters. Lastly we visualize and discuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>. Another related effort is neural network quantization <|cite_start|> (Reference: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations: We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.) <|cite_end|> <|cite_start|> (Reference: XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks: We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. 
This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.) <|cite_end|> <|cite_start|> (Reference: Relaxed Quantization for Discretized Neural Networks: Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.) <|cite_end|>, which seeks to reduce the bits used for each parameter and can frequently reduce the model size by $4\times$ with minimal accuracy drop. A more recent work <|cite_start|> (Reference: Fast and Accurate Model Scaling: In this work we analyze strategies for convolutional neural network scaling; that is, the process of scaling a base convolutional network to endow it with greater computational complexity and consequently representational power. Example scaling strategies may include increasing model width, depth, resolution, etc. While various scaling strategies exist, their tradeoffs are not fully understood. Existing analysis typically focuses on the interplay of accuracy and flops (floating point operations). Yet, as we demonstrate, various scaling strategies affect model parameters, activations, and consequently actual runtime quite differently. In our experiments we show the surprising result that numerous scaling strategies yield networks with similar accuracy but with widely varying properties. This leads us to propose a simple fast compound scaling strategy that encourages primarily scaling model width, while scaling depth and resolution to a lesser extent. Unlike currently popular scaling strategies, which result in about $O(s)$ increase in model activation w.r.t. scaling flops by a factor of $s$, the proposed fast compound scaling results in close to $O(\sqrt{s})$ increase in activations, while achieving excellent accuracy. This leads to comparable speedups on modern memory-limited hardware (e.g., GPU, TPU). More generally, we hope this work provides a framework for analyzing and selecting scaling strategies under various computational constraints.) <|cite_end|>presents a framework for analyzing model scaling strategies that takes into account network properties such as FLOPs and activations.
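As a concrete illustration of the bit-reduction idea behind quantization (a generic post-training sketch rather than the specific procedures of the works cited above; the function names and the int8 choice are assumptions for the example), the snippet below replaces 32-bit weights with 8-bit codes plus a single scale, which corresponds to roughly the $4\times$ storage reduction mentioned above.
\begin{verbatim}
import torch

def quantize_int8(w):
    # Symmetric post-training quantization: float32 -> int8 plus one scale.
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q, scale):
    return q.float() * scale

w = torch.randn(64, 64)                               # 32-bit weights
q, s = quantize_int8(w)                               # 8-bit codes
reconstruction_error = (dequantize(q, s) - w).abs().mean()
\end{verbatim}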
\textbf{Parameter Regularization and Priors.}
Another highly related direction to this work is parameter regularization. Regularization has been widely used to reduce model redundancy <|cite_start|> (Reference: A Simple Weight Decay Can Improve Generalization: It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk.) <|cite_end|>, alleviate model overfitting <|cite_start|> (Reference: {Dropout: A Simple Way to Prevent Neural Networks from Overfitting: Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.) <|cite_end|> <|cite_start|> (Reference: Regularization of Neural Networks Using DropConnect: We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.) <|cite_end|>, and ensure desired mathematical regularity <|cite_start|> (Reference: Orthogonal Convolutional Neural Networks: Deep convolutional neural networks are hindered by training instability and feature redundancy towards further performance improvement. A promising solution is to impose orthogonality on convolutional filters. 
We develop an efficient approach to impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel instead of using the common kernel orthogonality approach, which we show is only necessary but not sufficient for ensuring orthogonal convolutions. Our proposed orthogonal convolution requires no additional parameters and little computational overhead. This method consistently outperforms the kernel orthogonality alternative on a wide range of tasks such as image classification and inpainting under supervised, semi-supervised and unsupervised settings. Further, it learns more diverse and expressive features with better training stability, robustness, and generalization. Our code is publicly available at https://github.com/samaonline/Orthogonal-Convolutional-Neural-Networks.) <|cite_end|>. The RPG can be viewed as a parameter regularization in the sense that weight sharing poses many equality constraints to weights and regularizes weights to a low-dimensional space. HyperNeat <|cite_start|> (Reference: {A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks: Research in neuroevolutionthat is, evolving artificial neural networks (ANNs) through evolutionary algorithmsis inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.) <|cite_end|>and CPPNs <|cite_start|> (Reference: {Compositional Pattern Producing Networks: A Novel Abstraction of Development: Natural DNA can encode complexity on an enormous scale. Researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings , i.e. encodings that map the genotype to the phenotype through a process of growth from a small starting point to a mature form. A major challenge in in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies. In this paper, a novel abstraction of natural development, called Compositional Pattern Producing Networks (CPPNs), is proposed. 
Unlike currently accepted abstractions such as iterative rewrite systems and cellular growth simulations, CPPNs map to the phenotype without local interaction, that is, each individual component of the phenotype is determined independently of every other component. Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.) <|cite_end|>use neural networks to determine the weight between two neurons as a function of their positions. Similarly, <|cite_start|> (Reference: Probabilistic Meta-Representations Of Neural Networks: Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently. Here, we consider a richer prior distribution in which units in the network are represented by latent variables, and the weights between units are drawn conditionally on the values of the collection of those variables. This allows rich correlations between related weights, and can be seen as realizing a function prior with a Bayesian complexity regularizer ensuring simple solutions. We illustrate the resulting meta-representations and representations, elucidating the power of this prior.) <|cite_end|> <|cite_start|> (Reference: Hierarchical Gaussian Process Priors for Bayesian Neural Network Weights: Probabilistic neural networks are typically modeled with independent weight priors, which do not capture weight correlations in the prior and do not provide a parsimonious interface to express properties in function space. A desirable class of priors would represent weights compactly, capture correlations between weights, facilitate calibrated reasoning about uncertainty, and allow inclusion of prior knowledge about the function space such as periodicity or dependence on contexts such as inputs. To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for network weights based on unit embeddings that can flexibly encode correlated weight structures, and (ii) input-dependent versions of these weight priors that can provide convenient ways to regularize the function space through the use of kernels defined on contextual inputs. We show these models provide desirable test-time uncertainty estimates on out-of-distribution data, demonstrate cases of modeling inductive biases for neural networks with kernels which help both interpolation and extrapolation from training data, and demonstrate competitive predictive performance on an active learning benchmark.) <|cite_end|>introduced a similar idea by providing a hierarchical prior for the neural network weights.
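One convenient way to make this equality-constraint view explicit (the notation below is introduced here for illustration and is not necessarily that of the original paper) is to stack all generated weights into a single vector:
\begin{equation}
w \;=\; S\,\theta, \qquad w \in \mathbb{R}^{N}, \quad \theta \in \mathbb{R}^{K}, \quad K \ll N,
\end{equation}
where each row of $S$ contains exactly one nonzero entry of $\pm 1$ that selects (and possibly sign-flips) a single bank parameter. All generated weights are thus confined to a $K$-dimensional linear subspace of the full $N$-dimensional weight space, which is the sense in which the shared parameters act as a regularizer.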
\textbf{Recurrent Networks and Deep Equilibrium Models.} Recurrence and feedback have been shown in psychology and neuroscience to act as modulators or competitive inhibitors to aid feature grouping <|cite_start|> (Reference: Brain States: Top-Down Influences in Sensory Processing: ) <|cite_end|>, figure-ground segregation <|cite_start|> (Reference: Cortical feedback improves discrimination between figure and background by V1, V2 and V3 neurons: ) <|cite_end|>and object recognition <|cite_start|> (Reference: The limits of feedforward vision: Recurrent processing promotes robust object recognition when objects are degraded: Everyday vision requires robustness to a myriad of environmental factors that degrade stimuli. Foreground clutter can occlude objects of interest, and complex lighting and shadows can decrease the contrast of items. How does the brain recognize visual objects despite these low-quality inputs? On the basis of predictions from a model of object recognition that contains excitatory feedback, we hypothesized that recurrent processing would promote robust recognition when objects were degraded by strengthening bottom–up signals that were weakened because of occlusion and contrast reduction. To test this hypothesis, we used backward masking to interrupt the processing of partially occluded and contrast reduced images during a categorization experiment. As predicted by the model, we found significant interactions between the mask and occlusion and the mask and contrast, such that the recognition of heavily degraded stimuli was differentially impaired by masking. The model provided a close fit of these results in an isomorphic version of the experiment with identical stimuli. The model also provided an intuitive explanation of the interactions between the mask and degradations, indicating that masking interfered specifically with the extensive recurrent processing necessary to amplify and resolve highly degraded inputs, whereas less degraded inputs did not require much amplification and could be rapidly resolved, making them less susceptible to masking. Together, the results of the experiment and the accompanying model simulations illustrate the limits of feedforward vision and suggest that object recognition is better characterized as a highly interactive, dynamic process that depends on the coordination of multiple brain areas.) <|cite_end|>. Recurrence-inspired mechanisms also achieve success in feed-forward models. There are two main types of employing recurrence based on if weights are shared across recurrent modules. ResNet <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. 
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|>, a representative of reusing similar structures without weight sharing, introduces parallel residual connections and achieves better performance by going deeper in networks. Similarly, some works <|cite_start|> (Reference: Going Deeper with Convolutions: We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.) <|cite_end|> <|cite_start|> (Reference: Highway Networks: There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on "information highways". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures.) <|cite_end|>also suggest iteratively injecting thus-far representations to the feed-forward network useful. Stacked inference methods <|cite_start|> (Reference: Pose Machines: Articulated Pose Estimation via Inference Machines: ) <|cite_end|> <|cite_start|> (Reference: Original Contribution: Stacked generalization: ) <|cite_end|> <|cite_start|> (Reference: Structured Prediction Cascades: Structured prediction tasks pose a fundamental trade-off between the need for model complexity to increase predictive power and the limited computational resources for inference in the exponentially-sized output spaces such models require. We formulate and develop the Structured Prediction Cascade architecture: a sequence of increasingly complex models that progressively filter the space of possible outputs. The key principle of our approach is that each model in the cascade is optimized to accurately filter and refine the structured output state space of the next model, speeding up both learning and inference in the next layer of the cascade. 
We learn cascades by optimizing a novel convex loss function that controls the trade-off between the filtering efficiency and the accuracy of the cascade, and provide generalization bounds for both accuracy and efficiency. We also extend our approach to intractable models using tree-decomposition ensembles, and provide algorithms and theory for this setting. We evaluate our approach on several large-scale problems, achieving state-of-the-art performance in handwriting recognition and human pose recognition. We find that structured prediction cascades allow tremendous speedups and the use of previously intractable features and models in both settings.) <|cite_end|>are also related while they consider each output in isolation. Several works find sharing weights across recurrent modules beneficial. They demonstrate applications in temporal modelling <|cite_start|> (Reference: Structured Prediction Cascades: Structured prediction tasks pose a fundamental trade-off between the need for model complexity to increase predictive power and the limited computational resources for inference in the exponentially-sized output spaces such models require. We formulate and develop the Structured Prediction Cascade architecture: a sequence of increasingly complex models that progressively filter the space of possible outputs. The key principle of our approach is that each model in the cascade is optimized to accurately filter and refine the structured output state space of the next model, speeding up both learning and inference in the next layer of the cascade. We learn cascades by optimizing a novel convex loss function that controls the trade-off between the filtering efficiency and the accuracy of the cascade, and provide generalization bounds for both accuracy and efficiency. We also extend our approach to intractable models using tree-decomposition ensembles, and provide algorithms and theory for this setting. We evaluate our approach on several large-scale problems, achieving state-of-the-art performance in handwriting recognition and human pose recognition. We find that structured prediction cascades allow tremendous speedups and the use of previously intractable features and models in both settings.) <|cite_end|> <|cite_start|> (Reference: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting: The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.) <|cite_end|> <|cite_start|> (Reference: Deep Visual-Semantic Alignments for Generating Image Descriptions: We present a model that generates natural language descriptions of images and their regions. 
Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.) <|cite_end|>, spatial attention <|cite_start|> (Reference: Recurrent Models of Visual Attention: Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.) <|cite_end|> <|cite_start|> (Reference: Optimal scanning for faster object detection: Recent years have seen the development of fast and accurate algorithms for detecting objects in images. However, as the size of the scene grows, so do the running-times of these algorithms. If a 128×102 pixel image requires 20 ms to process, searching for objects in a 1280×1024 image will take 2 s. This is unsuitable under real-time operating constraints: by the time a frame has been processed, the object may have moved. An analogous problem occurs when controlling robot camera that need to scan scenes in search of target objects. In this paper, we consider a method for improving the run-time of general-purpose object-detection algorithms. Our method is based on a model of visual search in humans, which schedules eye fixations to maximize the long-term information accrued about the location of the target of interest. The approach can be used to drive robot cameras that physically scan scenes or to improve the scanning speed for very large high resolution images. We consider the latter application in this work by simulating a “digital fovea” and sequentially placing it in various regions of an image in a way that maximizes the expected information gain. We evaluate the approach using the OpenCV version of the Viola-Jones face detector. After accounting for all computational overhead introduced by the fixation controller, the approach doubles the speed of the standard Viola-Jones detector at little cost in accuracy.) 
<|cite_end|>, pose estimation <|cite_start|> (Reference: Convolutional Pose Machines: Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.) <|cite_end|> <|cite_start|> (Reference: Human Pose Estimation with Iterative Error Feedback: Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.) <|cite_end|>, and so on <|cite_start|> (Reference: Iterative Instance Segmentation: Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible. While incorporating structure into the model should improve prediction quality, doing so is challenging - manually specifying the form of structural constraints may be impractical and inference often becomes intractable even if structural constraints are given. We sidestep this problem by reducing structured prediction to a sequence of unconstrained prediction problems and demonstrate that this approach is capable of automatically discovering priors on shape, contiguity of region predictions and smoothness of region contours from data without any a priori specification. On the instance segmentation task, this method outperforms the state-of-the-art, achieving a mean $\mathrm{AP}^{r}$ of 63.6% at 50% overlap and 43.3% at 70% overlap.) 
<|cite_end|> <|cite_start|> (Reference: Feedback Networks: Currently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iteration's output. We establish that a feedback based approach has several fundamental advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback networks develop a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We put forth a general feedback based learning architecture with the endpoint results on par or better than existing feedforward networks with the addition of the above advantages. We also investigate several mechanisms in feedback architectures (e.g. skip connections in time) and design choices (e.g. feedback length). We hope this study offers new perspectives in quest for more natural and practical learning models.) <|cite_end|>. Such methods usually shine in modeling long-term dependencies. In this work, we recurrently share weights across different layers of a feedback network to reduce network redundancy.
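For reference, the generic weight-tied recurrence used by such feedback models can be sketched as follows (an illustrative PyTorch toy with hypothetical names, not any particular cited architecture): the same block is applied repeatedly, so the effective depth grows with the number of iterations while the parameter count stays that of a single block.
\begin{verbatim}
import torch
import torch.nn as nn

class WeightTiedStack(nn.Module):
    # The same conv block is applied `steps` times: depth without new parameters.
    def __init__(self, channels=32, steps=4):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.steps = steps

    def forward(self, x):
        for _ in range(self.steps):   # identical weights reused at every "layer"
            x = x + self.block(x)     # residual update keeps the iteration stable
        return x

y = WeightTiedStack()(torch.randn(2, 32, 16, 16))
\end{verbatim}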
Given stacking weight-shared modules improve the performance, researchers consider running even infinite depth of such modules by making the sequential modules converge to a fixed point <|cite_start|> (Reference: A Theoretical Framework for Back-Propagation: ) <|cite_end|> <|cite_start|> (Reference: Deep Equilibrium Models: We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq .) <|cite_end|>. Employing such \textit{equilibrium} models to existing networks, they show improved performance in many natural language processing <|cite_start|> (Reference: Deep Equilibrium Models: We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq .) <|cite_end|>and computer vision tasks <|cite_start|> (Reference: Multiscale Deep Equilibrium Models: We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ), suited to large-scale and highly hierarchical pattern recognition domains. 
An MDEQ directly solves for and backpropagates through the equilibrium points of multiple feature resolutions simultaneously, using implicit differentiation to avoid storing intermediate states (and thus requiring only $O(1)$ memory consumption). These simultaneously-learned multi-resolution features allow us to train a single model on a diverse set of tasks and loss functions, such as using a single MDEQ to perform both image classification and semantic segmentation. We illustrate the effectiveness of this approach on two large-scale vision tasks: ImageNet classification and semantic segmentation on high-resolution images from the Cityscapes dataset. In both settings, MDEQs are able to match or exceed the performance of recent competitive computer vision models: the first time such performance and scale have been achieved by an implicit deep learning approach. The code and pre-trained models are at https://github.com/locuslab/mdeq .) <|cite_end|> <|cite_start|> (Reference: Implicit Feature Pyramid Network for Object Detection: In this paper, we present an implicit feature pyramid network (i-FPN) for object detection. Existing FPNs stack several cross-scale blocks to obtain large receptive field. We propose to use an implicit function, recently introduced in deep equilibrium model (DEQ), to model the transformation of FPN. We develop a residual-like iteration to updates the hidden states efficiently. Experimental results on MS COCO dataset show that i-FPN can significantly boost detection performance compared to baseline detectors with ResNet-50-FPN: +3.4, +3.2, +3.5, +4.2, +3.2 mAP on RetinaNet, Faster-RCNN, FCOS, ATSS and AutoAssign, respectively.) <|cite_end|>. One issue with deep equilibrium models is that the forward and backward propagation usually takes much more iterations than explicit feed-forward networks. Some work <|cite_start|> (Reference: Fixed point networks: Implicit depth models with jacobian-free backprop: A growing trend in deep learning replaces fixed depth models by approximations of the limit as network depth approaches infinity. This approach uses a portion of network weights to prescribe behavior by defining a limit condition. This makes network depth implicit , varying based on the provided data and an error tolerance. Moreover, existing implicit models can be implemented and trained with fixed memory costs in exchange for additional computational costs. In particular, backpropagation through implicit depth models requires solving a Jacobian-based equation arising from the implicit function theorem. We propose fixed point networks (FPNs), a simple setup for implicit depth learning that guarantees convergence of forward propagation to a unique limit defined by network weights and input data. Our key contribution is to provide a new Jacobian-free backpropagation (JFB) scheme that circumvents the need to solve Jacobian-based equations while maintaining fixed memory costs. This makes FPNs much cheaper to train and easy to implement. Our numerical examples yield state of the art classification results for implicit depth models and outperform corresponding explicit models. 2) <|cite_end|>improves the efficiency by making the backward propagation Jacobian free. Another issue is that \textit{infinite} depth and fixed point may not be necessary or even too strict for some tasks.
Instead of pursuing infinite depth, our model shares parameters only to a finite extent. We empirically compare with equilibrium models in Section \ref{sec:exp}.
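For completeness, the DEQ-style forward pass described above can be sketched as a naive fixed-point iteration: a weight-tied cell is applied until its output stops changing, approximating the equilibrium $z^{*} = f_{\theta}(z^{*}, x)$. This is a simplified illustration with hypothetical names; practical deep equilibrium models use faster root solvers and backpropagate through the equilibrium via implicit differentiation rather than through the unrolled loop.
\begin{verbatim}
import torch
import torch.nn as nn

class TiedCell(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.lin_z = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)

    def forward(self, z, x):
        return torch.tanh(self.lin_z(z) + self.lin_x(x))

def forward_fixed_point(f, x, max_iter=50, tol=1e-4):
    # Iterate z <- f(z, x) until z stops changing (approximate equilibrium z*).
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if (z_next - z).norm() < tol * (z.norm() + 1e-8):
            break
        z = z_next
    return z_next

z_star = forward_fixed_point(TiedCell(), torch.randn(8, 64))
\end{verbatim}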
\textbf{Efficient Network Space and Matrix Factorization.}
Convolution is an efficient and structured matrix-vector multiplication. Arguably, the most fundamental idea in building efficient linear systems is matrix factorization. Given the redundancy in deep convolutional neural network parameters, one can leverage the matrix factorization concept, e.g., factorized convolutions, and design more efficient network classes | [
"<|reference_start|> SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size: Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). \nThe SqueezeNet architecture is available for download here: this https URL <|reference_end|>",
"<|reference_start|> A Simple Weight Decay Can Improve Generalization: It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk. <|reference_end|>",
"<|reference_start|> Convolutional Pose Machines: Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets. <|reference_end|>",
"<|reference_start|> Multiscale Deep Equilibrium Models: We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ), suited to large-scale and highly hierarchical pattern recognition domains. An MDEQ directly solves for and backpropagates through the equilibrium points of multiple feature resolutions simultaneously, using implicit differentiation to avoid storing intermediate states (and thus requiring only $O(1)$ memory consumption). These simultaneously-learned multi-resolution features allow us to train a single model on a diverse set of tasks and loss functions, such as using a single MDEQ to perform both image classification and semantic segmentation. We illustrate the effectiveness of this approach on two large-scale vision tasks: ImageNet classification and semantic segmentation on high-resolution images from the Cityscapes dataset. In both settings, MDEQs are able to match or exceed the performance of recent competitive computer vision models: the first time such performance and scale have been achieved by an implicit deep learning approach. The code and pre-trained models are at https://github.com/locuslab/mdeq . <|reference_end|>"
] | [
7,
35,
57,
64
] | {"<|cite_1|>": "ss-832115", "<|cite_2|>": "arxiv-299755", "<|multi_cite_3_1|>": "ss-1117443", "<|multi_cite_3_2|>": "arxiv-84906", "<|multi_cite_3_3|>": "arxiv-175999", "<|multi_cite_3_4|>": "arxiv-151068", "<|multi_cite_4_1|>": "arxiv-121831", "<|multi_cite_4_2|>": "ss-717959", "<|multi_cite_4_3|>": "arxiv-206505", "<|multi_cite_4_4|>": "arxiv-145365", "<|multi_cite_5_1|>": "ss-1137825", "<|multi_cite_5_2|>": "arxiv-236586", "<|multi_cite_5_3|>": "ss-771393", "<|multi_cite_6_1|>": "arxiv-106381", "<|multi_cite_6_2|>": "arxiv-94105", "<|multi_cite_6_3|>": "arxiv-174991", "<|multi_cite_7_1|>": "arxiv-109401", "<|multi_cite_7_2|>": "arxiv-182794", "<|multi_cite_7_3|>": "arxiv-258903", "<|multi_cite_8_1|>": "arxiv-221689", "<|multi_cite_8_2|>": "arxiv-272066", "<|multi_cite_8_3|>": "arxiv-91290", "<|multi_cite_9_1|>": "arxiv-205117", "<|multi_cite_9_2|>": "arxiv-298938", "<|multi_cite_10_1|>": "ss-1565974", "<|multi_cite_10_2|>": "ss-1117443", "<|multi_cite_11_1|>": "arxiv-84906", "<|multi_cite_11_2|>": "arxiv-252302", "<|cite_12|>": "arxiv-191574", "<|multi_cite_13_1|>": "arxiv-185331", "<|multi_cite_13_2|>": "ss-832115", "<|multi_cite_14_1|>": "arxiv-106381", "<|multi_cite_14_2|>": "arxiv-94105", "<|multi_cite_14_3|>": "arxiv-174991", "<|cite_15|>": "arxiv-326925", "<|cite_16|>": "ss-1201783", "<|multi_cite_17_1|>": "ss-771393", "<|multi_cite_17_2|>": "ss-1137825", "<|cite_18|>": "arxiv-236586", "<|cite_19|>": "ss-909311", "<|cite_20|>": "ss-1076530", "<|multi_cite_21_1|>": "arxiv-174614", "<|multi_cite_21_2|>": "arxiv-247493", "<|cite_22|>": "ss-2029582", "<|cite_23|>": "ss-1200465", "<|cite_24|>": "ss-1214636", "<|cite_25|>": "arxiv-88870", "<|multi_cite_26_1|>": "arxiv-66180", "<|multi_cite_26_2|>": "arxiv-77124", "<|multi_cite_27_1|>": "ss-1375485", "<|multi_cite_27_2|>": "ss-711031", "<|multi_cite_27_3|>": "arxiv-35267", "<|multi_cite_28_1|>": "arxiv-35267", "<|multi_cite_28_2|>": "arxiv-79343", "<|multi_cite_28_3|>": "arxiv-69800", "<|multi_cite_29_1|>": "arxiv-62640", "<|multi_cite_29_2|>": "ss-1256362", "<|multi_cite_30_1|>": "arxiv-91290", "<|multi_cite_30_2|>": "arxiv-81470", "<|multi_cite_31_1|>": "arxiv-88089", "<|multi_cite_31_2|>": "arxiv-113503", "<|multi_cite_32_1|>": "ss-1001179", "<|multi_cite_32_2|>": "arxiv-221689", "<|cite_33|>": "arxiv-221689", "<|multi_cite_34_1|>": "arxiv-272066", "<|multi_cite_34_2|>": "arxiv-312134", "<|cite_35|>": "ss-1406972", "<|multi_cite_36_1|>": "arxiv-121831", "<|multi_cite_36_2|>": "ss-717959", "<|multi_cite_36_3|>": "arxiv-206505", "<|multi_cite_36_4|>": "arxiv-145365"} |
2206.01626 | <|paper_start|> Title: Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress
Abstract: Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress: Learning tabula rasa, that is without any prior knowledge, is the prevalent workflow in reinforcement learning (RL) research. However, RL systems, when applied to large-scale settings, rarely operate tabula rasa. Such large-scale systems undergo multiple design or algorithmic changes during their development cycle and use ad hoc approaches for incorporating these changes without re-training from scratch, which would have been prohibitively expensive. Additionally, the inefficiency of deep RL typically excludes researchers without access to industrial-scale resources from tackling computationally-demanding problems. To address these issues, we present reincarnating RL as an alternative workflow or class of problem settings, where prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another. As a step towards enabling reincarnating RL from any agent to any other agent, we focus on the specific setting of efficiently transferring an existing sub-optimal policy to a standalone value-based RL agent. We find that existing approaches fail in this setting and propose a simple algorithm to address their limitations. Equipped with this algorithm, we demonstrate reincarnating RL's gains over tabula rasa RL on Atari 2600 games, a challenging locomotion task, and the real-world problem of navigating stratospheric balloons. Overall, this work argues for an alternative approach to RL research, which we believe could significantly improve real-world RL adoption and help democratize it further. Open-sourced code and trained agents at https://agarwl.github.io/reincarnating_rl.
Introduction
\vspace{-0.25cm}
Reinforcement learning~(RL) is a general-purpose paradigm for making data-driven decisions.
Due to this generality, the prevailing trend in RL research is to build systems that learn efficiently \emph{tabula rasa}, that is, without much prior knowledge, including prior computational work such as offline datasets or learned policies.
However, tabula rasa RL systems are typically the exception rather than the norm for solving large-scale RL problems <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|> <|cite_start|> (Reference: Solving Rubik's Cube with a Robot Hand: We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR) and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik's cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: https://openai.com/blog/solving-rubiks-cube/) <|cite_end|> <|cite_start|> (Reference: Dota 2 with Large Scale Deep Reinforcement Learning: On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.) <|cite_end|> <|cite_start|> (Reference: Grandmaster level in StarCraft II using multi-agent reinforcement learning: ) <|cite_end|> <|cite_start|> (Reference: AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale: Robotic skills can be learned via imitation learning (IL) using user-provided demonstrations, or via reinforcement learning (RL) using large amounts of autonomously collected experience. Both methods have complementary strengths and weaknesses: RL can reach a high level of performance, but requires exploration, which can be very time consuming and unsafe; IL does not require exploration, but only learns skills that are as good as the provided demonstrations. Can a single method combine the strengths of both approaches? A number of prior methods have aimed to address this question, proposing a variety of techniques that integrate elements of IL and RL. However, scaling up such methods to complex robotic skills that integrate diverse offline data and generalize meaningfully to real-world scenarios still presents a major challenge. In this paper, our aim is to test the scalability of prior IL + RL algorithms and devise a system based on detailed empirical experimentation that combines existing components in the most effective and scalable way.
To that end, we present a series of experiments aimed at understanding the implications of each design decision, so as to develop a combined approach that can utilize demonstrations and heterogeneous prior data to attain the best performance on a range of real-world and realistic simulated robotic problems. Our complete method, which we call AW-Opt, combines elements of advantage-weighted regression [1, 2] and QT-Opt [3], providing a unified approach for integrating demonstrations and offline data for robotic manipulation. Please see https://awopt.github.io for more details.) <|cite_end|>. Such large-scale RL systems often need to function for long periods of time and continually experience new data; restarting them from scratch may require weeks if not months of computation, and there may be billions of data points to re-process – this makes the tabula rasa approach impractical. For example, the system that plays Dota 2 at a human-like level <|cite_start|> (Reference: Dota 2 with Large Scale Deep Reinforcement Learning: On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.) <|cite_end|> underwent several months of RL training with continual changes~(\eg in model architecture, environment, \etc) during its development; this necessitated building upon the previously trained system after such changes to circumvent re-training from scratch, {which was done using \textbf{\emph{ad hoc}} approaches~(described in \Secref{sec:related})}.
Current RL research also excludes the majority of researchers outside certain resource-rich labs from tackling complex problems, as doing so often incurs substantial computational and financial cost: AlphaStar <|cite_start|> (Reference: Grandmaster level in StarCraft II using multi-agent reinforcement learning: ) <|cite_end|>, which achieves grandmaster level in StarCraft, was trained using TPUs for more than a month and replicating it would cost several million dollars~(Appendix~\ref{app:alphastar}). Even the quintessential deep RL benchmark of training an agent on 50+ Atari games <|cite_start|> (Reference: The Arcade Learning Environment: An Evaluation Platform for General Agents: In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.) <|cite_end|>, with at least 5 runs, requires more than 1000 GPU days.
As deep RL research moves towards more challenging problems, the computational barrier to entry in RL research is likely to increase further.
\begin{figure}[t]
\centering
\includegraphics[width=0.93\linewidth]{figures/main_reincarnation_annotated.pdf}
\vspace{-0.05cm}
\caption{ \textbf{A \name RL workflow on ALE}. The plots show IQM <|cite_start|> (Reference: Deep Reinforcement Learning at the Edge of the Statistical Precipice: Deep reinforcement learning (RL) algorithms are predominantly evaluated by comparing their relative performance on a large suite of tasks. Most published results on deep RL benchmarks compare point estimates of aggregate performance such as mean and median scores across tasks, ignoring the statistical uncertainty implied by the use of a finite number of training runs. Beginning with the Arcade Learning Environment (ALE), the shift towards computationally-demanding benchmarks has led to the practice of evaluating only a small number of runs per task, exacerbating the statistical uncertainty in point estimates. In this paper, we argue that reliable evaluation in the few run deep RL regime cannot ignore the uncertainty in results without running the risk of slowing down progress in the field. We illustrate this point using a case study on the Atari 100k benchmark, where we find substantial discrepancies between conclusions drawn from point estimates alone versus a more thorough statistical analysis. With the aim of increasing the field's confidence in reported results with a handful of runs, we advocate for reporting interval estimates of aggregate performance and propose performance profiles to account for the variability in results, as well as present more robust and efficient aggregate metrics, such as interquartile mean scores, to achieve small uncertainty in results. Using such statistical tools, we scrutinize performance evaluations of existing algorithms on other widely used RL benchmarks including the ALE, Procgen, and the DeepMind Control Suite, again revealing discrepancies in prior comparisons. Our findings call for a change in how we evaluate performance in deep RL, for which we present a more rigorous evaluation methodology, accompanied with an open-source library rliable, to prevent unreliable results from stagnating the field.) <|cite_end|> normalized scores over training, computed using 50 seeds, aggregated across 10 Atari games. The vertical separators correspond to loading network weights and replay buffer for fine-tuning while offline pre-training on replay buffer using \alg~(\Secref{sec:qdagger}) for reincarnation. Shaded regions show 95\% confidence intervals. We assign a score of 1 to DQN (Adam) trained for 400M frames and 0 to a random agent. \textbf{(Panel 1)} \emph{Tabula rasa} Nature DQN <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> nearly converges in performance after training for 200M frames. \textbf{(Panel 2)} \textbf{Reincarnation via fine-tuning} Nature DQN with a reduced learning rate leads to 50\% higher IQM with only 1M additional frames~(leftmost point).
Furthermore, fine-tuning Nature DQN while switching from RMSProp to Adam matches the performance of DQN (Adam) trained from scratch for 400M frames, using only 20M frames. \textbf{(Panel 3)}. A modern ResNet~(Impala-CNN <|cite_start|> (Reference: IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.) <|cite_end|>) with a better algorithm~(Rainbow <|cite_start|> (Reference: Rainbow: Combining Improvements in Deep Reinforcement Learning: The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.) <|cite_end|>) outperforms further fine-tuning $n$-step DQN. \textbf{Reincarnating Impala-CNN Rainbow from DQN}, outperforms tabula rasa Impala-CNN Rainbow throughout training and requires only 50M frames to nearly match its performance at 100M frames. See Section~\ref{sec:workflow_experiments}.}
\label{fig:reincarnation_wf}
\vspace{-0.5cm}
\end{figure}
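For readers reconstructing the evaluation protocol from the caption above, the following minimal sketch shows one way the score normalization and IQM aggregation could be implemented; the trimming scheme and the per-game reference scores are illustrative assumptions here, not values taken from this paper.
\begin{verbatim}
import numpy as np

def normalized_score(raw, random_score, dqn_adam_400m_score):
    # Map a raw game score so that a random agent scores 0 and
    # DQN (Adam) trained for 400M frames scores 1.
    return (raw - random_score) / (dqn_adam_400m_score - random_score)

def iqm(scores):
    # Interquartile mean: mean of the middle 50% of the pooled
    # per-run, per-game normalized scores.
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()
\end{verbatim}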
To address both the computational and sample inefficiencies of tabula rasa RL, we present \emph{\name}RL~(RRL) as an alternative research workflow {or a class of problems to focus on}. RRL seeks to \emph{maximally leverage existing computational work, such as learned network weights and collected data}, to accelerate training across design iterations of an RL agent or when moving from one agent to another. In RRL, agents need not be trained tabula rasa, except for initial forays into new problems. For example, imagine a researcher who has trained an agent $\gA_1$ for a long time (\eg weeks), but now this or another researcher wants to experiment with better architectures or RL algorithms. While the tabula rasa workflow requires re-training another agent from scratch, \name RL provides the more viable option of transferring $\gA_1$ to another agent and training this agent further, or simply fine-tuning $\gA_1$~(\figref{fig:reincarnation_wf}).
As such, RRL can be viewed as an attempt to provide a formal foundation for the research workflow needed for real-world and large-scale RL models.
Reincarnating RL can democratize research by allowing the broader community to tackle larger-scale and complex RL problems without requiring excessive computational resources. As a consequence, RRL can also help avoid the risk of researchers overfitting to conclusions from small-scale RL problems. Furthermore, RRL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents, {especially on problems where improving performance has real-world impact~(\eg balloon navigation <|cite_start|> (Reference: Autonomous navigation of stratospheric balloons using reinforcement learning: ) <|cite_end|>, chip design <|cite_start|> (Reference: A graph placement methodology for fast chip design: ) <|cite_end|>, tokamak control <|cite_start|> (Reference: Magnetic control of tokamak plasmas through deep reinforcement learning: ) <|cite_end|>)}. Moreover, a common real-world RL use case will likely be in scenarios where prior computational work is available~(\eg existing deployed RL policies), making RRL important to study.
However, beyond some \emph{ad hoc} large-scale reincarnation efforts~(\Secref{sec:related}), the community has not focused much on studying \name RL as a research problem in its own right.
To address this gap, this work argues for developing general-purpose RRL approaches as opposed to \emph{ad hoc} solutions.
Different RRL problems can be instantiated depending on how the prior computational work is provided: logged datasets, learned policies, pretrained models, representations, \etc. As a step towards developing broadly applicable reincarnation approaches, we focus on the specific setting of \emph{policy-to-value} \name RL~(PVRL) for efficiently transferring a suboptimal teacher policy to a value-based RL student agent~(\Secref{sec:value_policy}). Since it is undesirable to maintain dependency on past teachers for successive reincarnations, we require a PVRL algorithm to ``wean'' off the teacher dependence as training progresses. We find that prior approaches, when evaluated for PVRL on the Arcade Learning Environment~(ALE) <|cite_start|> (Reference: The Arcade Learning Environment: An Evaluation Platform for General Agents: In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.) <|cite_end|>, either result in small improvements over the tabula rasa student or exhibit degradation when weaning off the teacher. To address these limitations, we introduce \alg, which combines Dagger <|cite_start|> (Reference: A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning: Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.) <|cite_end|> with $n$-step Q-learning, and outperforms prior approaches. 
Equipped with \alg, we demonstrate the sample and compute-efficiency gains of \name RL over tabula rasa RL on ALE, a humanoid locomotion task, and the simulated real-world problem of navigating stratospheric balloons <|cite_start|> (Reference: Autonomous navigation of stratospheric balloons using reinforcement learning: ) <|cite_end|>~(\Secref{sec:workflow_experiments}).
Finally, we discuss some considerations in RRL as well as
address reproducibility and generalizability concerns.
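To make the PVRL setting more concrete, the sketch below illustrates one way a teacher-weaning objective in the spirit of \alg\ could be written, combining an $n$-step TD loss with a teacher-distillation term whose coefficient is annealed toward zero; the loss form, weighting schedule, and helper names here are illustrative assumptions rather than the exact formulation from \Secref{sec:qdagger}.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pvrl_style_loss(q_net, target_q, teacher_probs, batch, lam, gamma_n):
    # lam in [0, 1] scales the distillation term and is annealed
    # toward 0 over training to wean the student off the teacher.
    s, a, ret_n, s_n, done = batch          # n-step returns precomputed
    q = q_net(s)
    with torch.no_grad():                   # n-step bootstrapped target
        target = ret_n + gamma_n * (1 - done) * target_q(s_n).max(dim=1).values
    td_loss = F.smooth_l1_loss(q.gather(1, a.unsqueeze(1)).squeeze(1), target)
    # Distill the teacher's action distribution into the student's
    # softmax policy over Q-values.
    distill = F.kl_div(F.log_softmax(q, dim=1), teacher_probs,
                       reduction="batchmean")
    return td_loss + lam * distill
\end{verbatim}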
\vspace{-0.25cm} <|paper_end|> | [
"<|reference_start|> Mastering the game of Go with deep neural networks and tree search: <|reference_end|>",
"<|reference_start|> Dota 2 with Large Scale Deep Reinforcement Learning: On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task. <|reference_end|>",
"<|reference_start|> AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale: Robotic skills can be learned via imitation learning (IL) using user-provided demonstrations, or via reinforcement learning (RL) using large amountsof autonomously collected experience.Both methods have complementarystrengths and weaknesses: RL can reach a high level of performance, but requiresexploration, which can be very time consuming and unsafe; IL does not requireexploration, but only learns skills that are as good as the provided demonstrations.Can a single method combine the strengths of both approaches? A number ofprior methods have aimed to address this question, proposing a variety of tech-niques that integrate elements of IL and RL. However, scaling up such methodsto complex robotic skills that integrate diverse offline data and generalize mean-ingfully to real-world scenarios still presents a major challenge. In this paper, ouraim is to test the scalability of prior IL + RL algorithms and devise a system basedon detailed empirical experimentation that combines existing components in themost effective and scalable way. To that end, we present a series of experimentsaimed at understanding the implications of each design decision, so as to develop acombined approach that can utilize demonstrations and heterogeneous prior datato attain the best performance on a range of real-world and realistic simulatedrobotic problems. Our complete method, which we call AW-Opt, combines ele-ments of advantage-weighted regression [1, 2] and QT-Opt [3], providing a unifiedapproach for integrating demonstrations and offline data for robotic manipulation.Please see https://awopt.github.io for more details. <|reference_end|>",
"<|reference_start|> IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach. <|reference_end|>"
] | [
0,
2,
4,
10
] | {"<|multi_cite_1_1|>": "ss-805362", "<|multi_cite_1_2|>": "arxiv-229065", "<|multi_cite_1_3|>": "arxiv-239288", "<|multi_cite_1_4|>": "ss-679381", "<|multi_cite_1_5|>": "ss-2085171", "<|cite_2|>": "arxiv-239288", "<|cite_3|>": "ss-679381", "<|cite_4|>": "arxiv-34401", "<|cite_5|>": "arxiv-363752", "<|cite_6|>": "ss-749221", "<|cite_7|>": "arxiv-147318", "<|cite_8|>": "arxiv-136607", "<|cite_9|>": "ss-868674", "<|cite_10|>": "ss-1222137", "<|cite_11|>": "ss-737262", "<|cite_12|>": "arxiv-34401", "<|cite_13|>": "arxiv-17086", "<|cite_14|>": "ss-868674"} |
2110.14904-0 | <|paper_start|> Title: MERCURY: Accelerating DNN Training By Exploiting Input Similarity
Abstract: MERCURY: Accelerating DNN Training By Exploiting Input Similarity: Deep Neural Networks (DNN) are computationally intensive to train. It consists of a large number of multidimensional dot products between many weights and input vectors. However, there can be significant similarity among input vectors. If one input vector is similar to another, its computations with the weights are similar to those of the other and, therefore, can be skipped by reusing the already-computed results. We propose a novel scheme, called MERCURY, to exploit input similarity during DNN training in a hardware accelerator. MERCURY uses Random Projection with Quantization (RPQ) to convert an input vector to a bit sequence, called Signature. A cache (MCACHE) stores signatures of recent input vectors along with the computed results. If the Signature of a new input vector matches that of an already existing vector in the MCACHE, the two vectors are found to have similarities. Therefore, the already-computed result is reused for the new vector. To the best of our knowledge, MERCURY is the first work that exploits input similarity using RPQ for accelerating DNN training in hardware. The paper presents a detailed design, workflow, and implementation of the MERCURY. Our experimental evaluation with twelve different deep learning models shows that MERCURY saves a significant number of computations and speeds up the model training by an average of 1.97X with an accuracy similar to the baseline system.
Introduction
\label{sec:intro}
Deep Neural Networks (DNNs) have become ubiquitous in recent years.
They are used for diverse tasks such as image and video recognition, recommendation systems,
natural language processing, etc. <|cite_start|> (Reference: A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects: Convolutional Neural Network (CNN) is one of the most significant networks in the deep learning field. Since CNN made impressive achievements in many areas, including but not limited to computer vision and natural language processing, it attracted much attention both of industry and academia in the past few years. The existing reviews mainly focus on the applications of CNN in different scenarios without considering CNN from a general perspective, and some novel ideas proposed recently are not covered. In this review, we aim to provide novel ideas and prospects in this fast-growing field as much as possible. Besides, not only two-dimensional convolution but also one-dimensional and multi-dimensional ones are involved. First, this review starts with a brief introduction to the history of CNN. Second, we provide an overview of CNN. Third, classic and advanced CNN models are introduced, especially those key points making them reach state-of-the-art results. Fourth, through experimental analysis, we draw some conclusions and provide several rules of thumb for function selection. Fifth, the applications of one-dimensional, two-dimensional, and multi-dimensional convolution are covered. Finally, some open issues and promising directions for CNN are discussed to serve as guidelines for future work.) <|cite_end|> <|cite_start|> (Reference: Recent Advances in Convolutional Neural Networks: In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Leveraging on the rapid growth in the amount of the annotated data and the great improvements in the strengths of graphics processor units, the research on convolutional neural networks has been emerged swiftly and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. We detailize the improvements of CNN on different aspects, including layer design, activation function, loss function, regularization, optimization and fast computation. Besides, we also introduce various applications of convolutional neural networks in computer vision, speech and natural language processing.) <|cite_end|> <|cite_start|> (Reference: Application of Face Recognition Based on CNN in Fatigue Driving Detection: Fatigue driving detection technology based on the external characteristics of the driver has made some progress in many aspects, but the method of driver facial feature extraction needs to be further improved, and the driver's eye location takes a long time, which affects the system recognition rate. The paper uses the fatigue driving detection method to achieve better results than the traditional detection method. The authors applied the convolutional neural network to face recognition, and improve the pupil localization algorithm, effectively overcoming the problem of large calculation of the original algorithm. According to the characteristics of driver's eyes with different width and height ratios in different states, a simple and feasible method of eye state judgment is realized, and the driver's fatigue state is judged by PERCLOS algorithm. 
The convolutional neural network model is applied to ORL face database, and the face recognition rate is 85%. The improved Hough transform method has a positioning accuracy of 92% for the driver's eyes, respectively, and the recognition rate for the driver's eye state is 83.9%. The authors designed the prototype system of fatigue driving detection based on face recognition which realizes the functions of driver's face feature detection, eye location, eye state judgment and fatigue judgment. The experimental results show that the recognition rate of fatigue is 87.5%.) <|cite_end|>. Due to the versatility of DNN models, special hardware accelerators have been proposed and built <|cite_start|> (Reference: DianNao: a Small-footprint High-throughput Accelerator for Ubiquitous Machine-Learning: Machine-Learning tasks are becoming pervasive in a broad range of domains, and in a broad range of systems (from embedded systems to data centers). At the same time, a small set of machine-learning algorithms (especially Convolutional and Deep Neural Networks, i.e., CNNs and DNNs) are proving to be state-of-the-art across many applications. As architectures evolve towards heterogeneous multi-cores composed of a mix of cores and accelerators, a machine-learning accelerator can achieve the rare combination of efficiency (due to the small number of target algorithms) and broad application scope. Until now, most machine-learning accelerator designs have focused on efficiently implementing the computational part of the algorithms. However, recent state-of-the-art CNNs and DNNs are characterized by their large size. In this study, we design an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance and energy. We show that it is possible to design an accelerator with a high throughput, capable of performing 452 GOP/s (key NN operations such as synaptic weight multiplications and neurons outputs additions) in a small footprint of 3.02 mm2 and 485 mW; compared to a 128-bit 2GHz SIMD processor, the accelerator is 117.87x faster, and it can reduce the total energy by 21.08x. The accelerator characteristics are obtained after layout at 65 nm. Such a high throughput in a small footprint can open up the usage of state-of-the-art machine-learning algorithms in a broad set of systems and for a broad set of applications.) <|cite_end|> <|cite_start|> (Reference: 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture: ) <|cite_end|> <|cite_start|> (Reference: Pudiannao: A Polyvalent Machine Learning Accelerator: Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy-efficiencies are limited due to their excessive supports for flexibility. Hardware accelerators may achieve better energy-efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique performs well on a dataset may perform poorly on another dataset, which implies that such accelerator may sometimes lead to poor learning accuracy. Even if regardless of the learning accuracy, such accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. 
In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefited from our thorough analysis on computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes 596 mW only. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.) <|cite_end|> <|cite_start|> (Reference: 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2016, Taipei, Taiwan, October 15-19, 2016: ) <|cite_end|>.
DNNs are computationally intensive. For example, a Convolutional Neural Network (CNN) can require 30k to 600k operations per pixel. The computation volume is even higher when the accelerator trains a DNN model. However, inputs used during training often have similarities.
Our objective
is to improve the computational efficiency of DNN training by exploiting such similarities.
\subsection{Computations with Input Similarity}
DNN operations consist of numerous multidimensional dot products between weight and input vectors extracted from the weight and input matrices. Let us consider a weight vector $\textbf{w}$ and two input vectors $\textbf{v}_1=[v_{1,1}, v_{1,2}, v_{1,3}]$ and $\textbf{v}_2=[v_{1,1}+\epsilon_1, v_{1,2}+\epsilon_2, v_{1,3}+\epsilon_3]$. If $\epsilon_i$ (for $1\le i\le 3$) represents an insignificant difference, then $\textbf{v}_1$ and $\textbf{v}_2$ have value similarity. The dot product of $\textbf{v}_2$ and $\textbf{w}$ would be
$\textbf{v}_2 \cdot \textbf{w}=\textbf{v}_1 \cdot \textbf{w}+\boldsymbol{\epsilon} \cdot \textbf{w}$. If $\epsilon_i\approx0$, then $\boldsymbol{\epsilon} \cdot \textbf{w}\approx0$, and therefore, $\textbf{v}_2 \cdot \textbf{w}\approx\textbf{v}_1 \cdot \textbf{w}$. In other words, if $\textbf{v}_2$ and $\textbf{v}_1$ have value similarity,
the computation of $\textbf{v}_2$ with a weight vector is considerably similar to that of $\textbf{v}_1$ and, therefore, can be skipped by reusing the results of
$\textbf{v}_1$.
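As a quick sanity check of this argument, the following minimal sketch (with illustrative values, not data from our experiments) confirms that the reused result differs from the exact dot product only by $\boldsymbol{\epsilon} \cdot \textbf{w}$.
\begin{verbatim}
import numpy as np

w   = np.array([0.5, -1.2, 0.8])        # weight vector
v1  = np.array([2.0, 1.0, -1.5])        # cached input vector
eps = np.array([0.01, -0.02, 0.015])    # insignificant differences
v2  = v1 + eps                          # similar input vector

exact  = np.dot(v2, w)                  # full computation
reused = np.dot(v1, w)                  # reuse v1's cached result
print(abs(exact - reused))              # equals |eps . w|, close to 0
\end{verbatim}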
To further motivate the readers, we analyzed the VGG13 network <|cite_start|> (Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.) <|cite_end|>with ten convolution layers. We counted what fraction of input and gradient vectors have similarities in the convolution layers. The similarity is detected using a well-established technique called Random Projection with Quantization (RPQ) <|cite_start|> (Reference: Random projection in dimensionality reduction: applications to image and text data: Random projections have recently emerged as a powerful method for dimensionality reduction. Theoretical results indicate that the method preserves distances quite nicely; however, empirical results are sparse. We present experimental results on using random projection as a dimensionality reduction tool in a number of cases, where the high dimensionality of the data would otherwise lead to burden-some computations. Our application areas are the processing of both noisy and noiseless images, and information retrieval in text documents. We show that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis: the similarity of data vectors is preserved well under random projection. However, using random projections is computationally significantly less expensive than using, e.g., principal component analysis. We also show experimentally that using a sparse random matrix gives additional computational savings in random projection.) <|cite_end|>(more details in $\S$~\ref{sec-rpq}). The similarity in input vectors leads to computation reuse in the forward propagation, while that of gradient vectors leads to computation reuse in the backward propagation. Figure~\ref{sim-in-grad} shows that
VGG13 has up to $75\%$ similarity among input vectors and
up to $67\%$ similarity among gradient vectors. By capitalizing on these similarities, \scheme\ speeds up the VGG13 training by $1.89\times$ compared to the baseline ($\S$~\ref{ovr-analysis}).
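For concreteness, the sketch below shows one simple way an RPQ signature could be formed with a sign-based quantizer (a SimHash-style variant); the projection width and threshold here are illustrative assumptions, and the exact RPQ configuration used by \scheme\ is described in $\S$~\ref{sec-rpq}.
\begin{verbatim}
import numpy as np

rng  = np.random.default_rng(0)
d, k = 9, 8                                # vector length, signature bits (assumed)
R    = rng.standard_normal((k, d))         # fixed random projection matrix

def rpq_signature(v):
    # Project v onto k random directions, then quantize each
    # projected coordinate to a single bit using its sign.
    return tuple((R @ v > 0).astype(np.uint8))

v1 = rng.standard_normal(d)
v2 = v1 + 0.01 * rng.standard_normal(d)    # a slightly perturbed, similar vector
print(rpq_signature(v1) == rpq_signature(v2))  # similar vectors usually match
\end{verbatim}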
\begin{figure}[h]
\centering
\begin{subfigure}{0.4\columnwidth}
\centering
\vspace{-0.35cm}
\includegraphics[width=\columnwidth]{./FIGS/case-input-sim.pdf}
\vspace{-0.6cm}
\caption{Input vector.}
\label{case-input-sim}
\end{subfigure}
\begin{subfigure}{0.4\columnwidth}
\centering
\vspace{-0.35cm}
\includegraphics[width=\columnwidth]{./FIGS/case-gradient-sim.pdf}
\vspace{-0.6cm}
\caption{Gradient vector}
\label{case-gradient-sim}
\end{subfigure}
\vspace{-0.05cm}
\caption{Similarity among input and gradient vectors of VGG13.}
\vspace{-0.4cm}
\label{sim-in-grad}
\end{figure}
\subsection{State of the Art}
\label{sec-limit}
When it comes to DNN inference acceleration, the two dominant techniques are sparsity exploitation <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|> <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|> <|cite_start|> (Reference: Bit-tactical: A software/hardware approach to exploiting value and bit sparsity in neural networks: Weight and activation sparsity can be leveraged in hardware to boost the performance and energy efficiency of Deep Neural Networks during inference. Fully capitalizing on sparsity requires re-scheduling and mapping the execution stream to deliver non-zero weight/activation pairs to multiplier units for maximal utilization and reuse. However, permitting arbitrary value re-scheduling in memory space and in time places a considerable burden on hardware to perform dynamic at-runtime routing and matching of values, and incurs significant energy inefficiencies. Bit-Tactical (TCL) is a neural network accelerator where the responsibility for exploiting weight sparsity is shared between a novel static scheduling middleware, and a co-designed hardware front-end with a lightweight sparse shuffling network comprising two (2- to 8-input) multiplexers per activation input. We empirically motivate two back-end designs chosen to target bit-sparsity in activations, rather than value-sparsity, with two benefits: a) we avoid handling the dynamically sparse whole-value activation stream, and b) we uncover more ineffectual work. TCL outperforms other state-of-the-art accelerators that target sparsity for weights and activations, the dynamic precision requirements of activations, or their bit-level sparsity for a variety of neural networks.) <|cite_end|> <|cite_start|> (Reference: Sparten: A sparse tensor accelerator for convolutional neural networks: Convolutional neural networks (CNNs) are emerging as powerful tools for image processing. Recent machine learning work has reduced CNNs' compute and data volumes by exploiting the naturally-occurring and actively-transformed zeros in the feature maps and filters. While previous semi-sparse architectures exploit one-sided sparsity either in the feature maps or the filters, but not both, a recent fully-sparse architecture, called Sparse CNN (SCNN), exploits two-sided sparsity to improve performance and energy over dense architectures. However, sparse vector-vector dot product, a key primitive in sparse CNNs, would be inefficient using the representation adopted by SCNN. The dot product requires finding and accessing non-zero elements in matching positions in the two sparse vectors -- an inner join using the position as the key with a single value field. SCNN avoids the inner join by performing a Cartesian product capturing the relevant multiplications. However, SCNN's approach incurs several considerable overheads and is not applicable to non-unit-stride convolutions. Further, exploiting reuse in sparse CNNs fundamentally causes systematic load imbalance not addressed by SCNN. We propose SparTen which achieves efficient inner join by providing support for native two-sided sparse execution and memory storage. To tackle load imbalance, SparTen employs a software scheme, called greedy balancing, which groups filters by density via two variants, a software-only one which uses whole-filter density and a software-hardware hybrid which uses finer-grain density. 
Our simulations show that, on average, SparTen performs 4.7x, 1.8x, and 3x better than a dense architecture, one-sided sparse architecture, and SCNN, respectively. An FPGA implementation shows that SparTen performs 4.3x and 1.9x better than a dense architecture and a one-sided sparse architecture, respectively.) <|cite_end|>and computational reuse <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|> <|cite_start|> (Reference: SumMerge: An Efficient Algorithm and Implementation for Weight Repetition-Aware DNN Inference: Deep Neural Network (DNN) inference efficiency is a key concern across the myriad of domains now relying on Deep Learning. A recent promising direction to speed-up inference is to exploit \emph{weight repetition}. The key observation is that due to DNN quantization schemes---which attempt to reduce DNN storage requirements by reducing the number of bits needed to represent each weight---the same weight is bound to repeat many times within and across filters. This enables a weight-repetition aware inference kernel to factorize and memoize out common sub-computations, reducing arithmetic per inference while still maintaining the compression benefits of quantization. Yet, significant challenges remain. For instance, weight repetition introduces significant irregularity in the inference operation and hence (up to this point) has required custom hardware accelerators to derive net benefit. This paper proposes SumMerge: a new algorithm and set of implementation techniques to make weight repetition practical on general-purpose devices such as CPUs. The key idea is to formulate inference as traversing a sequence of data-flow graphs \emph{with weight-dependent structure}. We develop an offline heuristic to select a data-flow graph structure that minimizes arithmetic operations per inference (given trained weight values) and use an efficient online procedure to traverse each data-flow graph and compute the inference result given DNN inputs. We implement the above as an optimized C++ routine that runs on a commercial multicore processor with vector extensions and evaluate performance relative to Intel's optimized library oneDNN and the prior-art weight repetition algorithm (AGR). When applied on top of six different quantization schemes, SumMerge achieves a speedup of between 1.09x-2.05x and 1.04x-1.51x relative to oneDNN and AGR, respectively, while simultaneously compressing the DNN model by 8.7x to 15.4x.) <|cite_end|> <|cite_start|> (Reference: International Conference on Learning Representations (ICLR): ) <|cite_end|> <|cite_start|> (Reference: Trained Ternary Quantization: Deep neural networks are widely used in machine learning applications. However, the deployment of large neural networks models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet model is trained from scratch, which means it's as easy as to train normal full precision model. We highlight our trained quantization method that can learn both ternary values and ternary assignment. 
During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16x smaller than full-precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuit. Experiments on CIFAR-10 show that the ternary models obtained by trained quantization method outperform full-precision models of ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and outperforms previous ternary models by 3%.) <|cite_end|> <|cite_start|> (Reference: 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018, Fukuoka, Japan, October 20-24, 2018: ) <|cite_end|> <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|>. Unfortunately, applying these techniques directly during training has never been easy <|cite_start|> (Reference: 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO): ) <|cite_end|> <|cite_start|> (Reference: Dynamic Sparse Graph for Efficient Deep Learning: We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference. The great success of DNNs motivates the pursuing of lightweight models for the deployment onto embedded devices. However, most of the previous studies optimize for inference while neglect training or even complicate it. Training is far more intractable, since (i) the neurons dominate the memory cost rather than the weights in inference; (ii) the dynamic activation makes previous sparse acceleration via one-off optimization on fixed weight invalid; (iii) batch normalization (BN) is critical for maintaining accuracy while its activation reorganization damages the sparsity. To address these issues, DSG activates only a small amount of neurons with high selectivity at each iteration via a dimension-reduction search (DRS) and obtains the BN compatibility via a double-mask selection (DMS). Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.) <|cite_end|>. Thus, most of the efforts for reducing DNN training time focus on alternative approaches, such as distributed training <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>, data compression <|cite_start|> (Reference: Echo: Compiler-based GPU Memory Footprint Reduction for LSTM RNN Training: The Long-Short-Term-Memory Recurrent Neural Networks (LSTM RNNs) are a popular class of machine learning models for analyzing sequential data. Their training on modern GPUs, however, is limited by the GPU memory capacity. Our profiling results of the LSTM RNN-based Neural Machine Translation (NMT) model reveal that feature maps of the attention and RNN layers form the memory bottleneck and runtime is unevenly distributed across different layers when training on GPUs. Based on these two observations, we propose to recompute the feature maps rather than stashing them persistently in the GPU memory. While the idea of feature map recomputation has been considered before, existing solutions fail to deliver satisfactory footprint reduction, as they do not address two key challenges. 
For each feature map recomputation to be effective and efficient, its effect on (1) the total memory footprint, and (2) the total execution time has to be carefully estimated. To this end, we propose *Echo*, a new compiler-based optimization scheme that addresses the first challenge with a practical mechanism that estimates the memory benefits of recomputation over the entire computation graph, and the second challenge by non-conservatively estimating the recomputation overhead leveraging layer specifics. *Echo* reduces the GPU memory footprint automatically and transparently without any changes required to the training source code, and is effective for models beyond LSTM RNNs. We evaluate *Echo* on numerous state-of-the-art machine learning workloads on real systems with modern GPUs and observe footprint reduction ratios of 1.89X on average and 3.13X maximum. Such reduction can be converted into faster training with a larger batch size, savings in GPU energy consumption (e.g., training with one GPU as fast as with four), and/or an increase in the maximum number of layers under the same GPU memory budget.) <|cite_end|> <|cite_start|> (Reference: Gist: Efficient Data Encoding for Deep Neural Network Training: Modern deep neural networks (DNNs) training typically relies on GPUs to train complex hundred-layer deep networks. A significant problem facing both researchers and industry practitioners is that, as the networks get deeper, the available GPU main memory becomes a primary bottleneck, limiting the size of networks it can train. In this paper, we investigate widely used DNNs and find that the major contributors to memory footprint are intermediate layer outputs (feature maps). We then introduce a framework for DNN-layer-specific optimizations (e.g., convolution, ReLU, pool) that significantly reduce this source of main memory pressure on GPUs. We find that a feature map typically has two uses that are spread far apart temporally. Our key approach is to store an encoded representation of feature maps for this temporal gap and decode this data for use in the backward pass; the full-fidelity feature maps are used in the forward pass and relinquished immediately. Based on this approach, we present Gist, our system that employs two classes of layer-specific encoding schemes – lossless and lossy – to exploit existing value redundancy in DNN training to significantly reduce the memory consumption of targeted feature maps. For example, one insight is by taking advantage of the computational nature of back propagation from pool to ReLU layer, we can store the intermediate feature map using just 1 bit instead of 32 bits per value. We deploy these mechanisms in a state-of-the-art DNN framework (CNTK) and observe that Gist reduces the memory footprint to upto 2x across 5 state-of-the-art image classification DNNs, with an average of 1.8x with only 4% performance overhead. We also show that further software (e.g., CuDNN) and hardware (e.g., dynamic allocation) optimizations can result in even larger footprint reduction (upto 4.1x).) <|cite_end|> <|cite_start|> (Reference: JPEG-ACT: Accelerating Deep Learning via Transform-Based Lossy Compression: A reduction in the time it takes to train machine learning models can be translated into improvements in accuracy. An important factor that increases training time in deep neural networks (DNNs) is the need to store large amounts of temporary data during the back-propagation algorithm. 
To enable training very large models this temporary data can be offloaded from limited size GPU memory to CPU memory but this data movement incurs large performance overheads.We observe that in one important class of DNNs, convolutional neural networks (CNNs), there is spatial correlation in these temporary values. We propose JPEG for ACTivations (JPEGACT), a lossy activation offload accelerator for training CNNs that works by discarding redundant spatial information. JPEGACT adapts the well-known JPEG algorithm from 2D image compression to activation compression. We show how to optimize the JPEG algorithm so as to ensure convergence and maintain accuracy during training. JPEG-ACT achieves $2.4\times$ higher training performance compared to prior offload accelerators, and $1.6\times$ compared to prior activation compression methods. An efficient hardware implementation allows JPEG-ACT to consume less than 1% of the power and area of a modern GPU.) <|cite_end|>, low precision training <|cite_start|> (Reference: 会議報告「The Thirty-first Annual Conference on Neural Information Processing Systems(NIPS 2017)」: 1.NIPS概要 NIPSは米国ユタ州で 1986年に行われた Snowbird ワークショップを契機に 1987年より開催されている機 械学習に関する国際会議で,この分野では International Conference on Machine Learning(ICML)と並び最難 関会議と位置付けられている.機械学習はデータからの 学習可能性を探る分野であり,アルゴリズムや手法開発 に主眼が置かれる.データマイニング,自然言語処理, 画像認識,音響処理などの関連分野との関わりが深い. また,データからの学習というパラダイムは多くの産業 分野に影響を与え得ると考えられており,近年大きな注 目が集まっている. 1987~ 2013年までの NIPSはスキーリゾートで開 催されてきたが,近年は機械学習分野の隆盛と大規模化 に伴い,2014年以降はより都市部の大規模展示場で開 催されるようになった.NIPSの開催されてきた期間は 現在の機械学習手法の基盤である統計的機械学習が確立 した期間であり,NIPSの歴史はそのまま機械学習の歩 んできた歴史といってもよいであろう. NIPS 2017は米国カリフォルニア州ロングビーチで 12月 4~ 9日まで開催され,初日がチュートリアル,2 ~ 4日目が本会議,5~ 6日目がワークショップであっ た.ロングビーチはロサンゼルス近くに位置し,12月 にもかかわらず初夏のような天気であった.今年の参加 登録者は 8 000人近くに達し,2013年から見て 4倍の 増加になった.投稿・採択論文数も 3 240件・679件と なり,過去最大規模での開催となった.NIPSは 84社, 合計 1,760,000ドルにのぼるスポンサー収入を得てお り,大規模なスポンサーブースによる展示,リクルート 活動が行われていた.NIPSを含む機械学習の多くの学 会はダブルブラインド(投稿者,査読者の名前が査読中 明らかにならない)であるが,ここ 1~ 2年は学会締切 とともに arxiv.orgなどのプレプリントサーバに論文を 投稿するという動きが活発化しており,ダブルブライン ド性の基盤が危うくなっている.また,大規模ニューラ ルネットワークを用いた深層学習(ディープラーニング) の研究がここ数年急増しているが,これは 2010年頃に 画像認識,音声認識などのパターン認識分野でニューラ ルネットによるイノベーションがあったことが背景であ る.ニューラルネットはカーネルマシン(サポートベク タマシンなど)と異なり,モデルの学習についてまだわ からないことが多い.特に,ニューラルネットがなぜ実 データをうまく学習できるかのメカニズムの解明は近年 最もホットなトピックの一つである.その他,特記す べき事項としては,ゲーム AIでの成功で注目される米 DeepMind社など,企業研究者の存在感が去年に増して 大きかった点もあげられる. 学会発表の動画は) <|cite_end|> <|cite_start|> (Reference: Mixed Precision Training of Convolutional Neural Networks using Integer Operations: The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular, FP16 accumulating into FP32 Micikevicius et al. (2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output.We propose a shared exponent representation of tensors and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. 
The nuances of developing an efficient integer convolution kernel is examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for ImageNet-1K dataset using SOTA CNNs and achieve highest reported accuracy using half-precision) <|cite_end|> <|cite_start|> (Reference: Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks: Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.) <|cite_end|>. Recently, there is some work about exploiting sparsity in DNN training <|cite_start|> (Reference: 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO): ) <|cite_end|> <|cite_start|> (Reference: Dynamic Sparse Graph for Efficient Deep Learning: We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference. The great success of DNNs motivates the pursuing of lightweight models for the deployment onto embedded devices. However, most of the previous studies optimize for inference while neglect training or even complicate it. Training is far more intractable, since (i) the neurons dominate the memory cost rather than the weights in inference; (ii) the dynamic activation makes previous sparse acceleration via one-off optimization on fixed weight invalid; (iii) batch normalization (BN) is critical for maintaining accuracy while its activation reorganization damages the sparsity. To address these issues, DSG activates only a small amount of neurons with high selectivity at each iteration via a dimension-reduction search (DRS) and obtains the BN compatibility via a double-mask selection (DMS). Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.) <|cite_end|> <|cite_start|> (Reference: 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO): ) <|cite_end|>. 
Yet the challenges of reusing similar computational data due to input similarity during training are not well-studied. Most of the current work in this category is either software based <|cite_start|> (Reference: 2019 IEEE 35th International Conference on Data Engineering (ICDE): ) <|cite_end|> <|cite_start|> (Reference: Deep reuse: streamline CNN inference on the fly via coarse-grained computation reuse: This paper presents deep reuse, a method for speeding up CNN inferences by detecting and exploiting deep reusable computations on the fly. It empirically reveals the massive similarities among neuron vectors in activation maps, both within CNN inferences on an input and across inputs. It gives an in-depth study on how to effectively turn the similarities into beneficial computation reuse to speed up CNN inferences. The investigation covers various factors, ranging from the clustering methods for similarity detection, to clustering scopes, similarity metrics, and neuron vector granularities. The insights help create deep reuse. As an on-line method, deep reuse is easy to apply, and adapts to each CNN (compressed or not) and its input. Using no special hardware support or CNN model changes, this method speeds up inferences by 1.77--2X (up to 4.3X layer-wise) on the fly with virtually no () <|cite_end|>or limited to
inference only <|cite_start|> (Reference: Deep reuse: streamline CNN inference on the fly via coarse-grained computation reuse: This paper presents deep reuse, a method for speeding up CNN inferences by detecting and exploiting deep reusable computations on the fly. It empirically reveals the massive similarities among neuron vectors in activation maps, both within CNN inferences on an input and across inputs. It gives an in-depth study on how to effectively turn the similarities into beneficial computation reuse to speed up CNN inferences. The investigation covers various factors, ranging from the clustering methods for similarity detection, to clustering scopes, similarity metrics, and neuron vector granularities. The insights help create deep reuse. As an on-line method, deep reuse is easy to apply, and adapts to each CNN (compressed or not) and its input. Using no special hardware support or CNN model changes, this method speeds up inferences by 1.77--2X (up to 4.3X layer-wise) on the fly with virtually no () <|cite_end|> <|cite_start|> (Reference: Energy Efficient Boosting of GEMM Accelerators for DNN via Reuse: Reuse-centric convolutional neural networks (CNN) acceleration speeds up CNN inference by reusing computations for similar neuron vectors in CNN’s input layer or activation maps. This new paradigm of optimizations is, however, largely limited by the overheads in neuron vector similarity detection, an important step in reuse-centric CNN. This article presents an in-depth exploration of architectural support for reuse-centric CNN. It addresses some major limitations of the state-of-the-art design and proposes a novel hardware accelerator that improves neuron vector similarity detection and reduces the energy consumption of reuse-centric CNN inference. The accelerator is implemented to support a wide variety of neural network settings with a banked memory subsystem. Design exploration is performed through RTL simulation and synthesis on an FPGA platform. When integrated into Eyeriss, the accelerator can potentially provide improvements up to 7.75 \( \times \) in performance. Furthermore, it can reduce the energy used for similarity detection up to 95.46%, and it can accelerate the convolutional layer up to 3.63 \( \times \) compared to the software-based implementation running on the CPU.) <|cite_end|> <|cite_start|> (Reference: 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018, Fukuoka, Japan, October 20-24, 2018: ) <|cite_end|> <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|>. Extending them for training in an accelerator is difficult due to two major challenges.
\begin{compactitem}
\item {\em Similarity Detection:} Detecting similarity among inputs requires extra computations and hardware. Therefore, keeping this detection overhead low while reusing the existing hardware as much as possible is a major challenge in exploiting input similarity.
\item {\em Dataflow Modification:} When two inputs are similar, computations for one input can be reused for the other. This creates an irregularity in the dataflow of an accelerator. Changing the dataflow or introducing a new one would negate the benefits of the accelerator's original dataflow. Thus, handling the irregularity in computations while maintaining the original accelerator dataflow is a key objective when adopting input similarity.
\end{compactitem}
\subsection{Proposed Approach}
\label{sec-proposed-app}
We propose a novel scheme, called \scheme, to exploit input similarity during the training phase in a DNN accelerator.
\scheme\ uses RPQ <|cite_start|> (Reference: Random projection in dimensionality reduction: applications to image and text data: Random projections have recently emerged as a powerful method for dimensionality reduction. Theoretical results indicate that the method preserves distances quite nicely; however, empirical results are sparse. We present experimental results on using random projection as a dimensionality reduction tool in a number of cases, where the high dimensionality of the data would otherwise lead to burden-some computations. Our application areas are the processing of both noisy and noiseless images, and information retrieval in text documents. We show that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis: the similarity of data vectors is preserved well under random projection. However, using random projections is computationally significantly less expensive than using, e.g., principal component analysis. We also show experimentally that using a sparse random matrix gives additional computational savings in random projection.) <|cite_end|>in hardware to detect similarity among input vectors.
We show a formulation of RPQ where it follows the same computation pattern as a convolution operation. Therefore, \scheme\ reuses the existing hardware Processing Elements (PEs) to perform RPQ. \scheme\ uses RPQ to convert an input vector into a bit-sequence, called {\em Signature}.
\scheme\ calculates one signature for each input vector.
If two input vectors produce the same signature, they are highly similar.
During a DNN operation between a weight and an input vector, the input vector's signature is used to access a special cache, called \scache. \scache\ derives its indices and tags from signatures and stores previously computed results as data. If there is a hit in \scache, the computation is skipped and the result stored in the data portion of the cache entry is reused. On the other hand, if there is a miss, the computation proceeds and its result is stored into \scache. Input similarity introduces irregularity into the original computation pattern of a DNN accelerator by skipping some computations. \scheme\ adds a bitmap (called {\em Hitmap}) and some shared structures to keep the dataflow and computations continuous and uninterrupted. The signatures produced during forward propagation are stored in memory and reused during the backward propagation of the training phase. Moreover, \scheme\ dynamically decides when and to what extent input similarity should be exploited based on its impact on performance and accuracy.
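To make the hit/miss flow concrete, below is a minimal software-level sketch of the reuse check described above. The names (\texttt{SCache}, \texttt{dot\_or\_reuse}) and the keying of entries by a weight identifier are illustrative assumptions, not the actual hardware design:
\begin{verbatim}
import numpy as np

class SCache:
    """Toy signature-indexed cache: (weight id, signature bytes) is the key,
    a previously computed result is the data."""
    def __init__(self):
        self.entries = {}

    def lookup(self, weight_id, signature):
        return self.entries.get((weight_id, signature.tobytes()))

    def insert(self, weight_id, signature, result):
        self.entries[(weight_id, signature.tobytes())] = result

def dot_or_reuse(weight_id, weight, x, signature, cache):
    cached = cache.lookup(weight_id, signature)
    if cached is not None:             # hit: skip the computation entirely
        return cached
    result = float(weight @ x)         # miss: perform the DNN operation
    cache.insert(weight_id, signature, result)
    return result

cache = SCache()
w = np.array([1.0, 2.0, 3.0])
x = np.array([0.5, -1.0, 2.0])
sig = np.array([0, 1, 0], dtype=np.uint8)  # signature computed elsewhere (e.g., via RPQ)
r1 = dot_or_reuse(0, w, x, sig, cache)     # miss: computes 4.5 and caches it
r2 = dot_or_reuse(0, w, x, sig, cache)     # hit: reuses the cached 4.5
assert r1 == r2
\end{verbatim}
In the actual accelerator, the index and tag would be derived from the signature bits and the skip decision recorded in the Hitmap; the sketch only captures the skip-on-hit behavior.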
In summary, we make the following contributions:
\begin{compactenum}
\item \scheme\ is the {\em first} accelerator to exploit input similarity using RPQ for improving training performance.
We propose to adapt \scheme\ dynamically based on accuracy and performance impact.
\item We propose to use RPQ in hardware to detect similarity among input vectors dynamically. We show a novel formulation of RPQ where it follows the same computation pattern as a convolution operation. Therefore, \scheme\ can calculate RPQ-based signatures using the same hardware PEs and dataflow used for DNN operations. We show how signature calculation can be further pipelined.
\item Input similarity causes irregularity in the original computation pattern of an accelerator due to the reuse of computations. We propose to add a cache, \scache, along with a bitmap ({\em Hitmap}) and some shared structures to make the dataflow and computations continuous and regular.
\item We implemented \scheme\ in Virtex 7 FPGA board <|cite_start|> (Reference: Low power and area SHA-256 hardware accelerator on Virtex-7 FPGA: Lately, there have been many technological developments in communication especially in online transactions, so the demand for highly secure systems and cryptographic algorithms has increased. Cryptographic hash functions are used to protect and authenticate information and transactions. SHA-256 (Secure Hash Algorithm-256) is a one-way hash function characterized by being highly secure and fast while having a high collision resistance. This paper presents a new hardware architecture of SHA-256 with low power consumption and area based on a sequential computation of the message scheduler and the working variables of SHA-256. The hardware was described in HDL and implemented on Virtex-7 FPGA which offers high efficiency and speed. Different optimization techniques were used to further reduce the power and area such as gated clock conversion, arithmetic resource sharing, and structural modeling of small building blocks. The proposed design ran with a maximum frequency of 83.33 MHz. The implementation reports indicated a dynamic power consumption of 13 mW and area utilization of 275 slices while maintaining a good throughput of 0.637 Gbits/s and a relatively high efficiency of 2.32 Mbits/s per slice. Such design with low power and area can be used to hash messages on a portable device opening a whole new area for different applications and opportunities.) <|cite_end|>. We showed a scalable implementation of \scache\ to meet the demand of the \scheme. We evaluated \scheme\ using twelve DNN models (including a transformer model) with three different dataflows and achieved an average speedup of $1.97\times$ with an accuracy similar to the baseline system.
\end{compactenum}
As an example, Cnvlutin tries to detect zero inputs to reduce unnecessary multiplications during inference; thus, it is placed in the Inference/Input/Single Element category. Deep Compression <|cite_start|> (Reference: International Conference on Learning Representations (ICLR): ) <|cite_end|> quantizes model weights to reduce the inference computational cost; therefore, it is categorized as an Inference/Filter/Multiple Elements technique.
There are techniques that belong to multiple categories. For instance, SCNN exploits the sparsity in both weights and activations, therefore it falls into both Input and Filter classes.
\scheme\ falls under the category Training/Input/Multiple Elements. Although the work of Ning et al. <|cite_start|> (Reference: 2019 IEEE 35th International Conference on Data Engineering (ICDE): ) <|cite_end|> is in the same category, its scope is limited to software
as opposed to a hardware accelerator (more on this in Section~\ref{sec-comp-reuse}).
\begin{table}[h!]
\centering
\scalebox{0.8}{
\begin{tabular}{|| c | c | l | c ||}
\hline\hline
Time & Data & Granularity & Examples \\ [0.5ex]
\hline\hline
Inference & Input & Single Element only &*\\
& & Multiple Elements & <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|> <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|> <|cite_start|> (Reference: Deep reuse: streamline CNN inference on the fly via coarse-grained computation reuse: This paper presents deep reuse, a method for speeding up CNN inferences by detecting and exploiting deep reusable computations on the fly. It empirically reveals the massive similarities among neuron vectors in activation maps, both within CNN inferences on an input and across inputs. It gives an in-depth study on how to effectively turn the similarities into beneficial computation reuse to speed up CNN inferences. The investigation covers various factors, ranging from the clustering methods for similarity detection, to clustering scopes, similarity metrics, and neuron vector granularities. The insights help create deep reuse. As an on-line method, deep reuse is easy to apply, and adapts to each CNN (compressed or not) and its input. Using no special hardware support or CNN model changes, this method speeds up inferences by 1.77--2X (up to 4.3X layer-wise) on the fly with virtually no () <|cite_end|>* <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|>\\[1ex]
& Filter & Single Element only &* <|cite_start|> (Reference: 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO): ) <|cite_end|>* \\
& & Multiple Elements & <|cite_start|> (Reference: 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): ) <|cite_end|> <|cite_start|> (Reference: International Conference on Learning Representations (ICLR): ) <|cite_end|> <|cite_start|> (Reference: 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO): ) <|cite_end|>* \\[1ex]
\hline
Training & Input & Single Element only & None \\
& & Multiple Elements & {\bf \scheme}, <|cite_start|> (Reference: 2019 IEEE 35th International Conference on Data Engineering (ICDE): ) <|cite_end|>\\[1ex]
& Filter & Single Element only & <|cite_start|> (Reference: 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO): ) <|cite_end|>* \\
& & Multiple Elements & <|cite_start|> (Reference: 会議報告「The Thirty-first Annual Conference on Neural Information Processing Systems(NIPS 2017)」: 1.NIPS概要 NIPSは米国ユタ州で 1986年に行われた Snowbird ワークショップを契機に 1987年より開催されている機 械学習に関する国際会議で,この分野では International Conference on Machine Learning(ICML)と並び最難 関会議と位置付けられている.機械学習はデータからの 学習可能性を探る分野であり,アルゴリズムや手法開発 に主眼が置かれる.データマイニング,自然言語処理, 画像認識,音響処理などの関連分野との関わりが深い. また,データからの学習というパラダイムは多くの産業 分野に影響を与え得ると考えられており,近年大きな注 目が集まっている. 1987~ 2013年までの NIPSはスキーリゾートで開 催されてきたが,近年は機械学習分野の隆盛と大規模化 に伴い,2014年以降はより都市部の大規模展示場で開 催されるようになった.NIPSの開催されてきた期間は 現在の機械学習手法の基盤である統計的機械学習が確立 した期間であり,NIPSの歴史はそのまま機械学習の歩 んできた歴史といってもよいであろう. NIPS 2017は米国カリフォルニア州ロングビーチで 12月 4~ 9日まで開催され,初日がチュートリアル,2 ~ 4日目が本会議,5~ 6日目がワークショップであっ た.ロングビーチはロサンゼルス近くに位置し,12月 にもかかわらず初夏のような天気であった.今年の参加 登録者は 8 000人近くに達し,2013年から見て 4倍の 増加になった.投稿・採択論文数も 3 240件・679件と なり,過去最大規模での開催となった.NIPSは 84社, 合計 1,760,000ドルにのぼるスポンサー収入を得てお り,大規模なスポンサーブースによる展示,リクルート 活動が行われていた.NIPSを含む機械学習の多くの学 会はダブルブラインド(投稿者,査読者の名前が査読中 明らかにならない)であるが,ここ 1~ 2年は学会締切 とともに arxiv.orgなどのプレプリントサーバに論文を 投稿するという動きが活発化しており,ダブルブライン ド性の基盤が危うくなっている.また,大規模ニューラ ルネットワークを用いた深層学習(ディープラーニング) の研究がここ数年急増しているが,これは 2010年頃に 画像認識,音声認識などのパターン認識分野でニューラ ルネットによるイノベーションがあったことが背景であ る.ニューラルネットはカーネルマシン(サポートベク タマシンなど)と異なり,モデルの学習についてまだわ からないことが多い.特に,ニューラルネットがなぜ実 データをうまく学習できるかのメカニズムの解明は近年 最もホットなトピックの一つである.その他,特記す べき事項としては,ゲーム AIでの成功で注目される米 DeepMind社など,企業研究者の存在感が去年に増して 大きかった点もあげられる. 学会発表の動画は) <|cite_end|> <|cite_start|> (Reference: 会議報告「The Thirty-first Annual Conference on Neural Information Processing Systems(NIPS 2017)」: 1.NIPS概要 NIPSは米国ユタ州で 1986年に行われた Snowbird ワークショップを契機に 1987年より開催されている機 械学習に関する国際会議で,この分野では International Conference on Machine Learning(ICML)と並び最難 関会議と位置付けられている.機械学習はデータからの 学習可能性を探る分野であり,アルゴリズムや手法開発 に主眼が置かれる.データマイニング,自然言語処理, 画像認識,音響処理などの関連分野との関わりが深い. また,データからの学習というパラダイムは多くの産業 分野に影響を与え得ると考えられており,近年大きな注 目が集まっている. 1987~ 2013年までの NIPSはスキーリゾートで開 催されてきたが,近年は機械学習分野の隆盛と大規模化 に伴い,2014年以降はより都市部の大規模展示場で開 催されるようになった.NIPSの開催されてきた期間は 現在の機械学習手法の基盤である統計的機械学習が確立 した期間であり,NIPSの歴史はそのまま機械学習の歩 んできた歴史といってもよいであろう. NIPS 2017は米国カリフォルニア州ロングビーチで 12月 4~ 9日まで開催され,初日がチュートリアル,2 ~ 4日目が本会議,5~ 6日目がワークショップであっ た.ロングビーチはロサンゼルス近くに位置し,12月 にもかかわらず初夏のような天気であった.今年の参加 登録者は 8 000人近くに達し,2013年から見て 4倍の 増加になった.投稿・採択論文数も 3 240件・679件と なり,過去最大規模での開催となった.NIPSは 84社, 合計 1,760,000ドルにのぼるスポンサー収入を得てお り,大規模なスポンサーブースによる展示,リクルート 活動が行われていた.NIPSを含む機械学習の多くの学 会はダブルブラインド(投稿者,査読者の名前が査読中 明らかにならない)であるが,ここ 1~ 2年は学会締切 とともに arxiv.orgなどのプレプリントサーバに論文を 投稿するという動きが活発化しており,ダブルブライン ド性の基盤が危うくなっている.また,大規模ニューラ ルネットワークを用いた深層学習(ディープラーニング) の研究がここ数年急増しているが,これは 2010年頃に 画像認識,音声認識などのパターン認識分野でニューラ ルネットによるイノベーションがあったことが背景であ る.ニューラルネットはカーネルマシン(サポートベク タマシンなど)と異なり,モデルの学習についてまだわ からないことが多い.特に,ニューラルネットがなぜ実 データをうまく学習できるかのメカニズムの解明は近年 最もホットなトピックの一つである.その他,特記す べき事項としては,ゲーム AIでの成功で注目される米 DeepMind社など,企業研究者の存在感が去年に増して 大きかった点もあげられる. 学会発表の動画は) <|cite_end|> <|cite_start|> (Reference: International Conference on Learning Representations (ICLR): ) <|cite_end|>\\[1ex]
\hline\hline
\end{tabular}}
\caption{Categories of computation optimization techniques. Any work with * belongs to multiple categories.}
\label{table-taxonomy}
\vspace{-0.6cm}
\end{table}
\section{Related Work}
\label{sec-back}
\subsection{Random Projection with Quantization (RPQ)}
\label{sec-rpq}
Random Projection <|cite_start|> (Reference: Random projection in dimensionality reduction: applications to image and text data: Random projections have recently emerged as a powerful method for dimensionality reduction. Theoretical results indicate that the method preserves distances quite nicely; however, empirical results are sparse. We present experimental results on using random projection as a dimensionality reduction tool in a number of cases, where the high dimensionality of the data would otherwise lead to burden-some computations. Our application areas are the processing of both noisy and noiseless images, and information retrieval in text documents. We show that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis: the similarity of data vectors is preserved well under random projection. However, using random projections is computationally significantly less expensive than using, e.g., principal component analysis. We also show experimentally that using a sparse random matrix gives additional computational savings in random projection.) <|cite_end|>is a dimensionality reduction technique often used in similarity estimation of high-dimensional data (such as image and text). Given a vector, $\mathbf{X}$ of size $1\times m$, random projection works by multiplying $\mathbf{X}$ with a random matrix $\mathbf{R}$ of size $m\times n$. The elements of $\mathbf{R}$ are randomly populated
from a normal distribution, whose mean is 0 and variance is 1. The multiplication produces a projected vector $\mathbf{X}_p$ of size $1\times n$. Thus, random projection converts one vector
to another with a different dimension (often a lower one).
Random projection ensures that if two vectors are close (similar) in their original dimension, their projected vectors will also be close (with a Euclidean distance scaled accordingly) in the newer dimension. Elements of $\mathbf{X}_p$ can be quantized further. One such quantization approach is sign-based. So, if an element of $\mathbf{X}_p$ has a sign bit equal to 0, it is quantized to 0. Otherwise, it is quantized to 1. Thus, RPQ converts $\mathbf{X}$ into a bit sequence, called signature. Figure~\ref{fig-signature-cal} shows an example of how RPQ converts a vector into a signature. If RPQ converts two vectors, $\mathbf{X}_1$ and $\mathbf{X}_2$, into the same signature, their Euclidean distance in the new dimension is 0. Therefore, their distance in the original dimension is $\approx 0$. So, $\mathbf{X}_1\approx\mathbf{X}_2$.
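As a quick illustration, the following standalone sketch computes such a signature in software; the dimensions, random seed, and perturbation size are arbitrary example values, and this is not the paper's hardware implementation:
\begin{verbatim}
import numpy as np

def rpq_signature(x, R):
    """Project x (length m) with random matrix R (m x n), then quantize each
    projected element by its sign: non-negative -> 0, negative -> 1."""
    xp = x @ R
    return (xp < 0).astype(np.uint8)

rng = np.random.default_rng(0)
m, n = 10, 8                              # original and projected dimensions
R = rng.standard_normal((m, n))           # entries drawn from N(0, 1)

x1 = rng.standard_normal(m)
x2 = x1 + 1e-3 * rng.standard_normal(m)   # a vector very similar to x1

sig1 = rpq_signature(x1, R)
sig2 = rpq_signature(x2, R)
print(np.array_equal(sig1, sig2))         # similar vectors usually share a signature
\end{verbatim}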
\begin{figure}[htpb]
\centering
\vspace{-0.45cm}
\includegraphics[width=\columnwidth]{./FIGS/signature-3.pdf}
\vspace{-0.5cm}
\caption{An example of how RPQ converts a vector $\mathbf{X}$ into a projected vector $\mathbf{X}_p$ and eventually, a signature.}
\label{fig-signature-cal}
\vspace{-0.25cm}
\end{figure}
RPQ has been used in many domains such as learning <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>, compression <|cite_start|> (Reference: {Similarity Estimation Techniques from Rounding Algorithms: (MATH) A locality sensitive hashing scheme is a distribution on a family $\F$ of hash functions operating on a collection of objects, such that for two objects <i>x,y</i>, <b>Pr</b><sub><i>h</i></sub>εF[<i>h</i>(<i>x</i>) = <i>h</i>(<i>y</i>)] = sim(<i>x,y</i>), where <i>sim</i>(<i>x,y</i>) ε [0,1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure <i>sim</i>(<i>A,B</i>) = \frac{|A &Pgr; B|}{|A &Pgr B|}.(MATH) We show that rounding algorithms for LPs and SDPs used in the context of approximation algorithms can be viewed as locality sensitive hashing schemes for several interesting collections of objects. Based on this insight, we construct new locality sensitive hashing schemes for:<ol><li>A collection of vectors with the distance between → \over <i>u</i> and → \over <i>v</i> measured by Ø(→ \over <i>u</i>, → \over <i>v</i>)/π, where Ø(→ \over <i>u</i>, → \over <i>v</i>) is the angle between → \over <i>u</i>) and → \over <i>v</i>). This yields a sketching scheme for estimating the cosine similarity measure between two vectors, as well as a simple alternative to minwise independent permutations for estimating set similarity.</li><li>A collection of distributions on <i>n</i> points in a metric space, with distance between distributions measured by the Earth Mover Distance (<b>EMD</b>), (a popular distance measure in graphics and vision). Our hash functions map distributions to points in the metric space such that, for distributions <i>P</i> and <i>Q</i>, <b>EMD</b>(<i>P,Q</i>) &xie; <b>E</b><sub>hε\F</sub> [<i>d</i>(<i>h</i>(<i>P</i>),<i>h</i>(<i>Q</i>))] &xie; <i>O</i>(log <i>n</i> log log <i>n</i>). <b>EMD</b>(<i>P, Q</i>).</li></ol>.) <|cite_end|>, etc. To provide insight into how RPQ behaves, we conducted an experiment with ten randomly generated unique vectors of dimension $10$. We generated ten more similar vectors from each of the vectors (by adding some random $\epsilon$ to each dimension).
We then generate signatures for all vectors and compare them with each other to determine how many unique vectors can be found. Since we started with ten unique vectors, the comparison should ideally report close to ten unique vectors.
Figure~\ref{fig-rpq-func} shows the number of unique vectors found by RPQ. It also shows results with another technique, {\em Bloom Filter} <|cite_start|> (Reference: {Space/time trade-offs in hash coding with allowable errors: In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency.
The new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods.
In such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to “catch” the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods.
Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.) <|cite_end|>, <|cite_start|> (Reference: Bulk Disambiguation of Speculative Threads in Multiprocessors: Transactional memory (TM), thread-level speculation (TLS), and checkpointed multiprocessors are three popular architectural techniques based on the execution of multiple, cooperating speculative threads. In these environments, correctly maintaining data dependences across threads requires mechanisms for disambiguating addresses across threads, invalidating stale cache state, and making committed state visible. These mechanisms are both conceptually involved and hard to implement. In this paper, we present bulk, a novel approach to simplify these mechanisms. The idea is to hash-encode a thread's access information in a concise signature, and then support in hardware signature operations that efficiently process sets of addresses. Such operations implement the mechanisms described. Bulk operations are inexact but correct, and provide substantial conceptual and implementation simplicity. We evaluate Bulk in the context of TLS using SPECint2000 codes and TM using multithreaded Java workloads. Despite its simplicity, Bulk has competitive performance with more complex schemes. We also find that signature configuration is a key design parameter) <|cite_end|>. For smaller signatures, both methods declare many dissimilar vectors as similar. However, RPQ is able to detect unique vectors better than Bloom Filters at longer signatures.
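For reference, the uniqueness experiment described above can be reproduced in a few lines of Python; the 32-bit signature length and the perturbation magnitude are example assumptions rather than the exact settings behind Figure~\ref{fig-rpq-func}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, n_bits = 10, 32                         # vector dimension and signature length
R = rng.standard_normal((m, n_bits))

def signature(x):
    return ((x @ R) < 0).astype(np.uint8).tobytes()

# Ten unique base vectors; ten near-duplicates derived from each one.
bases = [rng.standard_normal(m) for _ in range(10)]
vectors = [b + 1e-3 * rng.standard_normal(m) for b in bases for _ in range(10)]

unique_signatures = {signature(v) for v in vectors}
print(len(unique_signatures))              # ideally close to 10
\end{verbatim}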
\begin{figure}[h]
\centering
\begin{subfigure}{0.4\columnwidth}
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{./FIGS/RPQ_Sim.png}
\vspace{-0.5cm}
\caption{RPQ}
\label{case-input-sim}
\end{subfigure}
\begin{subfigure}{0.4\columnwidth}
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{./FIGS/BloomFilter_Sim.png}
\vspace{-0.5cm}
\caption{Bloom Filter}
\label{case-gradient-sim}
\end{subfigure}
\vspace{-0.2cm}
\caption{Unique vectors found by a) RPQ b) Bloom Filter.}
\label{fig-rpq-func}
\vspace{-0.3cm}
\end{figure}
\subsection{DNN Accelerator and Dataflow}
\label{sec-baseline}
A typical DNN accelerator is shown in Figure~\ref{fig-base}. The accelerator has a number of hardware PEs.
Each PE has vertical and horizontal connections with neighboring PEs using on-chip networks. There is a global buffer to hold inputs, weights, and partial-sums. The chip is connected to off-chip memory to receive inputs and store outputs. Each PE contains registers to hold inputs, weights, and partial sums. Each PE also contains multiplier and adder units.
Each PE distributes inputs and weights and generates partial sums based on a dataflow.
\begin{figure}[htpb]
\centering
\vspace{-0.3cm}
\includegraphics[width=0.6\columnwidth]{./FIGS/baseline.pdf}
\caption{Baseline hardware accelerator.}
\label{fig-base}
\vspace{-0.3cm}
\end{figure}
Different dataflows have been proposed in literature <|cite_start|> (Reference: Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators: The authors demonstrate the key role dataflows play in the optimization of energy efficiency for deep neural network (DNN) accelerators. By introducing a systematic approach to analyze the problem and a new dataflow, called Row-Stationary, which is up to 2.5 times more energy efficient than existing dataflows in processing a state-of-the-art DNN, this work provides guidelines for future DNN accelerator designs.) <|cite_end|> <|cite_start|> (Reference: MAERI: enabling flexible dataflow mapping over DNN accelerators via reconfigurable interconnects: Deep neural networks (DNN) have demonstrated highly promising results across computer vision and speech recognition, and are becoming foundational for ubiquitous AI. The computational complexity of these algorithms and a need for high energy-efficiency has led to a surge in research on hardware accelerators. % for this paradigm. To reduce the latency and energy costs of accessing DRAM, most DNN accelerators are spatial in nature, with hundreds of processing elements (PE) operating in parallel and communicating with each other directly. DNNs are evolving at a rapid rate, and it is common to have convolution, recurrent, pooling, and fully-connected layers with varying input and filter sizes in the most recent topologies.They may be dense or sparse. They can also be partitioned in myriad ways (within and across layers) to exploit data reuse (weights and intermediate outputs). All of the above can lead to different dataflow patterns within the accelerator substrate. Unfortunately, most DNN accelerators support only fixed dataflow patterns internally as they perform a careful co-design of the PEs and the network-on-chip (NoC). In fact, the majority of them are only optimized for traffic within a convolutional layer. This makes it challenging to map arbitrary dataflows on the fabric efficiently, and can lead to underutilization of the available compute resources. DNN accelerators need to be programmable to enable mass deployment. For them to be programmable, they need to be configurable internally to support the various dataflow patterns that could be mapped over them. To address this need, we present MAERI, which is a DNN accelerator built with a set of modular and configurable building blocks that can easily support myriad DNN partitions and mappings by appropriately configuring tiny switches. MAERI provides 8-459% better utilization across multiple dataflow mappings over baselines with rigid NoC fabrics.) <|cite_end|>to optimize different aspects of the DNN operations. Examples are Weight-Stationary, Output-Stationary, Input-Stationary, and Row-Stationary.
The dataflow name often reflects which data is kept unchanged in the PE unit throughout the computation.
In Weight-Stationary, each PE statically holds a weight inside its register file. Those operations that use the same weight are mapped to the same PE unit <|cite_start|> (Reference: Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators: The authors demonstrate the key role dataflows play in the optimization of energy efficiency for deep neural network (DNN) accelerators. By introducing a systematic approach to analyze the problem and a new dataflow, called Row-Stationary, which is up to 2.5 times more energy efficient than existing dataflows in processing a state-of-the-art DNN, this work provides guidelines for future DNN accelerator designs.) <|cite_end|>.
In Output-Stationary, each PE instead holds the partial sum of an output element in its register file while the corresponding inputs and weights are streamed through it.
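To make the distinction between these dataflows concrete, the following toy loop nests mark which operand stays resident in a PE's registers; the mapping is purely illustrative and does not model any particular accelerator:
\begin{verbatim}
import numpy as np

I = np.random.rand(4, 3)   # inputs
W = np.random.rand(3, 2)   # weights

# Weight-Stationary: each weight W[k, j] stays pinned in "its" PE while the
# inputs that need it stream by.
O_ws = np.zeros((4, 2))
for k in range(3):
    for j in range(2):
        w = W[k, j]                      # weight held in the PE register
        for i in range(4):
            O_ws[i, j] += I[i, k] * w

# Output-Stationary: each partial sum O[i, j] stays pinned in "its" PE while
# inputs and weights stream through.
O_os = np.zeros((4, 2))
for i in range(4):
    for j in range(2):
        acc = 0.0                        # partial sum held in the PE register
        for k in range(3):
            acc += I[i, k] * W[k, j]
        O_os[i, j] = acc

assert np.allclose(O_ws, O_os) and np.allclose(O_ws, I @ W)
\end{verbatim}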
"<|reference_start|> 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture: <|reference_end|>",
"<|reference_start|> 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): <|reference_end|>",
"<|reference_start|> 51st Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2018, Fukuoka, Japan, October 20-24, 2018: <|reference_end|>",
"<|reference_start|> International Conference on Learning Representations (ICLR): <|reference_end|>"
] | [
4,
13,
17,
53
] | {"<|multi_cite_1_1|>": "arxiv-257793", "<|multi_cite_1_2|>": "arxiv-89467", "<|multi_cite_1_3|>": "ss-989285", "<|multi_cite_2_3|>": "ss-973511", "<|multi_cite_2_4|>": "ss-1113488", "<|multi_cite_2_5|>": "ss-918324", "<|multi_cite_2_9|>": "ss-917398", "<|cite_4|>": "arxiv-65675", "<|cite_5|>": "ss-1358800", "<|multi_cite_6_3|>": "ss-984646", "<|multi_cite_6_4|>": "ss-984646", "<|multi_cite_6_6|>": "ss-700612", "<|multi_cite_6_7|>": "ss-1540070", "<|multi_cite_7_1|>": "ss-984646", "<|multi_cite_7_2|>": "ss-989286", "<|multi_cite_7_3|>": "ss-1058583", "<|multi_cite_7_5|>": "arxiv-111741", "<|multi_cite_7_6|>": "ss-1947594", "<|multi_cite_7_7|>": "ss-984646", "<|multi_cite_8_1|>": "ss-832843", "<|multi_cite_8_2|>": "arxiv-174702", "<|multi_cite_9_1|>": "ss-832115", "<|multi_cite_10_1|>": "arxiv-159601", "<|multi_cite_10_2|>": "ss-1376074", "<|multi_cite_10_3|>": "ss-989287", "<|multi_cite_11_1|>": "ss-739405", "<|multi_cite_11_2|>": "arxiv-147133", "<|multi_cite_11_3|>": "arxiv-139350", "<|multi_cite_12_1|>": "ss-832843", "<|multi_cite_12_2|>": "arxiv-174702", "<|multi_cite_12_3|>": "ss-832843", "<|multi_cite_13_1|>": "ss-1516798", "<|multi_cite_13_2|>": "ss-989288", "<|multi_cite_14_1|>": "ss-989288", "<|multi_cite_14_2|>": "ss-989289", "<|multi_cite_14_3|>": "ss-1947594", "<|multi_cite_14_4|>": "ss-984646", "<|cite_15|>": "ss-1358800", "<|cite_16|>": "ss-989290", "<|cite_18|>": "ss-1058583", "<|cite_20|>": "ss-1516798", "<|cite_23|>": "ss-984646", "<|cite_24|>": "ss-984646", "<|cite_25|>": "ss-989288", "<|cite_27|>": "ss-984646", "<|cite_29|>": "ss-832843", "<|cite_30|>": "ss-984646", "<|cite_31|>": "ss-1058583", "<|cite_32|>": "ss-832843", "<|cite_34|>": "ss-1516798", "<|cite_36|>": "ss-832843", "<|cite_37|>": "ss-739405", "<|cite_38|>": "ss-739405", "<|cite_39|>": "ss-1058583", "<|cite_40|>": "ss-1358800", "<|cite_41|>": "ss-832115", "<|cite_42|>": "ss-1331719", "<|cite_43|>": "ss-733637", "<|cite_44|>": "ss-989291", "<|multi_cite_45_1|>": "ss-989292", "<|multi_cite_45_3|>": "ss-711007", "<|cite_46|>": "ss-989292", "<|cite_47|>": "ss-733997", "<|cite_50|>": "ss-984646", "<|cite_51|>": "ss-989286", "<|cite_52|>": "ss-832843", "<|cite_54|>": "ss-989288", "<|cite_55|>": "ss-1516798", "<|cite_56|>": "ss-989293", "<|cite_57|>": "ss-1947594", "<|cite_58|>": "ss-984646", "<|cite_59|>": "ss-989294", "<|cite_60|>": "ss-682754", "<|cite_61|>": "arxiv-211867", "<|cite_63|>": "arxiv-323966", "<|cite_64|>": "arxiv-311043", "<|cite_65|>": "ss-804797", "<|cite_66|>": "arxiv-208059", "<|cite_67|>": "arxiv-184304", "<|cite_68|>": "arxiv-117736"} |
Title: CFR-RL: Traffic Engineering with Reinforcement Learning in SDN
Abstract: CFR-RL: Traffic Engineering with Reinforcement Learning in SDN: Traditional Traffic Engineering (TE) solutions can achieve the optimal or near-optimal performance by rerouting as many flows as possible. However, they do not usually consider the negative impact, such as packet out of order, when frequently rerouting flows in the network. To mitigate the impact of network disturbance, one promising TE solution is forwarding the majority of traffic flows using Equal-Cost Multi-Path (ECMP) and selectively rerouting a few critical flows using Software-Defined Networking (SDN) to balance link utilization of the network. However, critical flow rerouting is not trivial because the solution space for critical flow selection is enormous. Moreover, it is impossible to design a heuristic algorithm for this problem based on fixed and simple rules, since rule-based heuristics are unable to adapt to the changes of the traffic matrix and network dynamics. In this paper, we propose CFR-RL (Critical Flow Rerouting-Reinforcement Learning), a Reinforcement Learning-based scheme that learns a policy to select critical flows for each given traffic matrix automatically. CFR-RL then reroutes these selected critical flows to balance link utilization of the network by formulating and solving a simple Linear Programming (LP) problem. Extensive evaluations show that CFR-RL achieves near-optimal performance by rerouting only 10%-21.3% of total traffic.
\section{Introduction}
\label{intro}
The emerging Software-Defined Networking (SDN) provides new opportunities to improve network performance <|cite_start|> (Reference: {OpenFlow: enabling innovation in campus networks: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too) <|cite_end|>. In SDN, the control plane can generate routing policies based on its global view of the network and deploy these policies in the network by installing and updating flow entries at the SDN switches.
Traffic Engineering (TE) is one of important network features for SDN <|cite_start|> (Reference: Traffic Engineering in Software Defined Networks: Software Defined Networking is a new networking paradigm that separates the network control plane from the packet forwarding plane and provides applications with an abstracted centralized view of the distributed network state. A logically centralized controller that has a global network view is responsible for all the control decisions and it communicates with the network-wide distributed forwarding elements via standardized interfaces. Google recently announced [5] that it is using a Software Defined Network (SDN) to interconnect its data centers due to the ease, efficiency and flexibility in performing traffic engineering functions. It expects the SDN architecture to result in better network capacity utilization and improved delay and loss performance. The contribution of this paper is on the effective use of SDNs for traffic engineering especially when SDNs are incrementally introduced into an existing network. In particular, we show how to leverage the centralized controller to get significant improvements in network utilization as well as to reduce packet losses and delays. We show that these improvements are possible even in cases where there is only a partial deployment of SDN capability in a network. We formulate the SDN controller's optimization problem for traffic engineering with partial deployment and develop fast Fully Polynomial Time Approximation Schemes (FPTAS) for solving these problems. We show, by both analysis and ns-2 simulations, the performance gains that are achievable using these algorithms even with an incrementally deployed SDN.) <|cite_end|> <|cite_start|> (Reference: {Traffic Engineering in SDN/OSPF Hybrid Network: Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.) <|cite_end|> <|cite_start|> (Reference: Dynamic hybrid routing: Achieve load balancing for changing traffic demands: Classical TE methods calculate the optimal routing based on a known traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, various methods were proposed in recent years, such as online dynamic TE and robust static routing TE. 
However, online dynamic TE requires additional overhead on routers for information dissemination and suffers from the transient disruptions during routing protocol convergence, while using one robust static routing to accommodate a wide range of traffic scenarios is unable to ensure near optimality of performance for each individual traffic scenario. This paper presents an approach called dynamic hybrid routing (DHR) to achieve load balancing for a wide range of traffic scenarios. Our basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policy to react to traffic fluctuations. Each routing policy composes of a common basic destination-based routing and a few complementary explicit routing forwarding entries for a small set of selected ingress/egress node pairs. We design a method to find the near-optimal dynamic hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of DHR. We show that DHR achieves nearoptimal load balancing and thus obtain about at least 96% throughput compared to optimal routing for each individual traffic scenario with very low overhead.) <|cite_end|>, and is usually implemented in the control plane of SDN. The goal of TE is to help Internet Service Providers (ISPs) optimize network performance and resource utilization by configuring the routing across their backbone networks to control traffic distribution <|cite_start|> (Reference: Load balancing in IP networks using generalized destination-based multipath routing: Intradomain traffic engineering (TE) has become an indispensable tool for Internet service providers (ISPs) to optimize network performance and utilize network resources efficiently. Various explicit routing TE methods were recently proposed and have been able to achieve high network performance. However, explicit routing has high complexity and requires large ternary content addressable memories (TCAMs) in the routers. Moreover, it is costly to deploy explicit routing in IP networks. In this paper, we present an approach, called generalized destination-based multipath routing (GDMR), to achieve the same high performance as explicit routing. The main contribution of this paper is that we prove that an arbitrary explicit routing can be converted to a loop-free destination-based routing without any performance penalty for a given traffic matrix. We present a systematic approach including a heuristic algorithm to realize GDMR. Extensive evaluation demonstrates the effectiveness and robustness of GDMR.) <|cite_end|> <|cite_start|> (Reference: Joint Switch Upgrade and Controller Deployment in Hybrid Software-Defined Networks: To improve traffic management ability, Internet Service Providers (ISPs) are gradually upgrading legacy network devices to programmable devices that support Software-Defined Networking (SDN). The coexistence of legacy and SDN devices gives rise to a hybrid SDN. Existing hybrid SDNs do not consider the potential performance issues introduced by a centralized SDN controller: flow requests processed by a highly loaded controller may experience long tail processing delay; inappropriate multi-controller deployment could increase the propagation delay of flow requests. In this paper, we propose to jointly consider the deployment of SDN switches and their controllers for hybrid SDNs. 
We formulate the joint problem as an optimization problem that maximizes the number of flows that can be controlled and managed by the SDN and minimizes the propagation delay of flow requests between SDN controllers and switches under a given upgrade budget constraint. We show this problem is NP-hard. To efficiently solve the problem, we propose some techniques (e.g., strengthening the constraints and adding additional valid inequalities) to accelerate the global optimization solver for solving the problem for small networks and an efficient heuristic algorithm for solving it for large networks. The simulation results from real network topologies illustrate the effectiveness of the proposed techniques and show that our proposed heuristic algorithm uses a small number of controllers to manage a high amount of flows with good performance.) <|cite_end|>. Due to dynamic load fluctuation among the nodes, traditional TE <|cite_start|> (Reference: Explicit routing algorithms for internet traffic engineering: This paper considers explicit routing algorithms for Internet traffic engineering. Explicit routing is seen to be a much more capable solution for improving network utilization than the current destination-based routing and the multi-protocol label switching (MPLS) standard has made explicit routes implementable. ISP can now have fine granularity control over the traffic distribution across their backbones by carefully overlaying explicit routes over the physical network. The basic traffic engineering problem is how to set up explicit routes to meet bandwidth demands between the edge nodes of the network and at the same time to optimize the network performance. We model the traffic engineering problem as an optimization problem with the objective of minimizing congestion and maximizing potential for traffic growth. We present two mathematical formulations, one linear programming for the case of allowing demand bifurcation and one integer programming for the case of disallowing demand bifurcation. While the bifurcation case can be solved to optimality, we show that the non-bifurcation case is NP-hard. Four heuristic schemes are proposed for the non-bifurcation case, with the most sophisticated one being based on re-routing of split demands in the optimal solution of the bifurcation case. The performance of these heuristic schemes are tested in a large backbone topology. Our results show that shortest-path and minimum hop algorithms, although widely used in current routing protocols, perform poorly, white the re-routing approach performs best.) <|cite_end|> <|cite_start|> (Reference: Traffic engineering with MPLS: From the Publisher:
Optimize network bandwidth with Traffic Engineering and MPLS
Hard-to-find information on how to use MPLS traffic engineering to optimize network bandwidth, save on network cost, and improve customer satisfaction. Understand the theoretical underpinnings of the various protocols that comprise traffic engineering. Learn basic and advanced configuration of traffic engineering and related services like QoS and ATM interaction. Suggested network designs, configuration examples, and an end-to-end case study provide readers with practical, working solutions readers can implement on their own networks.
As corporations seek to reduce costs, improve efficiencies, gain market share and profit they increasingly are looking to their own information technology systems as a means to this end. Traffic engineering allows engineers to maximize network resources. It resolves the issue of having large amount of traffic on certain portions of the network while other portions go under-utilized. Traffic Engineering with MPLS provides readers with information on how to use MPLS traffic engineering and associated features to optimize network bandwidth. The book covers forwarding fundamentals, traffic engineering theory, protocol descriptions, deployment guidelines, configuration, show commands, and debugs. This book is a one-stop reference for understanding MPLS traffic engineering and implementing it on the network. A comprehensive case study is used to show a complete MPLS traffic engineering deployment.) <|cite_end|> <|cite_start|> (Reference: Optimizing OSPF/IS-IS Weights in a Changing World: A system of techniques is presented for optimizing open shortest path first (OSPF) or intermediate system-intermediate system (IS-IS) weights for intradomain routing in a changing world, the goal being to avoid overloaded links. We address predicted periodic changes in traffic as well as problems arising from link failures and emerging hot spots.) <|cite_end|> <|cite_start|> (Reference: Optimization of internet protocol network design and routing: We consider network design and routing for Internet Protocol (IP) traffic. The design problem concerns capacity dimensioning of communication links, where the design cost consists of fixed charges and linear capacity expansion costs. The optimization problem also concerns determining the amount of traffic demand to be carried by the network and the metric used by a shortest path routing protocol. We present a novel linear mixed‐integer mathematical formulation and two heuristic solution procedures. The first heuristic uses mixed‐integer programming to generate a sequence of routing solutions. The second solution approach is a simulated annealing meta heuristic. Computational experiments for synthesized and real‐life networks show that high‐quality solutions can be obtained by both approaches. © 2003 Wiley Periodicals, Inc.) <|cite_end|> <|cite_start|> (Reference: Optimal link weights for ip-based networks supporting hose-model vpns: From traffic engineering point of view, hose-model VPNs are much easier to use for customers than pipe-model VPNs. In this paper we explore the optimal weight setting to support hose-model VPN traffic in an IP-based hop-by-hop routing network. We try to answer the following questions: (1) What is the maximum amount of hose-model VPN traffic with bandwidth guarantees that can be admitted to an IP-based hop-by-hop routing network (as opposed to an MPLS-based network), and (2) what is the optimal link weight setting that can achieve that? We first present a mixed-integer programming formulation to compute the optimal link weights that can maximize the ingress and egress VPN traffic admissible to a hop-by-hop routing network. We also present a heuristic algorithm for solving the link weight searching problem for large networks. We show simulation results to demonstrate the effectiveness of the search algorithm.) <|cite_end|> <|cite_start|> (Reference: Optimizing network performance using weighted multipath routing: Equal-Cost Multipath (ECMP) routing has been widely adopted to perform load balancing. 
With ECMP, a router can maintain multiple next hops for a destination IP prefix. The most common method used by such routers is to split traffic with per-flow basis evenly among those next hops. This approach, although simple, cannot achieve optimal load balancing. In this paper we study the optimal configuration of weighted ECMP, where traffic splitting among the available paths is based on a set of pre-determined ratios. The contribution of this paper is two-fold. First, we develop a model to obtain the split ratios such that the overall network end-to-end delay is optimized. This is important because better delay performance is a result of better bandwidth allocation and has a direct impact on application, while most existing work tries to minimize the traffic load on the most utilized link. Second, we prove that the problem can be first solved by using a simple flow-based routing model and then converting the results to apply to IP networks, where destination-based forwarding is used. We present a heuristic algorithm to find the near-optimal weight configurations and demonstrate the effectiveness of the algorithm using computer simulations.) <|cite_end|> reroutes many flows periodically to balance the load on each link to minimize network congestion probability, where a flow is defined as a source-destination pair. One usually formulates the flow routing problem with a particular performance metric as a specific objective function for optimization. For a given traffic matrix, one often wants to route all the flows in such a way that the maximum link utilization in the network is minimized.
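For concreteness, this load-balancing objective is commonly written as the following linear program; the notation ($f_l^{sd}$, $D_{sd}$, $c_l$, $\theta$) is introduced here only for illustration and is not necessarily the exact formulation used later in this paper:
\begin{equation*}
\begin{aligned}
\min_{\{f_l^{sd}\},\, \theta} \quad & \theta \\
\text{s.t.} \quad & \sum_{l \in \mathrm{out}(i)} f_l^{sd} - \sum_{l \in \mathrm{in}(i)} f_l^{sd} =
\begin{cases}
D_{sd}, & i = s,\\
-D_{sd}, & i = d,\\
0, & \text{otherwise,}
\end{cases}
\quad \forall i,\ \forall (s,d),\\
& \sum_{(s,d)} f_l^{sd} \le \theta\, c_l \quad \forall l, \qquad f_l^{sd} \ge 0,
\end{aligned}
\end{equation*}
where $f_l^{sd}$ is the amount of traffic of flow $(s,d)$ routed on link $l$, $D_{sd}$ is the flow's demand, $c_l$ is the capacity of link $l$, and the optimal $\theta$ is the minimum achievable maximum link utilization.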
Although traditional TE solutions can achieve the optimal or near-optimal performance by rerouting as many flows as possible, they do not consider the negative impact, such as packet reordering, of rerouting flows in the network. To reach the optimal performance, TE solutions might reroute many traffic flows just to slightly reduce the link utilization on the most congested link, leading to significant network disturbance and service disruption. For example, a flow between two nodes in a backbone network is an aggregate of many micro-flows (e.g., five-tuple-based TCP flows) of different applications. Changing the path of a flow could temporarily affect many TCP flows' normal operation. Packet loss or out-of-order delivery may cause duplicate ACK transmissions, triggering the sender to react and reduce its congestion window size and hence decrease its sending rate, eventually increasing the flow's completion time and degrading the flow's Quality of Service (QoS). In addition, rerouting all flows in the network could impose a high burden on the SDN controller to calculate and deploy new flow paths <|cite_start|> (Reference: Dynamic hybrid routing: Achieve load balancing for changing traffic demands: Classical TE methods calculate the optimal routing based on a known traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, various methods were proposed in recent years, such as online dynamic TE and robust static routing TE. However, online dynamic TE requires additional overhead on routers for information dissemination and suffers from the transient disruptions during routing protocol convergence, while using one robust static routing to accommodate a wide range of traffic scenarios is unable to ensure near optimality of performance for each individual traffic scenario. This paper presents an approach called dynamic hybrid routing (DHR) to achieve load balancing for a wide range of traffic scenarios. Our basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policy to react to traffic fluctuations. Each routing policy composes of a common basic destination-based routing and a few complementary explicit routing forwarding entries for a small set of selected ingress/egress node pairs. We design a method to find the near-optimal dynamic hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of DHR. We show that DHR achieves nearoptimal load balancing and thus obtain about at least 96% throughput compared to optimal routing for each individual traffic scenario with very low overhead.) <|cite_end|>. Because rerouting flows to reduce congestion in backbone networks could adversely affect the quality of users' experience, network operators have no desire to deploy these traditional TE solutions in their networks unless reducing network disturbance is taken into consideration when designing the TE solutions.
To mitigate the impact of network disturbance, one promising TE solution is forwarding majority of traffic flows using Equal-Cost Multi-Path (ECMP) and selectively rerouting a few \textit{critical flows} using SDN to balance link utilization of the network, where a critical flow is defined as a flow with a dominant impact to network performance (e.g., a flow on the most congested link) <|cite_start|> (Reference: Dynamic hybrid routing: Achieve load balancing for changing traffic demands: Classical TE methods calculate the optimal routing based on a known traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, various methods were proposed in recent years, such as online dynamic TE and robust static routing TE. However, online dynamic TE requires additional overhead on routers for information dissemination and suffers from the transient disruptions during routing protocol convergence, while using one robust static routing to accommodate a wide range of traffic scenarios is unable to ensure near optimality of performance for each individual traffic scenario. This paper presents an approach called dynamic hybrid routing (DHR) to achieve load balancing for a wide range of traffic scenarios. Our basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policy to react to traffic fluctuations. Each routing policy composes of a common basic destination-based routing and a few complementary explicit routing forwarding entries for a small set of selected ingress/egress node pairs. We design a method to find the near-optimal dynamic hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of DHR. We show that DHR achieves nearoptimal load balancing and thus obtain about at least 96% throughput compared to optimal routing for each individual traffic scenario with very low overhead.) <|cite_end|> <|cite_start|> (Reference: Load balancing for multiple traffic matrices using sdn hybrid routing: Classical traffic engineering (TE) methods calculate the optimal routing based on a single traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, it is of interest to find a good routing configuration to accommodate multiple possible traffic scenarios. There are two major approaches to achieve load balancing for multiple traffic matrices: destination-based routing and explicit routing. It has been shown that explicit routing performs better than destination-based routing for multiple traffic matrices. However, explicit routing has high complexity and requires large Ternary Content Addressable Memory (TCAM) in the routers. Thus, it is power hungry and unscalable. This paper presents an approach called hybrid routing to achieve load balancing for multiple traffic matrices with low complexity and good scalability. Our basic idea is to complement destination-based routing with a small number of explicit routing forwarding entries to take advantage of both two routing approaches. Hybrid routing greatly reduces the number of forwarding entries compared with pure explicit routing. This has great value for practice in that the scheme requires very small TCAM to implement. Hybrid routing is very suitable for implementation using SDN. A heuristic algorithm is developed to obtain the near-optimal hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of hybrid routing. 
The results show that hybrid routing achieves near-optimal load balancing compared with pure explicit routing. In particular, hybrid routing saves at least 84.6% TCAM resources in all practical networks used in our evaluation.) <|cite_end|>. Existing works show that critical flows exist in a given traffic matrix <|cite_start|> (Reference: Dynamic hybrid routing: Achieve load balancing for changing traffic demands: Classical TE methods calculate the optimal routing based on a known traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, various methods were proposed in recent years, such as online dynamic TE and robust static routing TE. However, online dynamic TE requires additional overhead on routers for information dissemination and suffers from the transient disruptions during routing protocol convergence, while using one robust static routing to accommodate a wide range of traffic scenarios is unable to ensure near optimality of performance for each individual traffic scenario. This paper presents an approach called dynamic hybrid routing (DHR) to achieve load balancing for a wide range of traffic scenarios. Our basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policy to react to traffic fluctuations. Each routing policy composes of a common basic destination-based routing and a few complementary explicit routing forwarding entries for a small set of selected ingress/egress node pairs. We design a method to find the near-optimal dynamic hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of DHR. We show that DHR achieves nearoptimal load balancing and thus obtain about at least 96% throughput compared to optimal routing for each individual traffic scenario with very low overhead.) <|cite_end|>. ECMP reduces the congestion probability by equally splitting traffic on equal-cost paths while critical flow rerouting aims to achieve further performance improvement with low network disturbance.
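As a rough illustration of this notion (our own toy example, not taken from the paper), the following Python snippet computes the maximum link utilization of a small three-link network and lists the flows that traverse the most congested link; such flows are the natural candidates for being treated as critical. All names and numbers are hypothetical.
\begin{verbatim}
# Toy illustration: compute maximum link utilization and list the flows
# crossing the most congested link; these are natural "critical" candidates.
link_capacity = {"A-B": 10.0, "B-C": 10.0, "A-C": 10.0}

# flow -> (demand, links on its current path)
flows = {
    ("A", "C"): (6.0, ["A-B", "B-C"]),   # currently routed via B
    ("A", "B"): (3.0, ["A-B"]),
    ("B", "C"): (2.0, ["B-C"]),
}

load = {link: 0.0 for link in link_capacity}
for demand, path in flows.values():
    for link in path:
        load[link] += demand

util = {link: load[link] / link_capacity[link] for link in link_capacity}
worst = max(util, key=util.get)
candidates = [f for f, (_, path) in flows.items() if worst in path]

print(f"max link utilization = {util[worst]:.2f} on link {worst}")
print("flows on the congested link:", candidates)
# Rerouting ("A", "C") onto the direct link A-C would cut the maximum
# utilization from 0.90 to 0.60 while touching only a single flow.
\end{verbatim}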
The critical flow rerouting problem can be decoupled into two sub-problems: (1) identifying the critical flows and (2) rerouting them to achieve good performance. Although sub-problem (2) is relatively easy to solve by formulating it as a Linear Programming (LP) optimization problem, solving sub-problem (1) is not trivial because the solution space is huge. For example, selecting 10 critical flows among 100 flows yields $C_{100}^{10} \approx 17$ trillion possible combinations. Given that the traffic matrix varies on a timescale of minutes, an efficient solution must quickly and effectively identify the critical flows for each traffic matrix. Unfortunately, it is impossible to design a heuristic algorithm for this algorithmically hard problem using fixed and simple rules, because rule-based heuristics cannot adapt to changes in the traffic matrix and network dynamics and thus cannot guarantee their performance when their design assumptions are violated, as later shown in Section \ref{sec:evaluation}.
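The size of the selection space quoted above can be checked directly; for instance, in Python:
\begin{verbatim}
from math import comb

# Choosing k critical flows out of n candidate flows gives "n choose k"
# possibilities.
n, k = 100, 10
print(comb(n, k))   # 17310309456440, i.e. roughly 17 trillion
\end{verbatim}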
In this paper, we propose CFR-RL (Critical Flow Rerouting-Reinforcement Learning), a Reinforcement Learning-based scheme that performs critical flow selection followed by rerouting with linear programming. CFR-RL learns a policy to select critical flows purely through observations, without any domain-specific rule-based heuristic. It starts from scratch without any prior knowledge, and gradually learns to make better selections through reinforcement, in the form of reward signals that reflect the network performance achieved by past selections. By continuing to observe the actual performance of past selections, CFR-RL optimizes its selection policy for various traffic matrices over time. Once training is done, CFR-RL efficiently and effectively selects a small set of critical flows for each given traffic matrix and reroutes them to balance link utilization of the network by formulating and solving a simple linear programming optimization problem.
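The select-then-reroute idea can be sketched schematically as follows. The Python snippet below is only an illustration under our own assumptions: a toy stochastic policy samples K flows and is updated with a REINFORCE-style rule, while the rerouting LP is replaced by a stub. It is not the actual CFR-RL design, whose policy network, reward definition and rerouting LP are specified in Sections IV and V.
\begin{verbatim}
import numpy as np

# Schematic sketch only; all names, constants and the stubbed LP are assumptions.
rng = np.random.default_rng(0)
N_FLOWS, K, LR = 20, 3, 0.05
theta = np.zeros(N_FLOWS)                        # toy policy parameters
baseline = 0.0                                   # running reward baseline

def reroute_and_evaluate(selected, tm):
    """Stub standing in for the rerouting LP: pretend that rerouting
    larger flows yields a lower maximum link utilization."""
    relief = tm[selected].sum() / tm.sum()
    return 1.0 - 0.5 * relief                    # fake max link utilization

for step in range(200):
    tm = rng.exponential(1.0, size=N_FLOWS)      # synthetic per-flow demands
    logits = theta + np.log(tm + 1e-9)           # crude conditioning on traffic
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    selected = rng.choice(N_FLOWS, size=K, replace=False, p=probs)

    reward = -reroute_and_evaluate(selected, tm)  # lower utilization is better
    advantage = reward - baseline
    baseline += 0.1 * (reward - baseline)

    # Approximate policy-gradient update on the sampled flows.
    grad = -K * probs
    grad[selected] += 1.0
    theta += LR * advantage * grad
\end{verbatim}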
The main contributions of this paper are summarized as follows:
\begin{enumerate}
\item We take the network disturbance caused by flow rerouting into account in our TE design and propose an effective scheme that not only minimizes the maximum link utilization but also reroutes only a small number of flows to reduce network disturbance.
\item We customize an RL approach to learn the critical flow selection policy and utilize LP as a reward function to generate reward signals. This combined RL$+$LP approach turns out to be surprisingly powerful.
\item We evaluate and compare CFR-RL with other rule-based heuristic schemes by conducting extensive experiments on different topologies with both real and synthesized traffic. CFR-RL not only outperforms rule-based heuristic schemes by up to 12.2\%, but also reroutes 11.4\%-14.7\% less traffic on average. Overall, CFR-RL is able to achieve near-optimal performance by rerouting only 10\%-21.3\% of total traffic. In addition, the evaluation results show that CFR-RL is able to generalize to unseen traffic matrices.
\end{enumerate}
The remainder of this paper is organized as follows. Section II reviews related work. Section III presents the system design. Section IV discusses how to train the critical flow selection policy using an RL-based approach. Section V describes how to reroute the critical flows. Section VI evaluates the effectiveness of our scheme. Section VII concludes the paper and discusses future work.
Related Work
\label{relatedworks}
\subsection{Traditional TE Solutions}
In Multiprotocol Label Switching (MPLS) networks, a routing problem has been formulated as an optimization problem where explicit routes are obtained for each source-destination pair to distribute traffic flows <|cite_start|> (Reference: Explicit routing algorithms for internet traffic engineering: This paper considers explicit routing algorithms for Internet traffic engineering. Explicit routing is seen to be a much more capable solution for improving network utilization than the current destination-based routing and the multi-protocol label switching (MPLS) standard has made explicit routes implementable. ISP can now have fine granularity control over the traffic distribution across their backbones by carefully overlaying explicit routes over the physical network. The basic traffic engineering problem is how to set up explicit routes to meet bandwidth demands between the edge nodes of the network and at the same time to optimize the network performance. We model the traffic engineering problem as an optimization problem with the objective of minimizing congestion and maximizing potential for traffic growth. We present two mathematical formulations, one linear programming for the case of allowing demand bifurcation and one integer programming for the case of disallowing demand bifurcation. While the bifurcation case can be solved to optimality, we show that the non-bifurcation case is NP-hard. Four heuristic schemes are proposed for the non-bifurcation case, with the most sophisticated one being based on re-routing of split demands in the optimal solution of the bifurcation case. The performance of these heuristic schemes are tested in a large backbone topology. Our results show that shortest-path and minimum hop algorithms, although widely used in current routing protocols, perform poorly, white the re-routing approach performs best.) <|cite_end|> <|cite_start|> (Reference: Traffic engineering with MPLS: From the Publisher:
Optimize network bandwidth with Traffic Engineering and MPLS
Hard to find information on how to use MPLS traffic engineering to optimize network bandwidth, save on network cost, and improve customer satisfactionUnderstand the theoretical underpinnings of the various protocols that comprise traffic engineeringLearn basic and advanced configuration of traffic engineering and related services like QoS and ATM interaction Suggested network designs, configuration examples, and an end-to-end case study provide readers with practical, working solutions readers can implement on their own networks
As corporations seek to reduce costs, improve efficiencies, gain market share and profit they increasingly are looking to their own information technology systems as a means to this end. Traffic engineering allows engineers to maximize network resources. It resolves the issue of having large amount of traffic on certain portions of the network while other portions go under-utilized. Traffic Engineering with MPLS provides readers with information on how to use MPLS traffic engineering and associated features to optimize network bandwidth. The book covers forwarding fundamentals, traffic engineering theory, protocol descriptions, deployment guidelines, configuration, show commands, and debugs. This book is a one-stop reference for understanding MPLS traffic engineering and implementing it on the network. A comprehensive case study is used to show a complete MPLS traffic engineering deployment.) <|cite_end|>. Using Open Shortest Path First (OSPF) and ECMP protocols, <|cite_start|> (Reference: Optimizing OSPF/IS-IS Weights in a Changing World: A system of techniques is presented for optimizing open shortest path first (OSPF) or intermediate system-intermediate system (IS-IS) weights for intradomain routing in a changing world, the goal being to avoid overloaded links. We address predicted periodic changes in traffic as well as problems arising from link failures and emerging hot spots.) <|cite_end|> <|cite_start|> (Reference: Optimization of internet protocol network design and routing: We consider network design and routing for Internet Protocol (IP) traffic. The design problem concerns capacity dimensioning of communication links, where the design cost consists of fixed charges and linear capacity expansion costs. The optimization problem also concerns determining the amount of traffic demand to be carried by the network and the metric used by a shortest path routing protocol. We present a novel linear mixed‐integer mathematical formulation and two heuristic solution procedures. The first heuristic uses mixed‐integer programming to generate a sequence of routing solutions. The second solution approach is a simulated annealing meta heuristic. Computational experiments for synthesized and real‐life networks show that high‐quality solutions can be obtained by both approaches. © 2003 Wiley Periodicals, Inc.) <|cite_end|> <|cite_start|> (Reference: Optimal link weights for ip-based networks supporting hose-model vpns: From traffic engineering point of view, hose-model VPNs are much easier to use for customers than pipe-model VPNs. In this paper we explore the optimal weight setting to support hose-model VPN traffic in an IP-based hop-by-hop routing network. We try to answer the following questions: (1) What is the maximum amount of hose-model VPN traffic with bandwidth guarantees that can be admitted to an IP-based hop-by-hop routing network (as opposed to an MPLS-based network), and (2) what is the optimal link weight setting that can achieve that? We first present a mixed-integer programming formulation to compute the optimal link weights that can maximize the ingress and egress VPN traffic admissible to a hop-by-hop routing network. We also present a heuristic algorithm for solving the link weight searching problem for large networks. We show simulation results to demonstrate the effectiveness of the search algorithm.) <|cite_end|> attempt to balance link utilization as even as possible by carefully tuning the link costs to adjust path selection in ECMP. 
OSPF-OMP (OMP, Optimized Multipath) <|cite_start|> (Reference: OSPF Optimized Multipath (OSPF-OMP): ) <|cite_end|>, a variation of OSPF, attempts to dynamically determine the optimal allocation of traffic among multiple equal-cost paths based on the exchange of special traffic-load control messages. Weighted ECMP \cite {zhang2012optimizing} extends ECMP to allow weighted traffic splitting at each node and achieves significant performance improvement over ECMP. Two-phase routing optimizes routing performance by selecting a set of intermediate nodes and tuning the traffic split ratios to the nodes <|cite_start|> (Reference: Oblivious routing of highly variable traffic in service overlays and IP backbones: The emergence of new applications on the Internet like voice-over-IP, peer-to-peer, and video-on-demand has created highly dynamic and changing traffic patterns. In order to route such traffic with quality-of-service (QoS) guarantees without requiring detection of traffic changes in real-time or reconfiguring the network in response to it, a routing and bandwidth allocation scheme has been recently proposed that allows preconfiguration of the network such that all traffic patterns permissible within the network's natural ingress-egress capacity constraints can be handled in a capacity efficient manner. The scheme routes traffic in two phases. In the first phase, incoming traffic is sent from the source to a set of intermediate nodes and then, in the second phase, from the intermediate nodes to the final destination. The traffic in the first phase is distributed to the intermediate nodes in predetermined proportions that depend on the intermediate nodes. In this paper, we develop linear programming formulations and a fast combinatorial algorithm for routing under the scheme so as to maximize throughput (or, minimize maximum link utilization). We compare the throughput performance of the scheme with that of the optimal scheme among the class of all schemes that are allowed to even make the routing dependent on the traffic matrix. For our evaluations, we use actual Internet Service Provider topologies collected for the Rocketfuel project. We also bring out the versatility of the scheme in not only handling widely fluctuating traffic but also accommodating applicability to several widely differing networking scenarios, including i) economical Virtual Private Networks (VPNs); ii) supporting indirection in specialized service overlay models like Internet Indirection Infrastructure (i3); iii) adding QoS guarantees to services that require routing through a network-based middlebox; and iv) reducing IP layer transit traffic and handling extreme traffic variability in IP-over-optical networks without dynamic reconfiguration of the optical layer. The two desirable properties of supporting indirection in specialized service overlay models and static optical layer provisioning in IP-over-optical networks are not present in other approaches for routing variable traffic, such as direct source-destination routing along fixed paths.) <|cite_end|> <|cite_start|> (Reference: Two phase load balanced routing using ospf: The Internet traffic is growing, and its nature changes because of new applications. Multimedia applications require bandwidth reservations that were not needed initially when the file transfers dominated the Internet. P2P applications are making traffic patterns impossible to predict, and the traffic loads generated at nodes need to be routed regardless of the traffic pattern. 
When the guaranteed node traffic loads are known, bandwidth reservations can be made simple as will be explained in the paper. The shortest path routing (SPR) protocols used on the Internet today do not maximize the guaranteed node traffic loads, and do not provide scalable and fast bandwidth reservations. Load balancing can improve the network throughput for arbitrary traffic pattern. In this paper we analyze and implement a routing protocol that is based on load balancing and a commonly used shortest path routing protocol, and is, consequently, termed as LB-SPR. LB-SPR is optimized for an arbitrary traffic pattern, i.e. it does not assume a particular traffic matrix. Optimization assumes only the weights assigned to the network nodes according to their estimated demands. It will be shown that the optimized routing achieves the throughputs which are significantly higher than those provided by the currently used SPR protocols, such as OSPF or RIP. Importantly, LB-SPR calculates the guaranteed traffic loads and so allows fast autonomic bandwidth reservations which are the key for the successful support of triple-play applications, including video and audio applications that require high QoS. An actual modification of the TCP/IP stack that includes LBSPR is also described. Using the signaling mechanisms of the OSPF protocol, the information needed to perform the routing optimization is automatically distributed among the network nodes whenever the network topology changes. The LB-SPR implementation is validated on a sample network using a popular virtualization tool - Xen.) <|cite_end|>. In the first phase, each source sends traffic to the intermediate nodes based on predetermined split ratios, and in the second phase, the intermediate nodes then deliver the traffic to the final destinations. This approach requires IP tunnels, optical-layer circuits, or label switched paths in each phase.
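To illustrate the splitting step of two-phase routing (our own example, with hypothetical split ratios), a source distributes its demand across intermediate nodes in predetermined proportions, and each intermediate node forwards what it received to the destination:
\begin{verbatim}
# Toy illustration of two-phase routing with assumed split ratios.
alpha = {"n1": 0.5, "n2": 0.3, "n3": 0.2}   # predetermined split ratios (sum to 1)
demand_s_t = 8.0                            # demand from source s to destination t

phase1 = {m: ratio * demand_s_t for m, ratio in alpha.items()}   # s -> intermediate m
phase2 = {(m, "t"): v for m, v in phase1.items()}                # intermediate m -> t
print(phase1)   # {'n1': 4.0, 'n2': 2.4, 'n3': 1.6}
\end{verbatim}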
\subsection{SDN-Based TE Solutions}
Thanks to the flexible routing policy from the emerging SDN, dynamic hybrid routing <|cite_start|> (Reference: Dynamic hybrid routing: Achieve load balancing for changing traffic demands: Classical TE methods calculate the optimal routing based on a known traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, various methods were proposed in recent years, such as online dynamic TE and robust static routing TE. However, online dynamic TE requires additional overhead on routers for information dissemination and suffers from the transient disruptions during routing protocol convergence, while using one robust static routing to accommodate a wide range of traffic scenarios is unable to ensure near optimality of performance for each individual traffic scenario. This paper presents an approach called dynamic hybrid routing (DHR) to achieve load balancing for a wide range of traffic scenarios. Our basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policy to react to traffic fluctuations. Each routing policy composes of a common basic destination-based routing and a few complementary explicit routing forwarding entries for a small set of selected ingress/egress node pairs. We design a method to find the near-optimal dynamic hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of DHR. We show that DHR achieves nearoptimal load balancing and thus obtain about at least 96% throughput compared to optimal routing for each individual traffic scenario with very low overhead.) <|cite_end|> achieves load balancing for a wide range of traffic scenarios by dynamically rebalancing traffic to react to traffic fluctuations with a preconfigured routing policy. Agarwal et al. <|cite_start|> (Reference: Traffic Engineering in Software Defined Networks: Software Defined Networking is a new networking paradigm that separates the network control plane from the packet forwarding plane and provides applications with an abstracted centralized view of the distributed network state. A logically centralized controller that has a global network view is responsible for all the control decisions and it communicates with the network-wide distributed forwarding elements via standardized interfaces. Google recently announced [5] that it is using a Software Defined Network (SDN) to interconnect its data centers due to the ease, efficiency and flexibility in performing traffic engineering functions. It expects the SDN architecture to result in better network capacity utilization and improved delay and loss performance. The contribution of this paper is on the effective use of SDNs for traffic engineering especially when SDNs are incrementally introduced into an existing network. In particular, we show how to leverage the centralized controller to get significant improvements in network utilization as well as to reduce packet losses and delays. We show that these improvements are possible even in cases where there is only a partial deployment of SDN capability in a network. We formulate the SDN controller's optimization problem for traffic engineering with partial deployment and develop fast Fully Polynomial Time Approximation Schemes (FPTAS) for solving these problems. We show, by both analysis and ns-2 simulations, the performance gains that are achievable using these algorithms even with an incrementally deployed SDN.) 
<|cite_end|> consider a network with partially deployed SDN switches. They improve network utilization and reduce packet loss by strategically placing the controller and SDN switches. Guo et al. <|cite_start|> (Reference: {Traffic Engineering in SDN/OSPF Hybrid Network: Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.) <|cite_end|> propose a novel algorithm named SOTE to minimize the maximum link utilization in an SDN/OSPF hybrid network.
\subsection{Machine Learning-Based TE Solutions}
Machine learning has been used to improve the performance of backbone networks and data center networks. For backbone networks, Geyer et al. <|cite_start|> (Reference: Learning and Generating Distributed Routing Protocols Using Graph-Based Deep Learning: Automated network control and management has been a long standing target of network protocols. We address in this paper the question of automated protocol design, where distributed networked nodes have to cooperate to achieve a common goal without a priori knowledge on which information to exchange or the network topology. While reinforcement learning has often been proposed for this task, we propose here to apply recent methods from semi-supervised deep neural networks which are focused on graphs. Our main contribution is an approach for applying graph-based deep learning on distributed routing protocols via a novel neural network architecture named Graph-Query Neural Network. We apply our approach to the tasks of shortest path and max-min routing. We evaluate the learned protocols in cold-start and also in case of topology changes. Numerical results show that our approach is able to automatically develop efficient routing protocols for those two use-cases with accuracies larger than 95%. We also show that specific properties of network protocols, such as resilience to packet loss, can be explicitly included in the learned protocol.) <|cite_end|> design an automatic network protocol using semi-supervised deep learning. Sun et al. <|cite_start|> (Reference: Sinet: Enabling scalable network routing with deep reinforcement learning on partial nodes: In this paper, we propose SINET, a scalable and intelligent network control framework for routing optimization. SINET uses the idea of partial control to collect network information from critical nodes and uses Deep Reinforcement Learning (DRL) to dynamically optimizes routing policies based on the collected network information. Simulation results show that SINET can reduce the average flow completion time and exhibit better robustness against minor topology changes, compared to existing DRL-based schemes.) <|cite_end|> selectively control a set of nodes and use a RL-based policy to dynamically change the routing decision of flows traversing the selected nodes. To minimize signaling delay in large SDNs, Lin et al. <|cite_start|> (Reference: QoS-aware adaptive routing in multi-layer hierarchical software defined networks: a reinforcement learning approach: Software-defined networks (SDNs) have been recognized as the next-generation networking paradigm that decouples the data forwarding from the centralized control. To realize the merits of dedicated QoS provisioning and fast route (re-)configuration services over the decoupled SDNs, various QoS requirements in packet delay, loss, and throughput should be supported by an efficient transportation with respect to each specific application. In this paper, a QoS-aware adaptive routing (QAR) is proposed in the designed multi-layer hierarchical SDNs. Specifically, the distributed hierarchical control plane architecture is employed to minimize signaling delay in large SDNs via three-levels design of controllers, i.e., the super, domain (or master), and slave controllers. Furthermore, QAR algorithm is proposed with the aid of reinforcement learning and QoS-aware reward function, achieving a time-efficient, adaptive, QoS-provisioning packet forwarding. 
Simulation results confirm that QAR outperforms the existing learning solution and provides fast convergence with QoS provisioning, facilitating the practical implementations in large-scale software service-defined networks.) <|cite_end|> employ a distributed three-level control plane architecture coupled with a RL-based solution named QoS-aware Adaptive Routing. Xu et al. <|cite_start|> (Reference: Experience-driven Networking: A Deep Reinforcement Learning based Approach: Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop a novel experience-driven approach that can learn to well control a communication network from its own experience rather than an accurate mathematical model, just as a human learns a new skill (such as driving, swimming, etc). Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in communication networks; and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely-used utility function by jointly learning network environment and its dynamics, and making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3, and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely-used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-ofthe-art DRL method (for continuous control), Deep Deterministic Policy Gradient (DDPG), which, however, does not offer satisfying performance.) <|cite_end|> use RL to optimize the throughput and delay in TE. AuTO <|cite_start|> (Reference: AuTO: scaling deep reinforcement learning for datacenter-scale automatic traffic optimization: Traffic optimizations (TO, e.g. flow scheduling, load balancing) in datacenters are difficult online decision-making problems. Previously, they are done with heuristics relying on operators' understanding of the workload and environment. Designing and implementing proper TO algorithms thus take at least weeks. Encouraged by recent successes in applying deep reinforcement learning (DRL) techniques to solve complex online control problems, we study if DRL can be used for automatic TO without human-intervention. However, our experiments show that the latency of current DRL systems cannot handle flow-level TO at the scale of current datacenters, because short flows (which constitute the majority of traffic) are usually gone before decisions can be made. Leveraging the long-tail distribution of datacenter traffic, we develop a two-level DRL system, AuTO, mimicking the Peripheral & Central Nervous Systems in animals, to solve the scalability problem. Peripheral Systems (PS) reside on end-hosts, collect flow information, and make TO decisions locally with minimal delay for short flows. 
PS's decisions are informed by a Central System (CS), where global traffic information is aggregated and processed. CS further makes individual TO decisions for long flows. With CS&PS, AuTO is an end-to-end automatic TO system that can collect network information, learn from past decisions, and perform actions to achieve operator-defined goals. We implement AuTO with popular machine learning frameworks and commodity servers, and deploy it on a 32-server testbed. Compared to existing approaches, AuTO reduces the TO turn-around time from weeks to ~100 milliseconds while achieving superior performance. For example, it demonstrates up to 48.14% reduction in average flow completion time (FCT) over existing solutions.) <|cite_end|> is developed to optimize traffic routing in data center networks with a two-layer RL design: a Peripheral System deployed on end-hosts that routes small flows, and a Central System that collects global traffic information and routes large flows.
However, none of the above works considers mitigating the network disturbance and service disruption caused by rerouting. <|paper_end|>
"<|reference_start|> Traffic engineering with MPLS: From the Publisher: \nOptimize network bandwidth with Traffic Engineering and MPLS \n \nHard to find information on how to use MPLS traffic engineering to optimize network bandwidth, save on network cost, and improve customer satisfactionUnderstand the theoretical underpinnings of the various protocols that comprise traffic engineeringLearn basic and advanced configuration of traffic engineering and related services like QoS and ATM interaction Suggested network designs, configuration examples, and an end-to-end case study provide readers with practical, working solutions readers can implement on their own networks \n \nAs corporations seek to reduce costs, improve efficiencies, gain market share and profit they increasingly are looking to their own information technology systems as a means to this end. Traffic engineering allows engineers to maximize network resources. It resolves the issue of having large amount of traffic on certain portions of the network while other portions go under-utilized. Traffic Engineering with MPLS provides readers with information on how to use MPLS traffic engineering and associated features to optimize network bandwidth. The book covers forwarding fundamentals, traffic engineering theory, protocol descriptions, deployment guidelines, configuration, show commands, and debugs. This book is a one-stop reference for understanding MPLS traffic engineering and implementing it on the network. A comprehensive case study is used to show a complete MPLS traffic engineering deployment. <|reference_end|>",
"<|reference_start|> Load balancing for multiple traffic matrices using sdn hybrid routing: Classical traffic engineering (TE) methods calculate the optimal routing based on a single traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, it is of interest to find a good routing configuration to accommodate multiple possible traffic scenarios. There are two major approaches to achieve load balancing for multiple traffic matrices: destination-based routing and explicit routing. It has been shown that explicit routing performs better than destination-based routing for multiple traffic matrices. However, explicit routing has high complexity and requires large Ternary Content Addressable Memory (TCAM) in the routers. Thus, it is power hungry and unscalable. This paper presents an approach called hybrid routing to achieve load balancing for multiple traffic matrices with low complexity and good scalability. Our basic idea is to complement destination-based routing with a small number of explicit routing forwarding entries to take advantage of both two routing approaches. Hybrid routing greatly reduces the number of forwarding entries compared with pure explicit routing. This has great value for practice in that the scheme requires very small TCAM to implement. Hybrid routing is very suitable for implementation using SDN. A heuristic algorithm is developed to obtain the near-optimal hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of hybrid routing. The results show that hybrid routing achieves near-optimal load balancing compared with pure explicit routing. In particular, hybrid routing saves at least 84.6% TCAM resources in all practical networks used in our evaluation. <|reference_end|>",
"<|reference_start|> OSPF Optimized Multipath (OSPF-OMP): <|reference_end|>",
"<|reference_start|> Experience-driven Networking: A Deep Reinforcement Learning based Approach: Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop a novel experience-driven approach that can learn to well control a communication network from its own experience rather than an accurate mathematical model, just as a human learns a new skill (such as driving, swimming, etc). Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in communication networks; and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely-used utility function by jointly learning network environment and its dynamics, and making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3, and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely-used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-ofthe-art DRL method (for continuous control), Deep Deterministic Policy Gradient (DDPG), which, however, does not offer satisfying performance. <|reference_end|>"
] | [
7,
14,
21,
30
] | {"<|cite_1|>": "ss-734579", "<|multi_cite_2_1|>": "ss-2396569", "<|multi_cite_2_2|>": "ss-888960", "<|multi_cite_2_3|>": "ss-1938497", "<|multi_cite_3_1|>": "ss-1938498", "<|multi_cite_3_2|>": "arxiv-196358", "<|multi_cite_4_1|>": "ss-1938499", "<|multi_cite_4_2|>": "ss-1938500", "<|multi_cite_4_3|>": "ss-967809", "<|multi_cite_4_4|>": "ss-1938501", "<|multi_cite_4_5|>": "ss-1938502", "<|multi_cite_4_6|>": "ss-1938503", "<|cite_5|>": "ss-1938497", "<|multi_cite_6_1|>": "ss-1938497", "<|multi_cite_6_2|>": "ss-1938504", "<|cite_7|>": "ss-1938497", "<|multi_cite_8_1|>": "ss-1938499", "<|multi_cite_8_2|>": "ss-1938500", "<|multi_cite_9_1|>": "ss-967809", "<|multi_cite_9_2|>": "ss-1938501", "<|multi_cite_9_3|>": "ss-1938502", "<|cite_10|>": "ss-1696516", "<|multi_cite_11_1|>": "ss-1677622", "<|multi_cite_11_2|>": "ss-1938505", "<|cite_12|>": "ss-1938497", "<|cite_13|>": "ss-2396569", "<|cite_14|>": "ss-888960", "<|cite_15|>": "ss-1120246", "<|cite_16|>": "ss-768893", "<|cite_17|>": "ss-1450490", "<|cite_18|>": "arxiv-145722", "<|cite_19|>": "ss-1452300"} |
arxiv_id: 2212.04866
Abstract: Deep Learning of Causal Structures in High Dimensions: Recent years have seen rapid progress at the intersection between causality and machine learning. Motivated by scientific applications involving high-dimensional data, in particular in biomedicine, we propose a deep neural architecture for learning causal relationships between variables from a combination of empirical data and prior causal knowledge. We combine convolutional and graph neural networks within a causal risk framework to provide a flexible and scalable approach. Empirical results include linear and nonlinear simulations (where the underlying causal structures are known and can be directly compared against), as well as a real biological example where the models are applied to high-dimensional molecular data and their output compared against entirely unseen validation experiments. These results demonstrate the feasibility of using deep learning approaches to learn causal networks in large-scale problems spanning thousands of variables.
Introduction
Causality remains an important open area in
machine learning, statistics and related fields
\citep[see e.g.][]{Peters2017,Arjovsky2019} and
the task of identifying
causal relationships between variables
is key in many scientific domains
including in particular biomedicine \citep[see e.g.][]{glymour2016causal,Hill2016}.
The rich body of work in learning causal structures includes, among other methods, PC <|cite_start|> (Reference: Causation, Prediction, and Search: The writing is not uniformly polished and is scattered with long, awkward sentences that require some effort to unravel. I wonder if this is the result of infelicitous translation from the original German version (Wellek 1994). There are also numerous small typographical errors. More careful editing could have solved these problems before publication. There are no exercises, and so I would hesitate to use the book as a text (although it should be noted that this is not one of the author’s stated aims). Although Testing Statistical Hypotheses of Equivalence has some weaknesses, it is a useful reference for those interested in the question of equivalence testing, particularly in biological applications.) <|cite_end|>, LiNGAM <|cite_start|> (Reference: A linear non-gaussian acyclic model for causal discovery: In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data generating process to facilitate its identification from purely observational data. Continuing this line of research, we show how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances. The solution relies on the use of the statistical method known as independent component analysis, and does not require any pre-specified time-ordering of the variables. We provide a complete Matlab package for performing this LiNGAM analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the effectiveness of the method using artificially generated data and real-world data.) <|cite_end|>, IDA <|cite_start|> (Reference: Estimating high-dimensional intervention effects from observational data: We assume that we have observational data generated from an unknown underlying directed acyclic graph (DAG) model. A DAG is typically not identifiable from observational data, but it is possible to consistently estimate the equivalence class of a DAG. Moreover, for any given DAG, causal effects can be estimated using intervention calculus. In this paper, we combine these two parts. For each DAG in the estimated equivalence class, we use intervention calculus to estimate the causal effects of the covariates on the response. This yields a collection of estimated causal effects for each covariate. We show that the distinct values in this set can be consistently estimated by an algorithm that uses only local information of the graph. This local approach is computationally fast and feasible in high-dimensional problems. We propose to use summary measures of the set of possible causal effects to determine variable importance. In particular, we use the minimum absolute value of this set, since that is a lower bound on the size of the causal effect. We demonstrate the merits of our methods in a simulation study and on a data set about riboflavin production.) 
<|cite_end|>, GIES <|cite_start|> (Reference: Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs: The investigation of directed acyclic graphs (DAGs) encoding the same Markov property, that is the same conditional independence relations of multivariate observational distributions, has a long tradition; many algorithms exist for model selection and structure learning in Markov equivalence classes. In this paper, we extend the notion of Markov equivalence of DAGs to the case of interventional distributions arising from multiple intervention experiments. We show that under reasonable assumptions on the intervention experiments, interventional Markov equivalence defines a finer partitioning of DAGs than observational Markov equivalence and hence improves the identifiability of causal models. We give a graph theoretic criterion for two DAGs being Markov equivalent under interventions and show that each interventional Markov equivalence class can, analogously to the observational case, be uniquely represented by a chain graph called interventional essential graph (also known as CPDAG in the observational case). These are key insights for deriving a generalization of the Greedy Equivalence Search algorithm aimed at structure learning from interventional data. This new algorithm is evaluated in a simulation study.) <|cite_end|>, RFCI <|cite_start|> (Reference: Learning high-dimensional directed acyclic graphs with latent and selection variables: We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.) <|cite_end|>, ICP <|cite_start|> (Reference: Causal inference by using invariant prediction:
identification and confidence intervals: What is the difference between a prediction that is made with a causal model and that with a non‐causal model? Suppose that we intervene on the predictor variables or change the whole environment. The predictions from a causal model will in general work as well under interventions as for observational data. In contrast, predictions from a non‐causal model can potentially be very wrong if we actively intervene on variables. Here, we propose to exploit this invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g. various interventions) we collect all models that do show invariance in their predictive accuracy across settings and interventions. The causal model will be a member of this set of models with high probability. This approach yields valid confidence intervals for the causal relationships in quite general scenarios. We examine the example of structural equation models in more detail and provide sufficient assumptions under which the set of causal predictors becomes identifiable. We further investigate robustness properties of our approach under model misspecification and discuss possible extensions. The empirical properties are studied for various data sets, including large‐scale gene perturbation experiments.) <|cite_end|> and MRCL <|cite_start|> (Reference: Causal learning via manifold regularization: This paper frames causal structure estimation as a machine learning task. The idea is to treat indicators of causal relationships between variables as ‘labels’ and to exploit available data on the variables of interest to provide features for the labelling task. Background scientific knowledge or any available interventional data provide labels on some causal relationships and the remainder are treated as unlabelled. To illustrate the key ideas, we develop a distance-based approach (based on bivariate histograms) within a manifold regularization framework. We present empirical results on three different biological data sets (including examples where causal effects can be verified by experimental intervention), that together demonstrate the efficacy and general nature of the approach as well as its simplicity from a user’s point of view.) <|cite_end|>.
However, learning causal structures from data
remains
challenging, particularly under conditions -- such as high dimensionality, limited data sizes,
presence of hidden variables etc. -- seen in many real-world problems.
In this paper, we propose a deep architecture for causal learning that is motivated in particular by questions involving high-dimensional biomedical data.
The approach we put forward operates within a paradigm
that views causal questions through the lens of expected loss or risk (see below).
The learners proposed allow for the integration of partial knowledge concerning a subset of causal relationships and then seek to generalize beyond what is initially known to learn relationships between all observed variables.
This corresponds to a common scientific use-case, in which some prior knowledge is available at the outset -- from previous experiments or scientific background knowledge -- but where the aim is to go beyond what is known to learn a model spanning all available variables.
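One generic way to exploit such partial knowledge (an illustration under our own assumptions, not necessarily the exact D\textsuperscript{2}CL objective) is to predict a binary causal indicator for every ordered variable pair but evaluate the training loss only on pairs whose causal status is known a priori, leaving the remaining pairs unlabelled:
\begin{verbatim}
import torch

# Hypothetical tensors for illustration only.
p = 5                                             # number of variables
logits = torch.randn(p, p, requires_grad=True)    # model scores for "i causes j"
labels = torch.zeros(p, p)                        # known indicators (1 = causal)
known = torch.zeros(p, p, dtype=torch.bool)       # mask of pairs with prior knowledge
labels[0, 3] = 1.0; known[0, 3] = True            # known causal relationship
known[2, 1] = True                                # known non-relationship

# Masked binary cross-entropy: only the labelled pairs contribute.
loss = torch.nn.functional.binary_cross_entropy_with_logits(
    logits[known], labels[known])
loss.backward()
\end{verbatim}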
Much of the literature in learning causal structures involves statistical formulations that allow explicit description of the relevant data-generating distributions (including both observational and interventional distributions) and are in that sense ``generative" \citep[see, e.g.,][and references therein]{Heinze2018}. Taking a different approach, a number of recent papers, including <|cite_start|> (Reference: Towards a learning theory of cause-effect inference: The first step towards the deployment of our learning setup is to guarantee the existence of a measure on the space μk(P)⇥L, where μk(P) = {μk(P ) : P 2 P} ✓ Hk is the set of kernel mean embeddings associated with the measures in P . The following lemma provides such guarantee. This allows the analysis within the rest of this Section on μk(P)⇥ L. Lemma 2. Let (Z, ⌧Z) and (L, ⌧L) be two separable topological spaces. Let P be the set of all Borel probability measures on (Z,B(⌧Z)). Let μk(P) = {μk(P ) : P 2 P} ✓ Hk, where μk is the kernel mean embedding (1) associated to some bounded continuous kernel function k : Z ⇥ Z ! R. Then, there exists a measure on μk(P)⇥ L.) <|cite_end|> <|cite_start|> (Reference: Distinguishing cause from effect using observational data: methods and benchmarks: The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X, Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: Additive Noise Methods (ANM) and Information Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs that consists of data for 100 different cause-effect pairs selected from 37 datasets from various domains (e.g., meteorology, biology, medicine, engineering, economy, etc.) and motivate our decisions regarding the "ground truth" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the additive-noise method originally proposed by Hoyer et al. (2009), which obtains an accuracy of 63+-10 % and an AUC of 0.74+-0.05 on the real-world benchmark. As the main theoretical contribution of this work we prove the consistency of that method.) <|cite_end|> <|cite_start|> (Reference: Causal learning via manifold regularization: This paper frames causal structure estimation as a machine learning task. The idea is to treat indicators of causal relationships between variables as ‘labels’ and to exploit available data on the variables of interest to provide features for the labelling task. 
Background scientific knowledge or any available interventional data provide labels on some causal relationships and the remainder are treated as unlabelled. To illustrate the key ideas, we develop a distance-based approach (based on bivariate histograms) within a manifold regularization framework. We present empirical results on three different biological data sets (including examples where causal effects can be verified by experimental intervention), that together demonstrate the efficacy and general nature of the approach as well as its simplicity from a user’s point of view.) <|cite_end|> <|cite_start|> (Reference: Ancestral causal learning in high dimensions with a human genome-wide application: We consider learning ancestral causal relationships in high dimensions. Our approach is driven by a supervised learning perspective, with discrete indicators of causal relationships treated as labels to be learned from available data. We focus on the setting in which some causal (ancestral) relationships are known (via background knowledge or experimental data) and put forward a general approach that scales to large problems. This is motivated by problems in human biology which are characterized by high dimensionality and potentially many latent variables. We present a case study involving interventional data from human cells with total dimension $p \! \sim \! 19{,}000$. Performance is assessed empirically by testing model output against previously unseen interventional data. The proposed approach is highly effective and demonstrably scalable to the human genome-wide setting. We consider sensitivity to background knowledge and find that results are robust to nontrivial perturbations of the input information. We consider also the case, relevant to some applications, where the only prior information available concerns a small number of known ancestral relationships.) <|cite_end|>, have considered learning discrete indicators of causal relationships between variables (without necessarily learning full details of the underlying data-generating models) and this is related to notions of causal expected loss or risk <|cite_start|> (Reference: Evaluation of causal structure learning algorithms via risk estimation: Recent years have seen many advances in methods for causal structure learning from data. The empirical assessment of such methods, however, is much less developed. Motivated by this gap, we pose the following question: how can one assess, in a given problem setting, the practical efficacy of one or more causal structure learning methods? We formalize the problem in a decision-theoretic framework, via a notion of expected loss or risk for the causal setting. We introduce a theoretical notion of causal risk as well as sample quantities that can be computed from data, and study the relationship between the two, both theoretically and through an extensive simulation study. Our results provide an assumptions-light framework for assessing causal structure learning methods that can be applied in a range of practical use-cases.) <|cite_end|>. Such indicators may encode for example, whether, for a pair of variables $A$ and $B$, $A$ has a causal influence on $B$, $B$ on $A$, or neither.
The approach we propose, called ``Deep Discriminative Causal Learning'' (D\textsuperscript{2}CL),
is in the latter vein.
We consider a version of the causal structure learning problem in which the desired output consists of binary indicators of causal relationships between observed variables <|cite_start|> (Reference: Causal learning via manifold regularization: This paper frames causal structure estimation as a machine learning task. The idea is to treat indicators of causal relationships between variables as ‘labels’ and to exploit available data on the variables of interest to provide features for the labelling task. Background scientific knowledge or any available interventional data provide labels on some causal relationships and the remainder are treated as unlabelled. To illustrate the key ideas, we develop a distance-based approach (based on bivariate histograms) within a manifold regularization framework. We present empirical results on three different biological data sets (including examples where causal effects can be verified by experimental intervention), that together demonstrate the efficacy and general nature of the approach as well as its simplicity from a user’s point of view.) <|cite_end|> <|cite_start|> (Reference: Evaluation of causal structure learning algorithms via risk estimation: Recent years have seen many advances in methods for causal structure learning from data. The empirical assessment of such methods, however, is much less developed. Motivated by this gap, we pose the following question: how can one assess, in a given problem setting, the practical efficacy of one or more causal structure learning methods? We formalize the problem in a decision-theoretic framework, via a notion of expected loss or risk for the causal setting. We introduce a theoretical notion of causal risk as well as sample quantities that can be computed from data, and study the relationship between the two, both theoretically and through an extensive simulation study. Our results provide an assumptions-light framework for assessing causal structure learning methods that can be applied in a range of practical use-cases.) <|cite_end|>, which can be represented as a directed graph with nodes corresponding to the variables. Available multivariate data $X$ are transformed to provide inputs to a neural network whose outputs are estimates of the causal indicators.
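To give a concrete sense of the general shape of such a learner, the sketch below featurizes each ordered variable pair with a bivariate histogram of the data and scores it with a small convolutional network. This is an illustration under our own assumptions: the actual D\textsuperscript{2}CL architecture combines convolutional and graph neural networks as described later, and the histogram featurization here is only an assumed stand-in.
\begin{verbatim}
import numpy as np
import torch
import torch.nn as nn

# Schematic sketch: score "i causes j" from a pairwise 2-D histogram feature.
def pair_histogram(x_i, x_j, bins=16):
    h, _, _ = np.histogram2d(x_i, x_j, bins=bins, density=True)
    return torch.tensor(h, dtype=torch.float32)[None, None]   # 1 x 1 x bins x bins

scorer = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 1),
)

X = np.random.randn(500, 3)                        # synthetic data: n samples x p variables
score_01 = scorer(pair_histogram(X[:, 0], X[:, 1]))   # higher = more evidence that 0 -> 1
\end{verbatim}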
As detailed below, D\textsuperscript{2}CL has several differences to classical causal structure learning (e.g.\ based on causal graphical models). First,
the objective is different: rather than giving access to all interventional distributions, D\textsuperscript{2}CL outputs indicators of causal links.
Second, D\textsuperscript{2}CL is highly non-parametric, relying on the learners to detect relevant regularities.
Third, D\textsuperscript{2}CL is demonstrably scalable to large numbers of variables (and is in fact unsuitable for small problems spanning only a few variables, see Discussion).
The assumptions underlying the approach are also different in nature from the kinds of assumptions usually made in causal structure learning and concern higher-level regularities in the data-generating processes, as discussed further below.
The remainder of the paper is organized as follows. We first introduce the D\textsuperscript{2}CL
methodology.
We then present empirical results, on both synthetic, gold-standard problems and on real molecular biological data. In the latter case, model results are systematically checked against entirely unseen interventional experiments. Finally, we discuss open questions and limitations. <|paper_end|> | [
"<|reference_start|> Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs: The investigation of directed acyclic graphs (DAGs) encoding the same Markov property, that is the same conditional independence relations of multivariate observational distributions, has a long tradition; many algorithms exist for model selection and structure learning in Markov equivalence classes. In this paper, we extend the notion of Markov equivalence of DAGs to the case of interventional distributions arising from multiple intervention experiments. We show that under reasonable assumptions on the intervention experiments, interventional Markov equivalence defines a finer partitioning of DAGs than observational Markov equivalence and hence improves the identifiability of causal models. We give a graph theoretic criterion for two DAGs being Markov equivalent under interventions and show that each interventional Markov equivalence class can, analogously to the observational case, be uniquely represented by a chain graph called interventional essential graph (also known as CPDAG in the observational case). These are key insights for deriving a generalization of the Greedy Equivalence Search algorithm aimed at structure learning from interventional data. This new algorithm is evaluated in a simulation study. <|reference_end|>",
"<|reference_start|> Causal learning via manifold regularization: This paper frames causal structure estimation as a machine learning task. The idea is to treat indicators of causal relationships between variables as ‘labels’ and to exploit available data on the variables of interest to provide features for the labelling task. Background scientific knowledge or any available interventional data provide labels on some causal relationships and the remainder are treated as unlabelled. To illustrate the key ideas, we develop a distance-based approach (based on bivariate histograms) within a manifold regularization framework. We present empirical results on three different biological data sets (including examples where causal effects can be verified by experimental intervention), that together demonstrate the efficacy and general nature of the approach as well as its simplicity from a user’s point of view. <|reference_end|>",
"<|reference_start|> Towards a learning theory of cause-effect inference: The first step towards the deployment of our learning setup is to guarantee the existence of a measure on the space μk(P)⇥L, where μk(P) = {μk(P ) : P 2 P} ✓ Hk is the set of kernel mean embeddings associated with the measures in P . The following lemma provides such guarantee. This allows the analysis within the rest of this Section on μk(P)⇥ L. Lemma 2. Let (Z, ⌧Z) and (L, ⌧L) be two separable topological spaces. Let P be the set of all Borel probability measures on (Z,B(⌧Z)). Let μk(P) = {μk(P ) : P 2 P} ✓ Hk, where μk is the kernel mean embedding (1) associated to some bounded continuous kernel function k : Z ⇥ Z ! R. Then, there exists a measure on μk(P)⇥ L. <|reference_end|>",
"<|reference_start|> Evaluation of causal structure learning algorithms via risk estimation: Recent years have seen many advances in methods for causal structure learning from data. The empirical assessment of such methods, however, is much less developed. Motivated by this gap, we pose the following question: how can one assess, in a given problem setting, the practical efficacy of one or more causal structure learning methods? We formalize the problem in a decision-theoretic framework, via a notion of expected loss or risk for the causal setting. We introduce a theoretical notion of causal risk as well as sample quantities that can be computed from data, and study the relationship between the two, both theoretically and through an extensive simulation study. Our results provide an assumptions-light framework for assessing causal structure learning methods that can be applied in a range of practical use-cases. <|reference_end|>"
] | [
3,
6,
7,
11
] | {"<|cite_1|>": "ss-1456284", "<|cite_2|>": "ss-996926", "<|cite_3|>": "ss-1255044", "<|cite_4|>": "arxiv-20739", "<|cite_5|>": "arxiv-21093", "<|cite_6|>": "ss-1349868", "<|cite_7|>": "ss-1564373", "<|multi_cite_10_1|>": "ss-2537410", "<|multi_cite_10_2|>": "arxiv-70047", "<|multi_cite_10_3|>": "ss-1564373", "<|multi_cite_10_4|>": "arxiv-206300", "<|cite_8|>": "ss-2075989", "<|multi_cite_9_1|>": "ss-1564373", "<|multi_cite_9_2|>": "ss-2075989"} |
2409.12099 | <|paper_start|> Title: Brain-Streams: fMRI-to-Image Reconstruction with Multi-modal Guidance
Abstract: Brain-Streams: fMRI-to-Image Reconstruction with Multi-modal Guidance: Understanding how humans process visual information is one of the crucial steps for unraveling the underlying mechanism of brain activity. Recently, this curiosity has motivated the fMRI-to-image reconstruction task; given the fMRI data from visual stimuli, it aims to reconstruct the corresponding visual stimuli. Surprisingly, leveraging powerful generative models such as the Latent Diffusion Model (LDM) has shown promising results in reconstructing complex visual stimuli such as high-resolution natural images from vision datasets. Despite the impressive structural fidelity of these reconstructions, they often lack details of small objects, ambiguous shapes, and semantic nuances. Consequently, the incorporation of additional semantic knowledge, beyond mere visuals, becomes imperative. In light of this, we exploit how modern LDMs effectively incorporate multi-modal guidance (text guidance, visual guidance, and image layout) for structurally and semantically plausible image generations. Specifically, inspired by the two-streams hypothesis suggesting that perceptual and semantic information are processed in different brain regions, our framework, Brain-Streams, maps fMRI signals from these brain regions to appropriate embeddings. That is, by extracting textual guidance from semantic information regions and visual guidance from perceptual information regions, Brain-Streams provides accurate multi-modal guidance to LDMs. We validate the reconstruction ability of Brain-Streams both quantitatively and qualitatively on a real fMRI dataset comprising natural image stimuli and fMRI data.
Introduction
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/Figure1.pdf}
\caption{Comparison of results demonstrating the impact of textual guidance on visual stimuli reconstruction.
In each triplet, the left column displays the original visual stimuli, while the middle and right columns present the reconstructed images without and with textual guidance, respectively. The predicted caption is generated using fMRI data.
Notably, textual guidance enhances the capture of accurate semantic details, such as glasses and the shape of a bird.}
\label{fig:comparison_w_imageonly}
\end{figure}
The human brain's ability to process and interpret visual information is a fundamental aspect of interaction with the world. Efforts to decode this complex process involve research aimed at reconstructing visual stimuli from fMRI data, obtained by exposing subjects to natural images. While simple image reconstructions achieve satisfactory results with basic models, such as linear mapping <|cite_start|> (Reference: Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders: ) <|cite_end|>, reconstructing complex natural images demands precise layout and accurate semantic details, challenging traditional fMRI-to-image mapping methods.
To address these challenges, there has been a shift towards leveraging expressive pretrained generative models, which possess the ability to perceive complex image details.
Starting from early GAN-based models <|cite_start|> (Reference: Mind Reader: Reconstructing complex images from brain activities: Understanding how the brain encodes external stimuli and how these stimuli can be decoded from the measured brain activities are long-standing and challenging questions in neuroscience. In this paper, we focus on reconstructing the complex image stimuli from fMRI (functional magnetic resonance imaging) signals. Unlike previous works that reconstruct images with single objects or simple shapes, our work aims to reconstruct image stimuli that are rich in semantics, closer to everyday scenes, and can reveal more perspectives. However, data scarcity of fMRI datasets is the main obstacle to applying state-of-the-art deep learning models to this problem. We find that incorporating an additional text modality is beneficial for the reconstruction problem compared to directly translating brain signals to images. Therefore, the modalities involved in our method are: (i) voxel-level fMRI signals, (ii) observed images that trigger the brain signals, and (iii) textual description of the images. To further address data scarcity, we leverage an aligned vision-language latent space pre-trained on massive datasets. Instead of training models from scratch to find a latent space shared by the three modalities, we encode fMRI signals into this pre-aligned latent space. Then, conditioned on embeddings in this space, we reconstruct images with a generative model. The reconstructed images from our pipeline balance both naturalness and fidelity: they are photo-realistic and capture the ground truth image contents well.) <|cite_end|> <|cite_start|> (Reference: Decoding natural image stimuli from fMRI data with a surface-based convolutional network: Due to the low signal-to-noise ratio and limited resolution of functional MRI data, and the high complexity of natural images, reconstructing a visual stimulus from human brain fMRI measurements is a challenging task. In this work, we propose a novel approach for this task, which we call Cortex2Image, to decode visual stimuli with high semantic fidelity and rich fine-grained detail. In particular, we train a surface-based convolutional network model that maps from brain response to semantic image features first (Cortex2Semantic). We then combine this model with a high-quality image generator (Instance-Conditioned GAN) to train another mapping from brain response to fine-grained image features using a variational approach (Cortex2Detail). Image reconstructions obtained by our proposed method achieve state-of-the-art semantic fidelity, while yielding good fine-grained similarity with the ground-truth stimulus. Our code is available at: https://github.com/zijin-gu/meshconv-decoding.git.) <|cite_end|> <|cite_start|> (Reference: A Style-Based Generator Architecture for Generative Adversarial Networks: We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. 
To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.) <|cite_end|> <|cite_start|> (Reference: Instance-Conditioned GAN: Generative Adversarial Networks (GANs) can generate near photo realistic images in narrow domains such as human faces. Yet, modeling complex distributions of datasets such as ImageNet and COCO-Stuff remains challenging in unconditional settings. In this paper, we take inspiration from kernel density estimation techniques and introduce a non-parametric approach to modeling distributions of complex datasets. We partition the data manifold into a mixture of overlapping neighborhoods described by a datapoint and its nearest neighbors, and introduce a model, called instance-conditioned GAN (IC-GAN), which learns the distribution around each datapoint. Experimental results on ImageNet and COCO-Stuff show that IC-GAN significantly improves over unconditional models and unsupervised data partitioning baselines. Moreover, we show that IC-GAN can effortlessly transfer to datasets not seen during training by simply changing the conditioning instances, and still generate realistic images. Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet; while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are available at https://github.com/facebookresearch/ic_gan.) <|cite_end|>, more recent studies using the Latent Diffusion Model (LDM) <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|>, known for its training on large datasets and multi-modal capabilities that allow processing of various data types, have proven to be well-suited for handling natural images.
For instance, Takagi et al. <|cite_start|> (Reference: High-Resolution Image Reconstruction with Latent Diffusion Models from Human Brain Activity: Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from human brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs, while preserving their high generative performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the latent vector of image Z, conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that our proposed method can reconstruct high-resolution images with high fidelity in straight-forward fashion, without the need for any additional training and fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of different LDM components from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from human brain activity, and provides a new framework for understanding DMs. Please check out our webpage at https://sites.google.com/view/stablediffusion-with-brain/.) <|cite_end|> reconstructed visual stimuli using Stable Diffusion (SD) <|cite_start|> (Reference: High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .) <|cite_end|> by fitting fMRI data to the elements of SD.
Brain-Diffuser <|cite_start|> (Reference: Brain-Diffuser: Natural scene reconstruction from fMRI signals using generative latent diffusion: In neural decoding research, one of the most intriguing topics is the reconstruction of perceived natural images based on fMRI signals. Previous studies have succeeded in re-creating different aspects of the visuals, such as low-level properties (shape, texture, layout) or high-level features (category of objects, descriptive semantics of scenes) but have typically failed to reconstruct these properties together for complex scene images. Generative AI has recently made a leap forward with latent diffusion models capable of generating high-complexity images. Here, we investigate how to take advantage of this innovative technology for brain decoding. We present a two-stage scene reconstruction framework called “Brain-Diffuser”. In the first stage, starting from fMRI signals, we reconstruct images that capture low-level properties and overall layout using a VDVAE (Very Deep Variational Autoencoder) model. In the second stage, we use the image-to-image framework of a latent diffusion model (Versatile Diffusion) conditioned on predicted multimodal (text and visual) features, to generate final reconstructed images. On the publicly available Natural Scenes Dataset benchmark, our method outperforms previous models both qualitatively and quantitatively. When applied to synthetic fMRI patterns generated from individual ROI (region-of-interest) masks, our trained model creates compelling “ROI-optimal” scenes consistent with neuroscientific knowledge. Thus, the proposed methodology can have an impact on both applied (e.g. brain-computer interface) and fundamental neuroscience.) <|cite_end|> maps fMRI data to VD-VAE <|cite_start|> (Reference: Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images: We present a hierarchical VAE that, for the first time, generates samples quickly while outperforming the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test if insufficient depth explains why by scaling a VAE to greater stochastic depth than previously explored and evaluating it CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at https://github.com/openai/vdvae.) <|cite_end|> to generate an initial low-level layout image, followed by reconstruction with Versatile Diffusion (VD) <|cite_start|> (Reference: Versatile Diffusion: Text, Images and Variations All in One Diffusion Model: Recent advances in diffusion models have set an impressive milestone in many generation tasks, and trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. 
In this work, we expand the existing single-flow diffusion pipeline into a multi-task multimodal network, dubbed Versatile Diffusion (VD), that handles multiple flows of text-to-image, image-to-text, and variations in one unified model. The pipeline design of VD instantiates a unified multi-flow diffusion framework, consisting of sharable and swappable layer modules that enable the crossmodal generality beyond images and text. Through extensive experiments, we demonstrate that VD successfully achieves the following: a) VD outperforms the baseline approaches and handles all its base tasks with competitive quality; b) VD enables novel extensions such as disentanglement of style and semantics, dual- and multi-context blending, etc.; c) The success of our multi-flow multimodal framework over images and text may inspire further diffusion-based universal AI research. Our code and models are open-sourced at https://github.com/SHI-Labs/Versatile-Diffusion.) <|cite_end|> using CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> embeddings.
MindEye <|cite_start|> (Reference: Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors: We present MindEye, a novel fMRI-to-image approach to retrieve and reconstruct viewed images from brain activity. Our model comprises two parallel submodules that are specialized for retrieval (using contrastive learning) and reconstruction (using a diffusion prior). MindEye can map fMRI brain activity to any high dimensional multimodal latent space, like CLIP image space, enabling image reconstruction using generative models that accept embeddings from this latent space. We comprehensively compare our approach with other existing methods, using both qualitative side-by-side comparisons and quantitative evaluations, and show that MindEye achieves state-of-the-art performance in both reconstruction and retrieval tasks. In particular, MindEye can retrieve the exact original image even among highly similar candidates indicating that its brain embeddings retain fine-grained image-specific information. This allows us to accurately retrieve images even from large-scale databases like LAION-5B. We demonstrate through ablations that MindEye's performance improvements over previous methods result from specialized submodules for retrieval and reconstruction, improved training techniques, and training models with orders of magnitude more parameters. Furthermore, we show that MindEye can better preserve low-level image features in the reconstructions by using img2img, with outputs from a separate autoencoder. All code is available on GitHub.) <|cite_end|> utilizes fMRI data and SD to produce low-level images, which are then conditioned by image embeddings generated from a diffusion prior, culminating in the final reconstruction with the img2img <|cite_start|> (Reference: SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing.) <|cite_end|> technique through VD.
Nonetheless, while the above approaches yield promising results, they fall short in accurately capturing semantic details, essential for the precise identification and understanding of specific objects within visual stimuli.
This issue can be observed in Fig.~\ref{fig:comparison_w_imageonly}.
In the center image of each triplet, crucial semantic details are absent (e.g., missing glasses in the bottom right image).
However, the accurate restoration of semantic details is enabled in the rightmost image of each triplet, where precise textual guidance is provided through predicted captions.
Therefore, to supplement missing semantic details in reconstructed images, our strategy involves providing multi-modal guidance, including precise textual guidance, to the LDM.
Building on this approach, we employ a method that offers the LDM three levels of multi-modal guidance: high-, mid-, and low-level.
High-level guidance introduces accurate semantic details (e.g., the class of entities and the presence of objects) by predicting captions, while the low-level guidance focuses on the basic image layout.
The mid-level guidance incorporates both the rough semantic information and the perceptual features.
A key part of our pipeline is providing precise textual guidance, which correctly includes semantic information about visual stimuli.
To achieve this, our model generates captions using fMRI data and then refines these generated captions through a large language model (LLM). This process of creating precise textual guidance, combined with the two levels of multi-modal guidance ensure that VD receives perceptual and semantic details for visual stimuli reconstruction.
To derive these three levels of guidance from fMRI, we are inspired by \textit{the two-streams hypothesis} <|cite_start|> (Reference: Two visual systems: Brain mechanisms for localization and discrimination are dissociated by tectal a: hours. Furthermore, any treatment may adversely affect the delicate metabolic balance of the newborn, whereas fetal rats may tolerate artificial interference better, since their metabolism is buf-fered by that of the mother. The foregoing results (Fig. 5) demonstrate the possibility of producing newborns with a "precocious" enzyme pattern. Injection of fetuses with a combination of appropriate hormones that may extensively enhance biochemical differentiation could be looked upon as a way to shorten the necessary period of gestation. Such enhancement by the prenatally initiated formation of enzymes necessary for important liver functions may be of particular benefit to prematurely born animals. Summary The course of enzymic differentiation in liver can be altered in a positive, biologically meaningful direction by the administration of glucagon, epinephrine, and thyroxine to fetal rats in utero. The premature accumulations of specific enzymes occur within hours after such administration, are inhibited by actino-mycin, and provide a suitable system for studying the mechanism of gene expression. Glucagon and epinephrine are probably the natural stimuli for the formation of enzymes that accumulate precipitously during the hours immediately following birth. Their action may be mediated through cyclic AMP; dibutyryl cyclic AMP can evoke the appearance of tyrosine aminotransferase in fetal livers too young to respond to glucagon. Thyroxine is important in promoting aspects of enzymic differentiation that occur during late fetal life. Rats injected prenatally with thy-roxine were born with precociously elevated levels of liver enzymes. Such artificial stimulation of the course of enzyme differentiation during the fetal stage may facilitate the metabolic adjustment of newborn or prematurely born animals to extrauterine existence. Brain mechanisms for localization and discrimination are dissociated by tectal and cortical lesions. The term vision subsumes a complex variety of processes, thus, for fruitful scientific discussion, a reference to "vision" usually requires further specification. Likewise the term blindness is not self-defining. An animal or patient showing what appears to be total blindness under one set of conditions may reveal considerable visual capacity in a different situation. Such phenomena have led to discrepant conclusions in the literature on the neurological bases of vision, particularly on visual defects following various types of brain damage. The discrepancies have often been resolved through careful attention to stimulus conditions: variations in level of illumination, movement of stimuli, and type of pattern have led to the definition of particular types of partial blindness. However, the nature of the response has received less …) <|cite_end|> <|cite_start|> (Reference: Separate visual pathways for perception and action: ) <|cite_end|> <|cite_start|> (Reference: Two visual systems in the frog: After unilateral removal of the optic tectum in frogs, the cut optic tract regenerates to the remaining ipsilateral tectum. Although the orienting movementselicited by moving objects (food or threats) are now directed mirror-symmetrically to normal responses, these frogs correctly localize stationary objects as barriers. 
Apparently, thalamic and tectal visual mechanisms can operate independently.) <|cite_end|> <|cite_start|> (Reference: Two mechanisms of vision in primates: ) <|cite_end|>, which suggests that these levels of guidance can be individually extracted from specific brain regions.
\textbf{First}, the \texttt{ventral} visual cortex processes semantic information, such as the existence of objects and their classes.
\textbf{Second}, the \texttt{early} visual cortex, contains perceptual information related to the overall image, which is associated with the low-level aspects of the image.
\textbf{Third}, the \texttt{nsdgeneral} region partially covers both the \texttt{ventral} and \texttt{early} visual cortex, and thus contains comprehensive semantic and visual information.
Utilizing a brain region-specific approach, we efficiently extract semantic and perceptual information from fMRI, achieving highly accurate visual stimuli reconstruction.
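For concreteness, the region-wise routing described above can be sketched as follows (the linear mappers, variable names and call signatures are placeholder assumptions, not the actual Brain-Streams or Versatile Diffusion interfaces):
\begin{verbatim}
import numpy as np

class LinearMapper:
    # Hypothetical map from ROI voxel responses to a target embedding,
    # e.g. fitted by ridge regression on (fMRI, embedding) training pairs.
    def __init__(self, W, b):
        self.W, self.b = W, b

    def __call__(self, voxels):
        return voxels @ self.W + self.b

def brain_streams_guidance(fmri, rois, mappers):
    # fmri: 1-D array of voxel responses; rois: boolean masks per region.
    high = mappers["ventral"](fmri[rois["ventral"]])        # textual guidance
    mid  = mappers["nsdgeneral"](fmri[rois["nsdgeneral"]])  # image-embedding guidance
    low  = mappers["early"](fmri[rois["early"]])            # coarse layout latent
    return high, mid, low
\end{verbatim}
The three outputs would then be passed to the text, image and latent conditioning inputs of the diffusion model, respectively, with the textual guidance further decoded into a caption and refined before use.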
\noindent
\textbf{Our contributions:}
\textbf{(1)} We propose a new fMRI-to-image reconstruction, Brain-Streams, that extracts three levels of guidance (high, mid, and low) from specific regions of the brain to offer multi-modal guidance to VD.
\textbf{(2)} We have made it possible to reconstruct not only the visual stimuli but also the corresponding captions, refined by an LLM, providing detailed semantic information to VD.
\textbf{(3)} By employing the above method, we achieved state-of-the-art (SOTA) performance on the NSD dataset <|cite_start|> (Reference: A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence: ) <|cite_end|> for the visual stimuli reconstruction. <|paper_end|> | [
"<|reference_start|> High-Resolution Image Synthesis with Latent Diffusion Models: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion . <|reference_end|>",
"<|reference_start|> Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images: We present a hierarchical VAE that, for the first time, generates samples quickly while outperforming the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test if insufficient depth explains why by scaling a VAE to greater stochastic depth than previously explored and evaluating it CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at https://github.com/openai/vdvae. <|reference_end|>",
"<|reference_start|> SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. <|reference_end|>",
"<|reference_start|> Separate visual pathways for perception and action: <|reference_end|>"
] | [
7,
9,
13,
15
] | {"<|cite_1|>": "ss-742390", "<|multi_cite_2_1|>": "arxiv-451036", "<|multi_cite_2_2|>": "arxiv-467481", "<|multi_cite_2_3|>": "arxiv-184253", "<|multi_cite_2_4|>": "arxiv-366165", "<|cite_3|>": "arxiv-388766", "<|cite_4|>": "ss-1341814", "<|cite_5|>": "arxiv-388766", "<|cite_6|>": "ss-2435424", "<|cite_7|>": "arxiv-304936", "<|cite_8|>": "arxiv-462345", "<|cite_9|>": "arxiv-323919", "<|cite_10|>": "ss-828741", "<|cite_11|>": "arxiv-358665", "<|multi_cite_12_1|>": "ss-2033790", "<|multi_cite_12_2|>": "ss-1319327", "<|multi_cite_12_3|>": "ss-890160", "<|multi_cite_12_4|>": "ss-2262583", "<|cite_13|>": "ss-1169131"} |
2405.04287 | <|paper_start|> Title: Asymmetry of Frequency Distribution in Power Systems: Sources, Impact and Control
Abstract: Asymmetry of Frequency Distribution in Power Systems: Sources, Impact and Control: This letter analyses the sources of asymmetry of frequency probability distributions (PDs) and their impact on the dynamic behaviour of power systems. The letter also discusses how secondary control can reduce this asymmetry. We also propose an asymmetry index based on the difference between the left and right-hand side standard deviations of the frequency PDs. The IEEE 9-bus system and real-world data obtained from the Irish transmission system serve to show that losses, saturations and wind generation lead to asymmetric PDs. A relevant result is that the droop-based frequency support provided by wind generation using a tight deadband of 15 mHz leads to a significant increase in the asymmetry of the frequency PDs.
Introduction
\label{sec:intro}
The topic of frequency distribution in power systems and the various sources and parameters that influence that has recently received a lot of attention in the literature, in particular, in light of the integration of uncertain and variable renewable energy sources such as wind and solar generation <|cite_start|> (Reference: Towards realistic statistical models of the grid frequency: Increased share of renewable sources of energy in a power grid leads to larger deviations in grid frequency from the nominal value resulting in more challenging control and its modelling. In this paper we focus on the grid frequency for the power system of Great Britain because the large share of renewables makes it a template for other power grids in the future and because it exhibits peculiar statistical properties, such as long-term correlations in fluctuations, periodicity, bi-modality,and heavy tails in the distribution of the grid frequency. By modifications of the swing equation and the underlying noise statistics, which we justify qualitatively and quantitatively, we reproduce these peculiar statistical properties. We apply our model to realistic frequency response services and show our predictions outperform a standard swing equation model.) <|cite_end|> <|cite_start|> (Reference: Effects of inertia, load damping and dead-bands on frequency histograms and frequency control of power systems: ) <|cite_end|> <|cite_start|> (Reference: Data-driven model of the power-grid frequency dynamics: The energy system is rapidly changing to accommodate the increasing number of renewable generators and the general transition towards a more sustainable future. Simultaneously, business models and market designs evolve, affecting power-grid operation and power-grid frequency. Problems raised by this ongoing transition are increasingly addressed by transdisciplinary research approaches, ranging from purely mathematical modelling to applied case studies. These approaches require a stochastic description of consumer behaviour, fluctuations by renewables, market rules, and how they influence the stability of the power-grid frequency. Here, we introduce an easy-to-use, data-driven, stochastic model for the power-grid frequency and demonstrate how it reproduces key characteristics of the observed statistics of the Continental European and British power grids. Using data analysis tools and a Fokker–Planck approach, we estimate parameters of our deterministic and stochastic model. We offer executable code and guidelines on how to use the model on any power grid for various mathematical or engineering applications.) <|cite_end|> <|cite_start|> (Reference: Deadbands, droop, and inertia impact on power system frequency distribution: Power system inertia is falling as more energy is supplied by renewable generators, and there are concerns about the frequency controls required to guarantee satisfactory system performance. The majority of research into the negative effect of low inertia has focused on poor dynamic response following major disturbances, when the transient frequency dip can become unacceptable. However, another important practical concern—keeping average frequency deviations within acceptable limits—was mainly out of the sight of the research community. In this manuscript, we present a method for finding the frequency probability density function (PDF) for a given power system. 
We pass from an initial stochastic dynamic model to deterministic equations for the frequency PDF, which are analyzed to uncover key system parameters influencing frequency deviations. We show that system inertia has little effect on the frequency PDF, making virtual inertia services insufficient for keeping frequency close to nominal under ambient load fluctuations. We establish that aggregate system droop and deadband width are the only parameters that have major influence on the average frequency deviations, suggesting that energy storage might be an excellent solution for tight frequency regulation. We also show that changing the governor deadband width does not significantly affect generator movement.) <|cite_end|>.
The main focus of these works is the modelling, study and reproduction of frequency distributions seen in real grids, such as the bi-modal distribution. However, the effect of losses, saturation and renewable sources providing dynamic frequency regulation has not been considered so far. This work fills this gap and provides the following contributions.
\begin{itemize}
\item A study of the sources of asymmetry in power systems, such as losses, saturation, and wind generation providing Primary Frequency Control (PFC) and Active Power Control (APC). The latter is a PFC with a tight (15 mHz) deadband.
\item A metric to quantify the level of asymmetry in power systems. This metric is the difference between the left and right standard deviations of the Probability Density (PD) of the system frequency; a possible formalization is sketched after this list.
\item Show through dynamic stochastic simulations and real-world data obtained from the Irish transmission grid that wind generation is a source of asymmetry and that this asymmetry can be reduced with Automatic Generation Control (AGC).
\end{itemize} <|paper_end|> | [
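A possible formalization of this index (our notation; whether the split point is the sample mean or the nominal frequency is left as an assumption here) is, for frequency samples $f_1,\dots,f_N$ with mean $\bar{f}$:
\[
\sigma_{\mathrm{L}} = \sqrt{\frac{1}{N_{\mathrm{L}}}\sum_{f_i<\bar{f}}\left(f_i-\bar{f}\right)^2}, \qquad
\sigma_{\mathrm{R}} = \sqrt{\frac{1}{N_{\mathrm{R}}}\sum_{f_i>\bar{f}}\left(f_i-\bar{f}\right)^2}, \qquad
\xi = \sigma_{\mathrm{L}}-\sigma_{\mathrm{R}},
\]
where $N_{\mathrm{L}}$ and $N_{\mathrm{R}}$ count the samples below and above $\bar{f}$; $\xi\approx 0$ for a symmetric probability density and $|\xi|$ grows with the asymmetry.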
"<|reference_start|> Towards realistic statistical models of the grid frequency: Increased share of renewable sources of energy in a power grid leads to larger deviations in grid frequency from the nominal value resulting in more challenging control and its modelling. In this paper we focus on the grid frequency for the power system of Great Britain because the large share of renewables makes it a template for other power grids in the future and because it exhibits peculiar statistical properties, such as long-term correlations in fluctuations, periodicity, bi-modality,and heavy tails in the distribution of the grid frequency. By modifications of the swing equation and the underlying noise statistics, which we justify qualitatively and quantitatively, we reproduce these peculiar statistical properties. We apply our model to realistic frequency response services and show our predictions outperform a standard swing equation model. <|reference_end|>",
"<|reference_start|> Effects of inertia, load damping and dead-bands on frequency histograms and frequency control of power systems: <|reference_end|>",
"<|reference_start|> Data-driven model of the power-grid frequency dynamics: The energy system is rapidly changing to accommodate the increasing number of renewable generators and the general transition towards a more sustainable future. Simultaneously, business models and market designs evolve, affecting power-grid operation and power-grid frequency. Problems raised by this ongoing transition are increasingly addressed by transdisciplinary research approaches, ranging from purely mathematical modelling to applied case studies. These approaches require a stochastic description of consumer behaviour, fluctuations by renewables, market rules, and how they influence the stability of the power-grid frequency. Here, we introduce an easy-to-use, data-driven, stochastic model for the power-grid frequency and demonstrate how it reproduces key characteristics of the observed statistics of the Continental European and British power grids. Using data analysis tools and a Fokker–Planck approach, we estimate parameters of our deterministic and stochastic model. We offer executable code and guidelines on how to use the model on any power grid for various mathematical or engineering applications. <|reference_end|>",
"<|reference_start|> Deadbands, droop, and inertia impact on power system frequency distribution: Power system inertia is falling as more energy is supplied by renewable generators, and there are concerns about the frequency controls required to guarantee satisfactory system performance. The majority of research into the negative effect of low inertia has focused on poor dynamic response following major disturbances, when the transient frequency dip can become unacceptable. However, another important practical concern—keeping average frequency deviations within acceptable limits—was mainly out of the sight of the research community. In this manuscript, we present a method for finding the frequency probability density function (PDF) for a given power system. We pass from an initial stochastic dynamic model to deterministic equations for the frequency PDF, which are analyzed to uncover key system parameters influencing frequency deviations. We show that system inertia has little effect on the frequency PDF, making virtual inertia services insufficient for keeping frequency close to nominal under ambient load fluctuations. We establish that aggregate system droop and deadband width are the only parameters that have major influence on the average frequency deviations, suggesting that energy storage might be an excellent solution for tight frequency regulation. We also show that changing the governor deadband width does not significantly affect generator movement. <|reference_end|>"
] | [
0,
1,
2,
3
] | {"<|multi_cite_1_1|>": "arxiv-335607", "<|multi_cite_1_2|>": "ss-2591267", "<|multi_cite_1_3|>": "ss-2591268", "<|multi_cite_1_4|>": "ss-2491110"} |
2101.08698 | <|paper_start|> Title: Validating Label Consistency in NER Data Annotation
Abstract: Validating Label Consistency in NER Data Annotation: Data annotation plays a crucial role in ensuring that named entity recognition (NER) models are trained with the right information to learn from. Producing the most accurate labels is a challenge due to the complexity involved with annotation. Label inconsistency between multiple subsets of data annotation (e.g., training set and test set, or multiple training subsets) is an indicator of label mistakes. In this work, we present an empirical method to explore the relationship between label (in-)consistency and NER model performance. It can be used to validate the label consistency (or catch the inconsistency) in multiple sets of NER data annotation. In experiments, our method identified the label inconsistency of test data in the SCIERC and CoNLL03 datasets (with 26.7% and 5.4% label mistakes). It validated the consistency in the corrected version of both datasets.
Introduction
\label{sec:introduction}
\begin{table*}[t]
\centering
\caption{Three examples to compare original and corrected annotation in the test set of the SCIERC dataset. If the annotation on the test set consistently followed the ``codebook'' that was used to annotate the training data, the entities in the first two examples would certainly be labelled as ``Task'' (not ``Method'').}
\label{tab:stoa-result}
\scalebox{0.82}{
\linespread{1.08}
\begin{tabular}{p{9.2cm}|p{9.3cm}}
\toprule
\centering{\textbf{Original Examples}} & \multicolumn{1}{c}{\textbf{Corrected Examples}} \\ \hline
\normalsize{Starting from a DP-based solution to the \textbf{\textred{[traveling salesman problem]}}}\small{\textbf{\textgreen{Method}}}\normalsize{, we present a novel technique ...} & \normalsize{Starting from a DP-based solution to the \textred{\textbf{[traveling salesman problem]}}}\small{\textbf{\textblue{Task}}}\normalsize{, we present a novel technique ...} \\ \hline
\normalsize{FERRET utilizes a novel approach to \textbf{\textred{[Q/A]}}}\small{\textbf{\textgreen{Method}}} \normalsize{known as predictive questioning which attempts to identify ...} & \normalsize{FERRET utilizes a novel approach to \textbf{\textred{[Q/A]}}}\small{\textbf{\textblue{Task}}} \normalsize{known as predictive questioning which attempts to identify ...} \\ \hline
\normalsize{The goal of this work is the enrichment of \textbf{\textred{[human-machine interactions]}}}\small{\textbf{\textblue{Task}}} \normalsize{in a natural language environment.} & \normalsize{The goal of this work is the \textbf{\textred{[enrichment of human-machine interactions]}}}\small{\textbf{\textblue{Task}}} \normalsize{in a natural language environment.} \\
\bottomrule
\end{tabular}}
\label{tab:case-study}
\end{table*}
\begin{figure*}[t]
\centering
{\includegraphics[width=\textwidth]{figure/id_test_mistake.pdf}}
\vspace{-0.3in}
\caption{\emph{Identifying label inconsistency of test set with training set:} We sample \emph{three} exclusive subsets (of size $x$) from the training set (\textorange{orange}, \textgreen{green}, and \textblue{blue}). We use one subset as the \emph{new} test set (\textorange{orange}). We apply the \textsc{SCIIE} NER model on the new test set. We build three \emph{new} training sets: \emph{i)} ``TrainTest'' (\textblue{blue}-\textred{red}), \emph{ii)} ``PureTrain'' (\textgreen{green}-\textblue{blue}), \emph{iii)} ``TestTrain'' (\textred{red}-\textblue{blue}). Results on SCIERC show that the test set (\textred{red}) is \textit{less predictive} of training samples (\textorange{orange}) than the training set itself (\textblue{blue} or \textgreen{green}). This was not observed on two other datasets.}
\label{fig:id_test_mistake}
\end{figure*}
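As a rough sketch of the protocol in Fig.~\ref{fig:id_test_mistake} (the helper \texttt{train\_and\_eval} is a stand-in for training the NER model, e.g.\ \textsc{SCIIE}, and returning its F1 on a held-out set; the sampling details are illustrative assumptions):
\begin{verbatim}
import random

def consistency_check(train_sents, test_sents, x, train_and_eval, seed=0):
    # Sample three exclusive subsets of size x from the original training
    # set: "orange" serves as the new test set, "green"/"blue" as training
    # material; "red" is drawn from the original test set.
    rng = random.Random(seed)
    sample = rng.sample(train_sents, 3 * x)
    orange, green, blue = sample[:x], sample[x:2 * x], sample[2 * x:]
    red = rng.sample(test_sents, min(x, len(test_sents)))
    scores = {
        "TrainTest": train_and_eval(blue + red, orange),
        "PureTrain": train_and_eval(green + blue, orange),
        "TestTrain": train_and_eval(red + blue, orange),
    }
    # Comparable scores across the three runs suggest consistent labels;
    # a clearly lower score whenever "red" is involved flags inconsistency.
    return scores
\end{verbatim}
In this view, the original test set is simply treated as one more annotated subset whose predictiveness can be compared against same-size training subsets.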
Named entity recognition (NER) is one of the foundations of many downstream tasks such as relation extraction, event detection, and knowledge graph construction. NER models require vast amounts of labeled data to learn and identify patterns that humans cannot continuously. It is really about getting accurate data to train the models. When end-to-end neural models achieve excellent performance on NER in various domains <|cite_start|> (Reference: Neural Architectures for Named Entity Recognition: State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.) <|cite_end|> <|cite_start|> (Reference: Empower Sequence Labeling with Task-Aware Neural Language Model: Linguistic sequence labeling is a general modeling approach that encompasses a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Comparing to previous methods, these task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Different from most transfer learning methods, the proposed framework does not rely on any additional supervision. It extracts knowledge from self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching F1 score of 91.71$\pm$0.10 without using any extra annotation.) <|cite_end|> <|cite_start|> (Reference: Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction: We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called Scientific Information Extractor (SciIE) for with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. 
Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.) <|cite_end|> <|cite_start|> (Reference: Tri-Train: Automatic Pre-Fine Tuning between Pre-Training and Fine-Tuning for SciNER: The training process of scientific NER models is commonly performed in two steps: i) Pre-training a language model by self-supervised tasks on huge data and ii) fine-tune training with small labelled data. The success of the strategy depends on the relevance between the data domains and between the tasks. However, gaps are found in practice when the target domains are specific and small. We propose a novel framework to introduce a “pre-fine tuning” step between pre-training and fine-tuning. It constructs a corpus by selecting sentences from unlabeled documents that are the most relevant with the labelled training data. Instead of predicting tokens in random spans, the pre-fine tuning task is to predict tokens in entity candidates identified by text mining methods. Pre-fine tuning is automatic and light-weight because the corpus size can be much smaller than pre-training data to achieve a better performance. Experiments on seven benchmarks demonstrate the effectiveness.) <|cite_end|> <|cite_start|> (Reference: Enhancing Taxonomy Completion with Concept Generation via Fusing Relational Representations: Automatic construction of a taxonomy supports many applications in e-commerce, web search, and question answering. Existing taxonomy expansion or completion methods assume that new concepts have been accurately extracted and their embedding vectors learned from the text corpus. However, one critical and fundamental challenge in fixing the incompleteness of taxonomies is the incompleteness of the extracted concepts, especially for those whose names have multiple words and consequently low frequency in the corpus. To resolve the limitations of extraction-based methods, we propose GenTaxo to enhance taxonomy completion by identifying positions in existing taxonomies that need new concepts and then generating appropriate concept names. Instead of relying on the corpus for concept embeddings, GenTaxo learns the contextual embeddings from their surrounding graph-based and language-based relational information, and leverages the corpus for pre-training a concept name generator. Experimental results demonstrate that GenTaxo improves the completeness of taxonomies over existing methods.) <|cite_end|>, building useful and challenging NER benchmarks, such as CoNLL03, WNUT16, and SCIERC, contributes significantly to the research community.
Data annotation plays a crucial role in building benchmarks and ensuring NLP models are trained with the correct information to learn from <|cite_start|> (Reference: Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction: We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called Scientific Information Extractor (SciIE) for with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.) <|cite_end|> <|cite_start|> (Reference: Biomedical knowledge graphs construction from conditional statements: Conditions play an essential role in biomedical statements. However, existing biomedical knowledge graphs (BioKGs) only focus on factual knowledge, organized as a flat relational network of biomedical concepts. These BioKGs ignore the conditions of the facts being valid, which loses essential contexts for knowledge exploration and inference. We consider both facts and their conditions in biomedical statements and proposed a three-layered information-lossless representation of BioKG. The first layer has biomedical concept nodes, attribute nodes. The second layer represents both biomedical fact and condition tuples by nodes of the relation phrases, connecting to the subject and object in the first layer. The third layer has nodes of statements connecting to a set of fact tuples and/or condition tuples in the second layer. We transform the BioKG construction problem into a sequence labeling problem based on a novel designed tag schema. We design a Multi-Input Multi-Output sequence labeling model (MIMO) that learns from multiple input signals and generates proper number of multiple output sequences for tuple extraction. Experiments on a newly constructed dataset show that MIMO outperforms the existing methods. Further case study demonstrates that the BioKGs constructed provide a good understanding of the biomedical statements.) <|cite_end|> <|cite_start|> (Reference: Identifying referential intention with heterogeneous contexts: Citing, quoting, and forwarding & commenting behaviors are widely seen in academia, news media, and social media. Existing behavior modeling approaches focused on mining content and describing preferences of authors, speakers, and users. However, behavioral intention plays an important role in generating content on the platforms. In this work, we propose to identify the referential intention which motivates the action of using the referred (e.g., cited, quoted, and retweeted) source and content to support their claims. We adopt a theory in sociology to develop a schema of four types of intentions. 
The challenge lies in the heterogeneity of observed contextual information surrounding the referential behavior, such as referred content (e.g., a cited paper), local context (e.g., the sentence citing the paper), neighboring context (e.g., the former and latter sentences), and network context (e.g., the academic network of authors, affiliations, and keywords). We propose a new neural framework with Interactive Hierarchical Attention (IHA) to identify the intention of referential behavior by properly aggregating the heterogeneous contexts. Experiments demonstrate that the proposed method can effectively identify the type of intention of citing behaviors (on academic data) and retweeting behaviors (on Twitter). And learning the heterogeneous contexts collectively can improve the performance. This work opens a door for understanding content generation from a fundamental perspective of behavior sciences.) <|cite_end|>. Producing the necessary annotation from any asset at scale is a challenge, mainly because of the complexity involved with annotation. Getting the most accurate labels demands time and expertise.
Label mistakes can hardly be avoided, especially when the labeling process splits the data into multiple sets for distributed annotation. The mistakes cause label inconsistency between subsets of annotated data (e.g., training set and test set, or multiple training subsets). For example, in the CoNLL03 dataset <|cite_start|> (Reference: Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition: We describe the CoNLL-2003 shared task: language-independent named entity recognition. We give background information on the data sets (English and German) and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.) <|cite_end|>, a standard NER benchmark that has been cited over 2,300 times, label mistakes were found in 5.38\% of the test set <|cite_start|> (Reference: CrossWeigh: Training Named Entity Tagger from Imperfect Annotations: Everyone makes mistakes. So do human annotators when curating labels for named entity recognition (NER). Such label mistakes might hurt model training and interfere model comparison. In this study, we dive deep into one of the widely-adopted NER benchmark datasets, CoNLL03 NER. We are able to identify label mistakes in about 5.38% test sentences, which is a significant ratio considering that the state-of-the-art test F1 score is already around 93%. Therefore, we manually correct these label mistakes and form a cleaner test set. Our re-evaluation of popular models on this corrected test set leads to more accurate assessments, compared to those on the original test set. More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. Specifically, it partitions the training data into several folds and train independent NER models to identify potential mistakes in each fold. Then it adjusts the weights of training data accordingly to train the final NER model. Extensive experiments demonstrate significant improvements of plugging various NER models into our proposed framework on three datasets. All implementations and corrected test set are available at our Github repo: https://github.com/ZihanWangKi/CrossWeigh.) <|cite_end|>. Note that the state-of-the-art results on CoNLL03 have achieved an F1 score of $\sim.93$. So even though the label mistakes affect only a small fraction of the data, they are not negligible when researchers are trying to improve the results further. In the work of Wang \emph{et al.}, five annotators were recruited to correct the label mistakes. Compared to the original test set results, the corrected test set results are more accurate and stable.
However, two critical issues were not resolved in this process: \emph{i)} How to identify label inconsistency between the subsets of annotated data? \emph{ii)} How to validate that the label consistency was recovered by the correction?
Another example is SCIERC <|cite_start|> (Reference: Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction: We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called Scientific Information Extractor (SciIE) for with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.) <|cite_end|> (cited $\sim$50 times), which is a multi-task (including NER) benchmark in the AI domain. It has 1,861 sentences for training, 455 for dev, and 551 for test. When we looked at the false predictions given by \textsc{SCIIE}, the multi-task model released along with the SCIERC dataset, we found that as many as 147 sentences (26.7\% of the test set) were not properly annotated. (We also recruited five annotators and counted a mistake only when all the annotators reported it.) Three examples are given in Table~\ref{tab:case-study}: two of them have wrong entity types; the third has a wrong span boundary. As shown in the experiments section, the NER performance becomes more accurate and stable after the correction.
Besides the significant correction on the SCIERC dataset, our contributions in this work are as follows: \emph{i)} an empirical, visual method to identify the label inconsistency between subsets of annotated data (see Figure~\ref{fig:id_test_mistake}), \emph{ii)} a method to validate the label consistency of corrected data annotation (see Figure~\ref{fig:val_test_correct}). Experiments show that they are effective on the CoNLL03 and SCIERC datasets.
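To make the identification method concrete, the protocol of Figure~\ref{fig:id_test_mistake} can be sketched as follows. This is our own minimal illustration rather than released code; \texttt{train\_ner} and \texttt{evaluate} are placeholders for any NER training routine (e.g., retraining \textsc{SCIIE}) and span-level F1 evaluation.
\begin{verbatim}
import random

def label_consistency_probe(train_sents, test_sents, x, train_ner, evaluate):
    """Hold out one training subset as a new test set and check whether
    the original test set is as predictive of it as training data is."""
    pool = list(train_sents)
    random.shuffle(pool)
    orange, green, blue = pool[:x], pool[x:2 * x], pool[2 * x:3 * x]
    red = random.sample(list(test_sents), x)  # original test sentences

    new_train_sets = {
        "TrainTest": blue + red,    # training subset + original test data
        "PureTrain": green + blue,  # training data only
        "TestTrain": red + blue,    # original test data + training subset
    }
    scores = {}
    for name, data in new_train_sets.items():
        model = train_ner(data)
        scores[name] = evaluate(model, orange)  # F1 on held-out train subset
    # Markedly lower scores for the sets containing `red` suggest that the
    # test-set labels are inconsistent with the training-set labels.
    return scores
\end{verbatim}
A clear gap between ``PureTrain'' and the two sets containing original test sentences can be read as a sign of train/test label inconsistency, while comparable scores after correction indicate that consistency has been recovered.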
Related Work
\label{sec:related_work}
NER is typically cast as a sequence labeling problem and solved by models that integrate LSTMs, CRFs, and language models <|cite_start|> (Reference: Neural Architectures for Named Entity Recognition: State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.) <|cite_end|> <|cite_start|> (Reference: Empower Sequence Labeling with Task-Aware Neural Language Model: Linguistic sequence labeling is a general modeling approach that encompasses a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Comparing to previous methods, these task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Different from most transfer learning methods, the proposed framework does not rely on any additional supervision. It extracts knowledge from self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching F1 score of 91.71$\pm$0.10 without using any extra annotation.) <|cite_end|> <|cite_start|> (Reference: Faceted hierarchy: A new graph type to organize scientific concepts and a construction method: On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods heavily rely on hypernym detection, however, the faceted relations are parent-to-child links but the hypernym relation is a multi-hop, i.e., ancestor-to-descendent link with a specific facet “type-of”. We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendent relations from a data science corpus.
And we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.) <|cite_end|> <|cite_start|> (Reference: Tri-Train: Automatic Pre-Fine Tuning between Pre-Training and Fine-Tuning for SciNER: The training process of scientific NER models is commonly performed in two steps: i) Pre-training a language model by self-supervised tasks on huge data and ii) fine-tune training with small labelled data. The success of the strategy depends on the relevance between the data domains and between the tasks. However, gaps are found in practice when the target domains are specific and small. We propose a novel framework to introduce a “pre-fine tuning” step between pre-training and fine-tuning. It constructs a corpus by selecting sentences from unlabeled documents that are the most relevant with the labelled training data. Instead of predicting tokens in random spans, the pre-fine tuning task is to predict tokens in entity candidates identified by text mining methods. Pre-fine tuning is automatic and light-weight because the corpus size can be much smaller than pre-training data to achieve a better performance. Experiments on seven benchmarks demonstrate the effectiveness.) <|cite_end|>. Another idea is to generate span candidates and predict their type. Span-based models have been proposed with multi-task learning strategies <|cite_start|> (Reference: Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction: We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called Scientific Information Extractor (SciIE) for with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.) <|cite_end|> <|cite_start|> (Reference: A General Framework for Information Extraction using Dynamic Span Graphs: We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs. The graphs are constructed by selecting the most confident entity spans and linking these nodes with confidence-weighted relation types and coreferences. The dynamic span graph allows coreference and relation type confidences to propagate through the graph to iteratively refine the span representations. This is unlike previous multi-task frameworks for information extraction in which the only interaction between tasks is in the shared first-layer LSTM. Our framework significantly outperforms the state-of-the-art on multiple information extraction tasks across multiple datasets reflecting different domains. We further observe that the span enumeration approach is good at detecting nested span entities, with significant F1 score improvement on the ACE dataset.) <|cite_end|>. The multiple tasks include concept recognition, relation extraction, and co-reference resolution.
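As a toy illustration of the two formulations (the sentence is invented; entity types follow SCIERC-style names purely for exposition):
\begin{verbatim}
tokens = ["We", "train", "a", "CRF", "tagger", "on", "SCIERC"]

# i) sequence labeling: one BIO tag per token (LSTM-CRF style models)
bio_tags = ["O", "O", "O", "B-Method", "I-Method", "O", "B-Material"]

# ii) span-based: enumerate candidate spans and classify each one
span_labels = {(3, 5): "Method", (6, 7): "Material"}  # [start, end) spans
\end{verbatim}
Both encodings describe the same gold entities; span-based models score every candidate span instead of tagging tokens left to right.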
Researchers notice label mistakes in many NLP tasks <|cite_start|> (Reference: Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics?: ) <|cite_end|> <|cite_start|> (Reference: CrossWeigh: Training Named Entity Tagger from Imperfect Annotations: Everyone makes mistakes. So do human annotators when curating labels for named entity recognition (NER). Such label mistakes might hurt model training and interfere model comparison. In this study, we dive deep into one of the widely-adopted NER benchmark datasets, CoNLL03 NER. We are able to identify label mistakes in about 5.38% test sentences, which is a significant ratio considering that the state-of-the-art test F1 score is already around 93%. Therefore, we manually correct these label mistakes and form a cleaner test set. Our re-evaluation of popular models on this corrected test set leads to more accurate assessments, compared to those on the original test set. More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. Specifically, it partitions the training data into several folds and train independent NER models to identify potential mistakes in each fold. Then it adjusts the weights of training data accordingly to train the final NER model. Extensive experiments demonstrate significant improvements of plugging various NER models into our proposed framework on three datasets. All implementations and corrected test set are available at our Github repo: https://github.com/ZihanWangKi/CrossWeigh.) <|cite_end|> <|cite_start|> (Reference: Detecting Errors within a Corpus using Anomaly Detection: We present a method for automatically detecting errors in a manually marked corpus using anomaly detection. Anomaly detection is a method for determining which elements of a large data set do not conform to the whole. This method fits a probability distribution over the data and applies a statistical test to detect anomalous elements. In the corpus error detection problem, anomalous elements are typically marking errors. We present the results of applying this method to the tagged portion of the Penn Treebank corpus.) <|cite_end|>. For instance, it is reported that the bottleneck of the POS tagging task is the consistency of the annotation result <|cite_start|> (Reference: Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics?: ) <|cite_end|>. People tried to detect label mistakes automatically and minimize the influence of noise in training. The mistake re-weighting mechanism is effective in the NER task <|cite_start|> (Reference: CrossWeigh: Training Named Entity Tagger from Imperfect Annotations: Everyone makes mistakes. So do human annotators when curating labels for named entity recognition (NER). Such label mistakes might hurt model training and interfere model comparison. In this study, we dive deep into one of the widely-adopted NER benchmark datasets, CoNLL03 NER. We are able to identify label mistakes in about 5.38% test sentences, which is a significant ratio considering that the state-of-the-art test F1 score is already around 93%. Therefore, we manually correct these label mistakes and form a cleaner test set. Our re-evaluation of popular models on this corrected test set leads to more accurate assessments, compared to those on the original test set. More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. 
Specifically, it partitions the training data into several folds and train independent NER models to identify potential mistakes in each fold. Then it adjusts the weights of training data accordingly to train the final NER model. Extensive experiments demonstrate significant improvements of plugging various NER models into our proposed framework on three datasets. All implementations and corrected test set are available at our Github repo: https://github.com/ZihanWangKi/CrossWeigh.) <|cite_end|>. We focus on visually evaluating the label consistency. <|paper_end|> | [
"<|reference_start|> Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition: We describe the CoNLL-2003 shared task: language-independent named entity recognition. We give background information on the data sets (English and German) and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance. <|reference_end|>",
"<|reference_start|> CrossWeigh: Training Named Entity Tagger from Imperfect Annotations: Everyone makes mistakes. So do human annotators when curating labels for named entity recognition (NER). Such label mistakes might hurt model training and interfere model comparison. In this study, we dive deep into one of the widely-adopted NER benchmark datasets, CoNLL03 NER. We are able to identify label mistakes in about 5.38% test sentences, which is a significant ratio considering that the state-of-the-art test F1 score is already around 93%. Therefore, we manually correct these label mistakes and form a cleaner test set. Our re-evaluation of popular models on this corrected test set leads to more accurate assessments, compared to those on the original test set. More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. Specifically, it partitions the training data into several folds and train independent NER models to identify potential mistakes in each fold. Then it adjusts the weights of training data accordingly to train the final NER model. Extensive experiments demonstrate significant improvements of plugging various NER models into our proposed framework on three datasets. All implementations and corrected test set are available at our Github repo: https://github.com/ZihanWangKi/CrossWeigh. <|reference_end|>",
"<|reference_start|> Empower Sequence Labeling with Task-Aware Neural Language Model: Linguistic sequence labeling is a general modeling approach that encompasses a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Comparing to previous methods, these task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Different from most transfer learning methods, the proposed framework does not rely on any additional supervision. It extracts knowledge from self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching F1 score of 91.71$\\pm$0.10 without using any extra annotation. <|reference_end|>",
"<|reference_start|> Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics?: <|reference_end|>"
] | [
8,
9,
12,
20
] | {"<|multi_cite_1_1|>": "arxiv-93384", "<|multi_cite_1_2|>": "arxiv-134396", "<|multi_cite_1_3|>": "arxiv-170609", "<|multi_cite_1_4|>": "ss-2024663", "<|multi_cite_1_5|>": "arxiv-346028", "<|multi_cite_2_1|>": "arxiv-170609", "<|multi_cite_2_2|>": "ss-723689", "<|multi_cite_2_3|>": "ss-2263298", "<|cite_3|>": "arxiv-671186", "<|cite_4|>": "arxiv-221711", "<|cite_5|>": "arxiv-170609", "<|multi_cite_6_1|>": "arxiv-93384", "<|multi_cite_6_2|>": "arxiv-134396", "<|multi_cite_6_3|>": "ss-2131650", "<|multi_cite_6_4|>": "ss-2024663", "<|multi_cite_7_1|>": "arxiv-170609", "<|multi_cite_7_2|>": "arxiv-198555", "<|multi_cite_8_1|>": "ss-1455639", "<|multi_cite_8_2|>": "arxiv-221711", "<|multi_cite_8_3|>": "ss-1589908", "<|cite_9|>": "ss-1455639", "<|cite_10|>": "arxiv-221711"} |
1701.02273 | <|paper_start|> Title: Visual Multiple-Object Tracking for Unknown Clutter Rate
Abstract: Visual Multiple-Object Tracking for Unknown Clutter Rate: In multi-object tracking applications, model parameter tuning is a prerequisite for reliable performance. In particular, it is difficult to know statistics of false measurements due to various sensing conditions and changes in the field of views. In this paper we are interested in designing a multi-object tracking algorithm that handles unknown false measurement rate. Recently proposed robust multi-Bernoulli filter is employed for clutter estimation while generalized labeled multi-Bernoulli filter is considered for target tracking. Performance evaluation with real videos demonstrates the effectiveness of the tracking algorithm for real-world scenarios.
Introduction
\label{sec:intro} Multi-object tracking is one of the fundamental problems in many applications. Although there is an abundance of research on the topic, multi-object tracking is still far from practical use. The overwhelming majority of multi-target tracking algorithms are built on the assumption that multi-object system model parameters are known a priori, which is generally not the case in practice <|cite_start|> (Reference: Statistical Multisource-Multitarget Information Fusion: This comprehensive resource provides you with an in-depth understanding of finite-set statistics (FISST) - a recently developed method which unifies much of information fusion under a single probabilistic, in fact Bayesian, paradigm. The book helps you master FISST concepts, techniques, and algorithms, so you can use FISST to address real-world challenges in the field. You learn how to model, fuse, and process highly disparate information sources, and detect and track non-cooperative individual/platform groups and conventional non-cooperative targets. You find a rigorous Bayesian unification for many aspects of expert systems theory. Moreover, the book presents systematic integral and differential calculus for multisource-multitarget problems, providing a methodology for devising rigorous new techniques. This accessible and detailed book is supported with over 3,000 equations, 90 clear examples, 70 explanatory figures, and 60 exercises with solutions.) <|cite_end|>, <|cite_start|> (Reference: Tracking and data association: ) <|cite_end|>. While tracking performance is generally tolerant to mismatches in the dynamic and measurement noise, the same cannot be said about missed detections and false detections. In particular, mismatches in the specification of missed detection and false detection model parameters such as detection profile and clutter intensity can lead to a significant bias or even erroneous estimates. \newline
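For reference, the standard measurement model behind these terms can be summarized as follows (textbook notation added here for clarity; it is not quoted from the cited works). Each target with state $x$ is detected with probability $p_D(x)$ and, if detected, generates an observation with likelihood $g(z|x)$, while false detections form a Poisson random finite set with intensity
\[
\kappa(z) = \lambda_c\, c(z),
\]
where $\lambda_c$ is the clutter rate and $c(z)$ is the spatial clutter density. Conventional multi-object filters assume that $p_D$ and $\lambda_c$ are known, which is precisely the assumption examined in this paper. \newline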
\indent Unfortunately, except for a few application areas, exact knowledge of model parameters is not available. This is especially true in visual tracking, in which the missed detection and false detection processes vary with the detection methods. The detection profile and clutter intensity are obtained by trial and error. A major problem is the time-varying nature of the missed detection and false detection processes. Consequently, there is no guarantee that the model parameters chosen from training data will be sufficient for the multi-object filter at subsequent frames.\newline
\indent In radar target tracking applications, stochastic multi-object tracking algorithms based on Kalman filtering or Sequential Monte Carlo (SMC) method have been widely used <|cite_start|> (Reference: Tracking and data association: ) <|cite_end|>, <|cite_start|> (Reference: An algorithm for Tracking Multiple Targets: An algorithm for tracking multiple targets in a cluttered environment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis, using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions.) <|cite_end|>. This approach also has been used in visual multi-object tracking research <|cite_start|> (Reference: Online Multiperson Tracking-by-Detection from a Single, Uncalibrated
Camera: In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness.) <|cite_end|>, <|cite_start|> (Reference: Shape-based online multitarget tracking and detection for targets causing multiple measurements: Variational Bayesian clustering and lossless data association: This paper proposes a novel online two-level multitarget tracking and detection (MTTD) algorithm. The algorithm focuses on multitarget detection and tracking for the case of multiple measurements per target and for an unknown and varying number of targets. Information is continuously exchanged in both directions between the two levels. Using the high level target position and shape information, the low level clusters the measurements. Furthermore, the low level features automatic relevance detection (ARD), as it automatically determines the optimal number of clusters from the measurements taking into account the expected target shapes. The high level's data association allows for a varying number of targets. A joint probabilistic data association algorithm looks for associations between clusters of measurements and targets. These associations are used to update the target trackers and the target shapes with the individual measurements. No information is lost in the two-level approach since the measurement information is not summarized into features. The target trackers are based on an underlying motion model, while the high level is supplemented with a filter estimating the number of targets. The algorithm is verified using both simulations and experiments using two sensor modalities, video and laser scanner, for detection and tracking of people and ants.) <|cite_end|>, <|cite_start|> (Reference: Visual tracking in background subtracted image sequences via multi-Bernoulli filtering: This correspondence presents a novel method for simultaneous tracking of multiple non-stationary targets in video. Our method operates directly on the video data and does not require any detection. We propose a multi-target likelihood function for the background-subtracted grey-scale image data, which admits multi-target conjugate priors. 
This allows the multi-target posterior to be efficiently propagated forward using the multi-Bernoulli filter. Our method does not need any training pattern or target templates and makes no prior assumptions about object types or object appearance. Case studies from the CAVIAR dataset show that our method can automatically track multiple targets and quickly finds targets entering or leaving the scene.) <|cite_end|>. On the other hand, deterministic approaches such as network flow <|cite_start|> (Reference: {Global data association for multi-object tracking using network flows: We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement.) <|cite_end|> and continuous energy optimisation <|cite_start|> (Reference: {Continuous Energy Minimization for Multitarget Tracking: Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets.) <|cite_end|> have become popular for the multi-object tracking problem in visual tracking applications. These approaches are known to be free from tuning parameters; however, they are useful only when reliable object detection is available. \newline
\indent The problem of unknown observation model parameters (i.e., clutter rate and detection profile) in online multi-object filtering was recently formulated in a joint estimation framework using the random finite set (RFS) approach <|cite_start|> (Reference: Phd filters of higher order in target number: The multitarget recursive Bayes nonlinear filter is the theoretically optimal approach to multisensor-multitarget detection, tracking, and identification. For applications in which this filter is appropriate, it is likely to be tractable for only a small number of targets. In earlier papers we derived closed-form equations for an approximation of this filter based on propagation of a first-order multitarget moment called the probability hypothesis density (PHD). In a recent paper, Erdinc, Willett, and Bar-Shalom argued for the need for a PHD-type filter which remains first-order in the states of individual targets, but which is higher-order in target number. In this paper we show that this is indeed possible. We derive a closed-form cardinalized PHD (CPHD) filter, which propagates not only the PHD but also the entire probability distribution on target number.) <|cite_end|>. Recently, Mahler showed that clever use of
the CPHD filter can accommodate unknown clutter rate and detection profile.
It was further demonstrated that, by bootstrapping a clutter estimator to the Gaussian mixture CPHD filter <|cite_start|> (Reference: {Analytic implementations of the cardinalized probability hypothesis density filter: The probability hypothesis density (PHD) recursion propagates the posterior intensity of the random finite set (RFS) of targets in time. The cardinalized PHD (CPHD) recursion is a generalization of the PHD recursion, which jointly propagates the posterior intensity and the posterior cardinality distribution. In general, the CPHD recursion is computationally intractable. This paper proposes a closed-form solution to the CPHD recursion under linear Gaussian assumptions on the target dynamics and birth process. Based on this solution, an effective multitarget tracking algorithm is developed. Extensions of the proposed closed-form recursion to accommodate nonlinear models are also given using linearization and unscented transform techniques. The proposed CPHD implementations not only sidestep the need to perform data association found in traditional methods, but also dramatically improve the accuracy of individual state estimates as well as the variance of the estimated number of targets when compared to the standard PHD filter. Our implementations only have a cubic complexity, but simulations suggest favorable performance compared to the standard Joint Probabilistic Data Association (JPDA) filter which has a nonpolynomial complexity.) <|cite_end|>, performance very close to the case with known clutter parameters can be achieved. The approach was then extended to the multi-Bernoulli filter <|cite_start|> (Reference: Robust multi-Bernoulli filtering: In Bayesian multi-target filtering knowledge of parameters such as clutter intensity and detection probability profile are of critical importance. Significant mismatches in clutter and detection model parameters results in biased estimates. In this paper we propose a multi-target filtering solution that can accommodate non-linear target models and an unknown non-homogeneous clutter and detection profile. Our solution is based on the multi-target multi-Bernoulli filter that adaptively learns non-homogeneous clutter intensity and detection probability while filtering.) <|cite_end|>
with an SMC implementation. The multi-Bernoulli filter was used for visual multi-object tracking in <|cite_start|> (Reference: Robust Multi-Bernoulli Filtering for Visual Tracking: To achieve reliable multi-object filtering in vision application, it is of great importance to determine appropriate model parameters. Parameters such as motion and measurement noise covariance can be chosen based on the image frame rate and the property of the designed detector. However, it is not trivial to obtain the average number of false positive measurements or detection probability due to the arbitrary visual scene characteristics from illumination condition or different fields of view. In this paper, we introduce the recently proposed robust multi-Bernoulli filter to deal with unknown clutter rate and detection profile in visual tracking applications. The robust multi-Bernoulli filter treats false positive responses as a special type of target so that the unknown clutter rate is estimated based on the estimated number of clutter targets. Performance evaluation with real videos demonstrates the effectiveness of the robust multi-Bernoulli filter and comparison results with the standard multi-object tracking algorithm show its reliability.) <|cite_end|>. While solutions for filtering with an unknown clutter rate exist, these filters do not provide tracks that identify different objects. In particular, the conference version of this work <|cite_start|> (Reference: Robust Multi-Bernoulli Filtering for Visual Tracking: To achieve reliable multi-object filtering in vision application, it is of great importance to determine appropriate model parameters. Parameters such as motion and measurement noise covariance can be chosen based on the image frame rate and the property of the designed detector. However, it is not trivial to obtain the average number of false positive measurements or detection probability due to the arbitrary visual scene characteristics from illumination condition or different fields of view. In this paper, we introduce the recently proposed robust multi-Bernoulli filter to deal with unknown clutter rate and detection profile in visual tracking applications. The robust multi-Bernoulli filter treats false positive responses as a special type of target so that the unknown clutter rate is estimated based on the estimated number of clutter targets. Performance evaluation with real videos demonstrates the effectiveness of the robust multi-Bernoulli filter and comparison results with the standard multi-object tracking algorithm show its reliability.) <|cite_end|> is substantially extended into a new algorithm that provides track identities with a completely new structure and is evaluated using challenging pedestrian tracking and cell migration experiments. To the best of our knowledge, this paper is the first attempt at handling unknown false measurement information in online tracking. The main contribution of this paper is to design a multi-object tracker that also produces trajectories and estimates the unknown clutter rate on the fly. <|paper_end|> | [
"<|reference_start|> An algorithm for Tracking Multiple Targets: An algorithm for tracking multiple targets in a cluttered environment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis, using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions. <|reference_end|>",
"<|reference_start|> Online Multiperson Tracking-by-Detection from a Single, Uncalibrated\nCamera: In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness. <|reference_end|>",
"<|reference_start|> Shape-based online multitarget tracking and detection for targets causing multiple measurements: Variational Bayesian clustering and lossless data association: This paper proposes a novel online two-level multitarget tracking and detection (MTTD) algorithm. The algorithm focuses on multitarget detection and tracking for the case of multiple measurements per target and for an unknown and varying number of targets. Information is continuously exchanged in both directions between the two levels. Using the high level target position and shape information, the low level clusters the measurements. Furthermore, the low level features automatic relevance detection (ARD), as it automatically determines the optimal number of clusters from the measurements taking into account the expected target shapes. The high level's data association allows for a varying number of targets. A joint probabilistic data association algorithm looks for associations between clusters of measurements and targets. These associations are used to update the target trackers and the target shapes with the individual measurements. No information is lost in the two-level approach since the measurement information is not summarized into features. The target trackers are based on an underlying motion model, while the high level is supplemented with a filter estimating the number of targets. The algorithm is verified using both simulations and experiments using two sensor modalities, video and laser scanner, for detection and tracking of people and ants. <|reference_end|>",
"<|reference_start|> {Global data association for multi-object tracking using network flows: We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement. <|reference_end|>"
] | [
3,
4,
5,
7
] | {"<|cite_1|>": "ss-1214416", "<|cite_2|>": "ss-1273389", "<|cite_4|>": "ss-1273389", "<|cite_5|>": "ss-1159200", "<|cite_6|>": "ss-1450783", "<|cite_7|>": "ss-2000680", "<|cite_8|>": "ss-997915", "<|cite_9|>": "ss-890789", "<|cite_10|>": "ss-1064828", "<|cite_12|>": "ss-1826423", "<|cite_16|>": "ss-840334", "<|cite_17|>": "ss-997913", "<|cite_18|>": "ss-2000681", "<|cite_19|>": "ss-2000681"} |