Dataset schema (one record per paper): title (string, 18-162 chars), url (string, 42-44 chars), detail_url (string, 42-44 chars), authors (string, 10-429 chars), tags (string, 3 classes), abstract (string, 400-2.37k chars), pdf (string, 71 chars).
Red PANDA: Disambiguating Image Anomaly Detection by Removing Nuisance Factors
https://openreview.net/forum?id=z37tDDHHgi
https://openreview.net/forum?id=z37tDDHHgi
Niv Cohen,Jonathan Kahana,Yedid Hoshen
ICLR 2023,Poster
Anomaly detection methods strive to discover patterns that differ from the norm in a meaningful way. This goal is ambiguous as different human operators may find different attributes meaningful. An image differing from the norm by an attribute such as pose may be considered anomalous by some operators while others may consider the attribute irrelevant. Breaking from previous research, we present a new anomaly detection method that allows operators to exclude an attribute when detecting anomalies. Our approach aims to learn representations which do not contain information regarding such nuisance attributes. Anomaly scoring is performed using a density-based approach. Importantly, our approach does not require specifying the attributes where anomalies could appear, which is typically impossible in anomaly detection, but only attributes to ignore. An empirical investigation is presented verifying the effectiveness of our approach.
https://openreview.net/pdf/36721fffb6a7cf770f5f686f7eda3b23393106ea.pdf
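Below is a minimal sketch of the density-based anomaly scoring described in the Red PANDA abstract above, assuming nuisance-free embeddings have already been computed (the representation-learning part of the method is not reproduced); a kNN-distance score is used here as a simple stand-in for a density estimate, and all shapes are toy values.

```python
import numpy as np

def knn_anomaly_scores(train_emb, test_emb, k=5):
    """Score each test embedding by its mean distance to the k nearest
    normal (training) embeddings; larger scores indicate anomalies."""
    # Pairwise Euclidean distances between test and train embeddings.
    d2 = ((test_emb[:, None, :] - train_emb[None, :, :]) ** 2).sum(-1)
    dists = np.sqrt(d2)
    # Mean distance to the k nearest normal samples is the anomaly score.
    knn = np.sort(dists, axis=1)[:, :k]
    return knn.mean(axis=1)

# Toy usage with random stand-ins for nuisance-free embeddings.
rng = np.random.default_rng(0)
normal = rng.normal(size=(100, 32))
queries = rng.normal(size=(10, 32))
print(knn_anomaly_scores(normal, queries, k=5))
```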
Is Attention All That NeRF Needs?
https://openreview.net/forum?id=xE-LtsE-xx
https://openreview.net/forum?id=xE-LtsE-xx
Mukund Varma T,Peihao Wang,Xuxi Chen,Tianlong Chen,Subhashini Venugopalan,Zhangyang Wang
ICLR 2023,Poster
We present Generalizable NeRF Transformer (GNT), a transformer-based architecture that reconstructs Neural Radiance Fields (NeRFs) and learns to render novel views on the fly from source views. While prior works on NeRFs optimize a scene representation by inverting a handcrafted rendering equation, GNT achieves neural representation and rendering that generalizes across scenes using transformers at two stages. (1) The view transformer leverages multi-view geometry as an inductive bias for attention-based scene representation, and predicts coordinate-aligned features by aggregating information from epipolar lines on the neighboring views. (2) The ray transformer renders novel views using attention to decode the features from the view transformer along the sampled points during ray marching. Our experiments demonstrate that when optimized on a single scene, GNT can successfully reconstruct NeRF without an explicit rendering formula due to the learned ray renderer. When trained on multiple scenes, GNT consistently achieves state-of-the-art performance when transferring to unseen scenes and outperforms all other methods by ~10% on average. Our analysis of the learned attention maps to infer depth and occlusion indicates that attention enables learning a physically-grounded rendering. Our results show the promise of transformers as a universal modeling tool for graphics. Please refer to our project page for video results: https://vita-group.github.io/GNT/
https://openreview.net/pdf/d875ba2409ec78faf50cee666b3866b2b99b54f8.pdf
Stochastic No-regret Learning for General Games with Variance Reduction
https://openreview.net/forum?id=oJZ8bPtCar
https://openreview.net/forum?id=oJZ8bPtCar
Yichi Zhou,Fang Kong,Shuai Li
ICLR 2023,Poster
We show that a stochastic version of optimistic mirror descent (OMD), a variant of mirror descent with recency bias, converges fast in general games. More specifically, with our algorithm, the individual regret of each player vanishes at a speed of $O(1/T^{3/4})$ and the sum of all players' regret vanishes at a speed of $O(1/T)$, which is an improvement upon the $O(1/\sqrt{T})$ convergence rate of prior stochastic algorithms, where $T$ is the number of interaction rounds. Owing to the computational advantage of stochastic methods, we significantly improve on the time complexity of deterministic algorithms for approximating coarse correlated equilibria. To achieve lower time complexity, we equip the stochastic version of OMD in \cite{alacaoglu2021stochastic} with a novel low-variance Monte-Carlo estimator. Our algorithm extends previous works \cite{alacaoglu2021stochastic,carmon2019variance} from two-player zero-sum games to general games.
https://openreview.net/pdf/80ca6719730a6be14850965c99cc1783ac104905.pdf
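For illustration, here is a sketch of deterministic optimistic multiplicative weights (OMD with an entropy regularizer) on a two-player zero-sum matrix game; the paper's actual contribution, a stochastic version with a low-variance Monte-Carlo gradient estimator, is not reproduced, and the game, step size, and horizon below are arbitrary toy choices.

```python
import numpy as np

def optimistic_mwu(A, T=500, eta=0.1):
    """Run optimistic multiplicative weights for both players of a matrix
    game with payoff matrix A (row player minimizes x^T A y)."""
    n, m = A.shape
    y_row, y_col = np.ones(n) / n, np.ones(m) / m        # secondary iterates
    g_row, g_col = np.zeros(n), np.zeros(m)              # previous gradients
    avg_row, avg_col = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Optimistic step: move from the secondary iterate using the
        # previous gradient as a prediction of the current one (recency bias).
        x_row = y_row * np.exp(-eta * g_row); x_row /= x_row.sum()
        x_col = y_col * np.exp(+eta * g_col); x_col /= x_col.sum()
        # Observe the actual gradients at the played strategies.
        g_row, g_col = A @ x_col, A.T @ x_row
        # Update the secondary iterates with the observed gradients.
        y_row = y_row * np.exp(-eta * g_row); y_row /= y_row.sum()
        y_col = y_col * np.exp(+eta * g_col); y_col /= y_col.sum()
        avg_row += x_row; avg_col += x_col
    return avg_row / T, avg_col / T

# Toy zero-sum game (matching pennies); the averaged strategies approach (0.5, 0.5).
print(optimistic_mwu(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```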
The Dark Side of AutoML: Towards Architectural Backdoor Search
https://openreview.net/forum?id=bsZULlDGXe
https://openreview.net/forum?id=bsZULlDGXe
Ren Pang,Changjiang Li,Zhaohan Xi,Shouling Ji,Ting Wang
ICLR 2023,Poster
This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks? Specifically, we present EVAS, a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers. Compared with existing attacks, EVAS demonstrates many interesting properties: (i) it does not require polluting training data or perturbing model parameters; (ii) it is agnostic to downstream fine-tuning or even re-training from scratch; (iii) it naturally evades defenses that rely on inspecting model parameters or training data. With extensive evaluation on benchmark datasets, we show that EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary's design spectrum. We further characterize the mechanisms underlying EVAS, which are possibly explainable by architecture-level ``shortcuts'' that recognize trigger patterns. This work showcases that NAS can be exploited in a harmful way to find architectures with inherent backdoor vulnerability. The code is available at https://github.com/ain-soph/nas_backdoor.
https://openreview.net/pdf/9b89e3f420dd473917d9c33741ea888a54ecb1b3.pdf
Generalization and Estimation Error Bounds for Model-based Neural Networks
https://openreview.net/forum?id=9F_xlC7sk9
https://openreview.net/forum?id=9F_xlC7sk9
Avner Shultzman,Eyar Azar,Miguel R. D. Rodrigues,Yonina C. Eldar
ICLR 2023,Poster
Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to the strong connection with the sensing model, these networks are interpretable and inherit prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability compared to ReLU neural networks. However, this phenomenon has not been addressed theoretically. Here, we leverage complexity measures, including the global and local Rademacher complexities, in order to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow one to construct model-based networks with guaranteed high generalization. We demonstrate through a series of experiments that our theoretical insights shed light on a few behaviours experienced in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities (especially for a small number of training samples) compared to ReLU networks.
https://openreview.net/pdf/f2ace92cfe9ac4e2cc157c4946f019ecefe04c91.pdf
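As a concrete (hypothetical) example of the kind of model-based network the abstract above analyzes, the sketch below unrolls ISTA iterations for sparse coding into a small learnable PyTorch module; the layer count, dimensions, and initial thresholds are illustrative only.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """A small model-based network: K unrolled ISTA iterations for sparse
    coding, x_{k+1} = soft_threshold(W1 y + W2 x_k, theta_k)."""
    def __init__(self, meas_dim, sparse_dim, n_layers=10):
        super().__init__()
        self.W1 = nn.Linear(meas_dim, sparse_dim, bias=False)
        self.W2 = nn.Linear(sparse_dim, sparse_dim, bias=False)
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))
        self.n_layers = n_layers

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.W1.out_features, device=y.device)
        for k in range(self.n_layers):
            pre = self.W1(y) + self.W2(x)
            # Soft-thresholding is the proximal operator of the L1 penalty.
            x = torch.sign(pre) * torch.relu(pre.abs() - self.theta[k])
        return x

# Toy usage: recover a 64-dim sparse code from a 32-dim measurement.
model = UnrolledISTA(meas_dim=32, sparse_dim=64)
print(model(torch.randn(4, 32)).shape)   # torch.Size([4, 64])
```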
ChordMixer: A Scalable Neural Attention Model for Sequences with Different Length
https://openreview.net/forum?id=E8mzu3JbdR
https://openreview.net/forum?id=E8mzu3JbdR
Ruslan Khalitov,Tong Yu,Lei Cheng,Zhirong Yang
ICLR 2023,Poster
Sequential data naturally have different lengths in many domains, with some very long sequences. As an important modeling tool, neural attention should capture long-range interaction in such sequences. However, most existing neural attention models admit only short sequences, or they have to employ chunking or padding to enforce a constant input length. Here we propose a simple neural network building block called ChordMixer which can model the attention for long sequences with variable lengths. Each ChordMixer block consists of a position-wise rotation layer without learnable parameters and an element-wise MLP layer. Repeatedly applying such blocks forms an effective network backbone that mixes the input signals towards the learning targets. We have tested ChordMixer on the synthetic adding problem, long document classification, and DNA sequence-based taxonomy classification. The experiment results show that our method substantially outperforms other neural attention models.
https://openreview.net/pdf/dbeed2c3d0b79691b83802ee788e14ea278798b1.pdf
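A rough sketch of a ChordMixer-style block, assuming a concrete rotation scheme in which channel groups are rolled along the sequence by powers of two; the exact grouping and offsets in the paper may differ, and the number of tracks and MLP width here are arbitrary.

```python
import torch
import torch.nn as nn

class ChordMixerLikeBlock(nn.Module):
    """Sketch of a ChordMixer-style block: a parameter-free rotation that
    rolls channel groups along the sequence by powers of two, followed by a
    position-wise MLP. Group sizes and offsets are illustrative."""
    def __init__(self, dim, n_tracks=8, hidden=256):
        super().__init__()
        assert dim % n_tracks == 0
        self.n_tracks = n_tracks
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):                     # x: (batch, length, dim)
        chunks = x.chunk(self.n_tracks, dim=-1)
        rotated = [torch.roll(c, shifts=2 ** i, dims=1)   # parameter-free mixing
                   for i, c in enumerate(chunks)]
        return self.mlp(torch.cat(rotated, dim=-1))

block = ChordMixerLikeBlock(dim=64)
print(block(torch.randn(2, 100, 64)).shape)   # works for any sequence length
```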
Boosting Adversarial Transferability using Dynamic Cues
https://openreview.net/forum?id=SZynfVLGd5
https://openreview.net/forum?id=SZynfVLGd5
Muzammal Naseer,Ahmad Mahmood,Salman Khan,Fahad Khan
ICLR 2023,Poster
The transferability of adversarial perturbations between image models has been extensively studied. In this case, an attack is generated from a known surrogate, e.g., an ImageNet-trained model, and transferred to change the decision of an unknown (black-box) model trained on an image dataset. However, attacks generated from image models do not capture the dynamic nature of a moving object or a changing scene due to a lack of temporal cues within image models. This leads to reduced transferability of adversarial attacks from representation-enriched image models such as Supervised Vision Transformers (ViTs), Self-supervised ViTs (e.g., DINO), and Vision-language models (e.g., CLIP) to black-box video models. In this work, we induce dynamic cues within the image models without sacrificing their original performance on images. To this end, we optimize temporal prompts through frozen image models to capture motion dynamics. Our temporal prompts are the result of a learnable transformation that allows optimizing for temporal gradients during an adversarial attack to fool the motion dynamics. Specifically, we introduce spatial (image) and temporal (video) cues within the same source model through task-specific prompts. Attacking such prompts maximizes the adversarial transferability from image-to-video and image-to-image models using the attacks designed for image models. As an example, an iterative attack launched from the image model DeiT-B with temporal prompts reduces generalization (top-1 % accuracy) of a video model by 35% on Kinetics-400. Our approach also improves adversarial transferability to image models by 9% on ImageNet w.r.t. the current state-of-the-art approach. Our attack results indicate that the attacker does not need specialized architectures, e.g., divided space-time attention, 3D convolutions, or multi-view convolution networks, for different data modalities. Image models are effective surrogates to optimize an adversarial attack to fool black-box models in a changing environment over time. Code is available at https://bit.ly/3Xd9gRQ
https://openreview.net/pdf/9e990c20252d6a4dcc08a88751f2f07536fc4f76.pdf
Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions
https://openreview.net/forum?id=lLp-C5nTdJG
https://openreview.net/forum?id=lLp-C5nTdJG
David Bieber,Rishab Goel,Dan Zheng,Hugo Larochelle,Daniel Tarlow
ICLR 2023,Poster
The execution behavior of a program often depends on external resources, such as program inputs or file contents, and so the program cannot be run in isolation. Nevertheless, software developers benefit from fast iteration loops where automated tools identify errors as early as possible, even before programs can be compiled and run. This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible? Here, we introduce a competitive programming dataset and task for predicting runtime errors, which we show is difficult for generic models like Transformers. We approach this task by developing an interpreter-inspired architecture with an inductive bias towards mimicking program executions, which models exception handling and "learns to execute" descriptions of external resources. Surprisingly, we show that the model can also predict the locations of errors, despite being trained only on labels indicating error presence or absence and kind. In total, we present a practical and difficult-yet-approachable challenge problem related to learning program execution behavior and we demonstrate promising new capabilities of interpreter-inspired machine learning models for code.
https://openreview.net/pdf/a9d186e4ee1859097ef388e218639a4a12bee126.pdf
Matching receptor to odorant with protein language and graph neural networks
https://openreview.net/forum?id=q9VherQJd8_
https://openreview.net/forum?id=q9VherQJd8_
Matej Hladiš,Maxence Lalis,Sebastien Fiorucci,Jérémie Topin
ICLR 2023,Poster
Odor perception in mammals is triggered by interactions between volatile organic compounds and a subset of hundreds of proteins called olfactory receptors (ORs). Molecules activate these receptors in a complex combinatorial coding allowing mammals to discriminate a vast number of chemical stimuli. Recently, ORs have gained attention as new therapeutic targets following the discovery of their involvement in other physiological processes and diseases. To date, predicting molecule-induced activation for ORs is highly challenging since $43\%$ of ORs have no identified active compound. In this work, we combine the [CLS] token from protBERT with a molecular graph and propose a tailored GNN architecture incorporating inductive biases from protein-molecule binding. We abstract the biological process of protein-molecule activation as the injection of a molecule into a protein-specific environment. On a newly gathered dataset of 46,700 OR-molecule pairs, this model outperforms state-of-the-art models on drug-target interaction prediction as well as standard GNN baselines. Moreover, by incorporating non-bonded interactions the model is able to work with mixtures of compounds. Finally, our predictions reveal a similar activation pattern for molecules within a given odor family, which is in agreement with the theory of combinatorial coding in olfaction.
https://openreview.net/pdf/0c3c2cbf58174d89f67e7d6f33a3653d69fd94fb.pdf
SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization
https://openreview.net/forum?id=6xXtM8bFFJ
https://openreview.net/forum?id=6xXtM8bFFJ
Hanseul Cho,Chulhee Yun
ICLR 2023,Poster
Stochastic gradient descent-ascent (SGDA) is one of the main workhorses for solving finite-sum minimax optimization problems. Most practical implementations of SGDA randomly reshuffle components and sequentially use them (i.e., without-replacement sampling); however, there are few theoretical results on this approach for minimax algorithms, especially outside the easier-to-analyze (strongly-)monotone setups. To narrow this gap, we study the convergence bounds of SGDA with random reshuffling (SGDA-RR) for smooth nonconvex-nonconcave objectives with Polyak-{\L}ojasiewicz (P{\L}) geometry. We analyze both simultaneous and alternating SGDA-RR for nonconvex-P{\L} and primal-P{\L}-P{\L} objectives, and obtain convergence rates faster than with-replacement SGDA. Our rates extend to mini-batch SGDA-RR, recovering known rates for full-batch gradient descent-ascent (GDA). Lastly, we present a comprehensive lower bound for GDA with an arbitrary step-size ratio, which matches the full-batch upper bound for the primal-P{\L}-P{\L} case.
https://openreview.net/pdf/013c163efccf9ed9afaacb52b3621cb26ceaeac3.pdf
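The sampling scheme studied above (random reshuffling) can be illustrated with a toy simultaneous SGDA loop on a simple finite-sum saddle-point problem; the objective, step sizes, and horizon below are arbitrary choices for demonstration and do not reflect the nonconvex-PL settings analyzed in the paper.

```python
import numpy as np

def sgda_rr(grad_x_i, grad_y_i, n, x0, y0, lr_x=0.01, lr_y=0.01, epochs=50):
    """Simultaneous SGDA with random reshuffling: each epoch permutes the n
    components and sweeps through them once without replacement."""
    x, y = x0, y0
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):              # without-replacement sampling
            gx, gy = grad_x_i(x, y, i), grad_y_i(x, y, i)
            x, y = x - lr_x * gx, y + lr_y * gy   # descent in x, ascent in y
    return x, y

# Toy finite-sum objective f(x, y) = (1/n) * sum_i [(x - a_i)^2/2 - (y - a_i)^2/2 + x*y];
# its saddle point is (0, 0) since the a_i average to zero.
a = np.linspace(-1.0, 1.0, 10)
gx = lambda x, y, i: (x - a[i]) + y
gy = lambda x, y, i: -(y - a[i]) + x
print(sgda_rr(gx, gy, n=len(a), x0=2.0, y0=-2.0))
```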
MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models
https://openreview.net/forum?id=H0HGljkxQFN
https://openreview.net/forum?id=H0HGljkxQFN
Chenglin Yang,Siyuan Qiao,Qihang Yu,Xiaoding Yuan,Yukun Zhu,Alan Yuille,Hartwig Adam,Liang-Chieh Chen
ICLR 2023,Poster
This paper presents MOAT, a family of neural networks that build on top of MObile convolution (i.e., inverted residual blocks) and ATtention. Unlike the current works that stack separate mobile convolution and transformer blocks, we effectively merge them into a MOAT block. Starting with a standard Transformer block, we replace its multi-layer perceptron with a mobile convolution block, and further reorder it before the self-attention operation. The mobile convolution block not only enhances the network representation capacity, but also produces better downsampled features. Our conceptually simple MOAT networks are surprisingly effective, achieving 89.1% / 81.5% top-1 accuracy on ImageNet-1K / ImageNet-1K-V2 with ImageNet-22K pretraining. Additionally, MOAT can be seamlessly applied to downstream tasks that require large resolution inputs by simply converting the global attention to window attention. Thanks to the mobile convolution that effectively exchanges local information between pixels (and thus cross-windows), MOAT does not need the extra window-shifting mechanism. As a result, on COCO object detection, MOAT achieves 59.2% AP$^{\text{box}}$ with 227M model parameters (single-scale inference, and hard NMS), and on ADE20K semantic segmentation, MOAT attains 57.6% mIoU with 496M model parameters (single-scale inference). Finally, the tiny-MOAT family, obtained by simply reducing the channel sizes, also surprisingly outperforms several mobile-specific transformer-based models on ImageNet. The tiny-MOAT family is also benchmarked on downstream tasks, serving as a baseline for the community. We hope our simple yet effective MOAT will inspire more seamless integration of convolution and self-attention. Code is publicly available.
https://openreview.net/pdf/f3c77b5e165318fabc4aa403f0aebf591ad8043e.pdf
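A simplified sketch of a MOAT-style block, assuming a plain inverted-residual (mobile) convolution placed before multi-head self-attention in place of the usual MLP; the normalization, downsampling, squeeze-excitation, and window-attention details from the paper are omitted, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class MOATLikeBlock(nn.Module):
    """Sketch of a MOAT-style block: a mobile (inverted-residual) convolution
    followed by multi-head self-attention over the spatial tokens."""
    def __init__(self, dim, expansion=4, heads=4):
        super().__init__()
        hidden = dim * expansion
        self.mbconv = nn.Sequential(
            nn.BatchNorm2d(dim),
            nn.Conv2d(dim, hidden, 1), nn.GELU(),                     # expand
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),   # depthwise
            nn.GELU(),
            nn.Conv2d(hidden, dim, 1),                                # project
        )
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        x = x + self.mbconv(x)                 # mobile convolution first
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        t = self.norm(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]
        return tokens.transpose(1, 2).reshape(B, C, H, W)

print(MOATLikeBlock(64)(torch.randn(1, 64, 8, 8)).shape)
```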
Part-Based Models Improve Adversarial Robustness
https://openreview.net/forum?id=bAMTaeqluh4
https://openreview.net/forum?id=bAMTaeqluh4
Chawin Sitawarin,Kornrapat Pongmala,Yizheng Chen,Nicholas Carlini,David Wagner
ICLR 2023,Poster
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks by introducing a part-based model for object classification. We believe that the richer form of annotation helps guide neural networks to learn more robust features without requiring more samples or larger models. Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and then classify the segmented object. Empirically, our part-based models achieve both higher accuracy and higher adversarial robustness than a ResNet-50 baseline on all three datasets. For instance, the clean accuracy of our part models is up to 15 percentage points higher than the baseline's, given the same level of robustness. Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations. The code is publicly available at https://github.com/chawins/adv-part-model.
https://openreview.net/pdf/06ab733acd67575bf3271396320a8cc4f453c6b8.pdf
PGrad: Learning Principal Gradients For Domain Generalization
https://openreview.net/forum?id=CgCmwcfgEdH
https://openreview.net/forum?id=CgCmwcfgEdH
Zhe Wang,Jake Grigsby,Yanjun Qi
ICLR 2023,Poster
Machine learning models often fail when facing out-of-distribution (OOD) domains, a challenge addressed by domain generalization (DG). In this work, we develop a novel DG training strategy, which we call PGrad, to learn a robust gradient direction, improving models' generalization ability on unseen domains. The proposed gradient aggregates the principal directions of a sampled roll-out optimization trajectory that measures the training dynamics across all training domains. PGrad's gradient design forces the DG training to ignore domain-dependent noise signals and updates all training domains with a robust direction covering the main components of the parameter dynamics. We further improve PGrad via bijection-based computational refinement and directional plus length-based calibrations. Our theoretical proof connects PGrad to the spectral analysis of the Hessian in training neural networks. Experiments on DomainBed and WILDS benchmarks demonstrate that our approach effectively enables robust DG optimization and leads to smoothly decreasing loss curves. Empirically, PGrad achieves competitive results across seven datasets, demonstrating its efficacy across both synthetic and real-world distributional shifts.
https://openreview.net/pdf/3624aed42e1bd899a25cf9a4c59dd981a3281799.pdf
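A loose sketch of the principal-gradient idea described above, assuming each training domain contributes one flattened gradient to a short trajectory; the paper's sampled roll-out, bijection-based refinement, and calibration steps are not reproduced, and the sign and length heuristics below are assumptions for illustration.

```python
import numpy as np

def principal_gradient(domain_grads):
    """Aggregate per-domain gradients into a single robust update direction:
    the top principal direction of the stacked (centered) gradients,
    sign-aligned with the mean gradient and rescaled to its length.
    domain_grads: list of flattened gradients, one per training domain."""
    G = np.stack(domain_grads)                         # (n_domains, n_params)
    # Principal direction of the trajectory via SVD of the centered matrix.
    _, _, vt = np.linalg.svd(G - G.mean(0, keepdims=True), full_matrices=False)
    direction = vt[0]
    # Align the sign with the average gradient and match its length.
    mean_g = G.mean(0)
    direction = direction * np.sign(direction @ mean_g)
    return direction * np.linalg.norm(mean_g)

# Hypothetical per-domain gradients for a 5-parameter model.
grads = [np.random.default_rng(i).normal(size=5) for i in range(4)]
print(principal_gradient(grads))
```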
Extremely Simple Activation Shaping for Out-of-Distribution Detection
https://openreview.net/forum?id=ndYXTEL6cZz
https://openreview.net/forum?id=ndYXTEL6cZz
Andrija Djurisic,Nebojsa Bozanic,Arjun Ashok,Rosanne Liu
ICLR 2023,Poster
The separation between training and deployment of machine learning models implies that not all scenarios encountered in deployment can be anticipated during training, and therefore relying solely on advancements in training has its limits. Out-of-distribution (OOD) detection is an important area that stress-tests a model’s ability to handle unseen situations: Do models know when they don’t know? Existing OOD detection methods either require extra training steps or additional data, or make nontrivial modifications to the trained network. In contrast, in this work, we propose an extremely simple, post-hoc, on-the-fly activation shaping method, ASH, where a large portion (e.g. 90%) of a sample’s activation at a late layer is removed, and the rest (e.g. 10%) simplified or lightly adjusted. The shaping is applied at inference time, and does not require any statistics calculated from training data. Experiments show that such a simple treatment enhances in-distribution and out-of-distribution sample distinction so as to allow state-of-the-art OOD detection on ImageNet, and does not noticeably deteriorate the in-distribution accuracy. Video, animation and code can be found at: https://andrijazz.github.io/ash.
https://openreview.net/pdf/c54d38c0073dfa4bf694d5a8084860f468aa3e4b.pdf
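A minimal sketch of an ASH-style shaping function, assuming the "simplify or lightly adjust" step is a rescaling of the surviving activations so that the total activation mass is preserved; the paper explores several variants, so the exact rescaling rule here is an assumption rather than the published recipe.

```python
import torch

def ash_like(activation, percentile=90):
    """Zero out the lowest `percentile` % of each sample's activations at a
    late layer and rescale the survivors to preserve the original sum.
    activation: (batch, features) tensor, shaped at inference time only."""
    a = activation.clone()
    k = int(a.shape[1] * (1 - percentile / 100.0))     # number kept per sample
    s1 = a.sum(dim=1, keepdim=True)                     # sum before pruning
    thresh = torch.topk(a, k, dim=1).values[:, -1:]     # per-sample cut-off
    a = torch.where(a >= thresh, a, torch.zeros_like(a))
    s2 = a.sum(dim=1, keepdim=True)                     # sum after pruning
    # One possible "light adjustment": rescale survivors to the original mass.
    return a * (s1 / s2.clamp_min(1e-8))

print(ash_like(torch.rand(2, 512)).shape)
```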
Statistical Guarantees for Consensus Clustering
https://openreview.net/forum?id=kQxry8Z6Fd9
https://openreview.net/forum?id=kQxry8Z6Fd9
Zhixin Zhou,Gautam Dudeja,Arash A Amini
ICLR 2023,Poster
Consider the problem of clustering $n$ objects. One can apply multiple algorithms to produce $N$ potentially different clusterings of the same objects, that is, partitions of the $n$ objects into $K$ groups. Even a single randomized algorithm can output different clusterings. This often happens when one samples from the posterior of a Bayesian model, or runs multiple MCMC chains from random initializations. A natural task is then to form a consensus among these different clusterings. The challenge in an unsupervised setting is that the optimal matching between clusters of different inputs is unknown. We model this problem as finding a barycenter (also known as Fr\'{e}chet mean) relative to the misclassification rate. We show that by lifting the problem to the space of association matrices, one can derive aggregation algorithms that circumvent the knowledge of the optimal matchings. We analyze the statistical performance of aggregation algorithms under a stochastic label perturbation model, and show that a $K$-means type algorithm followed by a local refinement step can achieve near optimal performance, with a rate that decays exponentially fast in $N$. Numerical experiments show the effectiveness of the proposed methods.
https://openreview.net/pdf/a84a06f19515683c27d572d9f87d434a39877d79.pdf
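A small sketch of the lifting-to-association-matrices idea: averaging co-membership matrices sidesteps the unknown label matching, after which a k-means step produces a consensus. The local refinement step analyzed in the paper is omitted, and running scikit-learn's KMeans on the rows of the averaged matrix is an implementation choice for illustration, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_cluster(labelings, K):
    """Average the n x n co-membership (association) matrices of the input
    clusterings and cluster the rows of the average into K groups."""
    n = len(labelings[0])
    avg = np.zeros((n, n))
    for lab in labelings:
        lab = np.asarray(lab)
        avg += (lab[:, None] == lab[None, :]).astype(float)
    avg /= len(labelings)   # entry (i, j): fraction of runs grouping i and j together
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(avg)

# Three noisy clusterings of 6 objects with permuted label names.
runs = [[0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0], [0, 0, 1, 1, 1, 1]]
print(consensus_cluster(runs, K=2))
```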
Expressive Monotonic Neural Networks
https://openreview.net/forum?id=w2P7fMy_RH
https://openreview.net/forum?id=w2P7fMy_RH
Niklas Nolte,Ouail Kitouni,Mike Williams
ICLR 2023,Poster
The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior. This is especially important for interpretability and fairness considerations. In a broader context, scenarios in which monotonicity is important can be found in finance, medicine, physics, and other disciplines. It is thus desirable to build neural network architectures that implement this inductive bias provably. In this work, we propose a weight-constrained architecture with a single residual connection to achieve exact monotonic dependence in any subset of the inputs. The weight constraint scheme directly controls the Lipschitz constant of the neural network and thus provides the additional benefit of robustness. Compared to currently existing techniques used for monotonicity, our method is simpler in implementation and in its theoretical foundations, has negligible computational overhead, is guaranteed to produce monotonic dependence, and is highly expressive. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider.
https://openreview.net/pdf/c07ced415f3dc6edce42ca0e6a88177f4c7b9b04.pdf
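A sketch of the weight-constrained-plus-residual construction described above; spectral normalization is used here as a stand-in for the paper's Lipschitz-controlling weight constraint. With lam = 1, the constrained sub-network's per-coordinate slope lies in [-1, 1], so adding lam times the monotone inputs makes the output non-decreasing in those inputs.

```python
import torch
import torch.nn as nn

class MonotonicNet(nn.Module):
    """Lipschitz-bounded network plus a residual term lam * x on the monotone
    inputs, yielding provably non-decreasing dependence on those inputs."""
    def __init__(self, in_dim, monotone_idx, lam=1.0, hidden=64):
        super().__init__()
        self.monotone_idx = monotone_idx
        self.lam = lam
        sn = nn.utils.spectral_norm          # stand-in Lipschitz constraint
        self.net = nn.Sequential(
            sn(nn.Linear(in_dim, hidden)), nn.Tanh(),
            sn(nn.Linear(hidden, 1)),
        )

    def forward(self, x):                    # x: (batch, in_dim)
        residual = self.lam * x[:, self.monotone_idx].sum(dim=1, keepdim=True)
        return self.net(x) + residual

model = MonotonicNet(in_dim=3, monotone_idx=[0, 2])
print(model(torch.randn(4, 3)).shape)
```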
Active Image Indexing
https://openreview.net/forum?id=K9RHxPpjn2
https://openreview.net/forum?id=K9RHxPpjn2
Pierre Fernandez,Matthijs Douze,Herve Jegou,Teddy Furon
ICLR 2023,Poster
Image copy detection and retrieval from large databases leverage two components. First, a neural network maps an image to a vector representation that is relatively robust to various transformations of the image. Second, an efficient but approximate similarity search algorithm trades scalability (size and speed) against quality of the search, thereby introducing a source of error. This paper improves the robustness of image copy detection with active indexing, which optimizes the interplay of these two components. We reduce the quantization loss of a given image representation by making imperceptible changes to the image before its release. The loss is back-propagated through the deep neural network back to the image, under perceptual constraints. These modifications make the image more retrievable. Our experiments show that the retrieval and copy detection of activated images is significantly improved. For instance, activation improves the Recall@1 by $+40\%$ on various image transformations, and for several popular indexing structures based on product quantization and locality-sensitive hashing.
https://openreview.net/pdf/b41256b2ee29f6e5ec3e51cca015a39d47426eab.pdf
Learning Simultaneous Navigation and Construction in Grid Worlds
https://openreview.net/forum?id=NEtep2C7yD
https://openreview.net/forum?id=NEtep2C7yD
Wenyu Han,Haoran Wu,Eisuke Hirota,Alexander Gao,Lerrel Pinto,Ludovic Righetti,Chen Feng
ICLR 2023,Poster
We propose to study a new learning task, mobile construction, to enable an agent to build designed structures in 1/2/3D grid worlds while navigating in the same evolving environments. Unlike existing robot learning tasks such as visual navigation and object manipulation, this task is challenging because of the interdependence between accurate localization and strategic construction planning. In pursuit of generic and adaptive solutions to this partially observable Markov decision process (POMDP) based on deep reinforcement learning (RL), we design a Deep Recurrent Q-Network (DRQN) with explicit recurrent position estimation in this dynamic grid world. Our extensive experiments show that pre-training this position estimation module before Q-learning can significantly improve the construction performance measured by the intersection-over-union score, achieving the best results in our benchmark of various baselines including model-free and model-based RL, a handcrafted SLAM-based policy, and human players. Our code is available at: https://ai4ce.github.io/SNAC/.
https://openreview.net/pdf/f0b59287a120fe72e3e962c0dd749985ff02ee71.pdf
Learning to CROSS exchange to solve min-max vehicle routing problems
https://openreview.net/forum?id=ZcnzsHC10Y
https://openreview.net/forum?id=ZcnzsHC10Y
Minjun Kim,Junyoung Park,Jinkyoo Park
ICLR 2023,Poster
CROSS exchange (CE), a meta-heuristic that solves various vehicle routing problems (VRPs), improves the solutions of VRPs by swapping the sub-tours of the vehicles. Inspired by CE, we propose Neuro CE (NCE), a fundamental operator of a learned meta-heuristic, to solve various min-max VRPs while overcoming the limitations of CE, i.e., the expensive $\mathcal{O}(n^4)$ search cost. NCE employs a graph neural network to predict the cost-decrements (i.e., results of CE searches) and utilizes the predicted cost-decrements to guide the selection of sub-tours for swapping, while reducing the search cost to $\mathcal{O}(n^2)$. As the learning objective of NCE is to predict the cost-decrement, the training can simply be done in a supervised fashion, whose training samples can be easily collected. Despite the simplicity of NCE, numerical results show that the NCE trained with min-max flexible multi-depot VRP (min-max FMDVRP) outperforms the meta-heuristic baselines. More importantly, it significantly outperforms the neural baselines when solving distinctive special cases of min-max FMDVRP (e.g., min-max MDVRP, min-max mTSP, min-max CVRP) without additional training.
https://openreview.net/pdf/f6186924d42b52101410a5f83ca38d9887b561a2.pdf
PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs
https://openreview.net/forum?id=iUdSB2kK9GY
https://openreview.net/forum?id=iUdSB2kK9GY
James Oldfield,Christos Tzelepis,Yannis Panagakis,Mihalis Nicolaou,Ioannis Patras
ICLR 2023,Poster
Recent advances in the understanding of Generative Adversarial Networks (GANs) have led to remarkable progress in visual editing and synthesis tasks, capitalizing on the rich semantics that are embedded in the latent spaces of pre-trained GANs. However, existing methods are often tailored to specific GAN architectures and are limited to either discovering global semantic directions that do not facilitate localized control, or require some form of supervision through manually provided regions or segmentation masks. In this light, we present an architecture-agnostic approach that jointly discovers factors representing spatial parts and their appearances in an entirely unsupervised fashion. These factors are obtained by applying a semi-nonnegative tensor factorization on the feature maps, which in turn enables context-aware local image editing with pixel-level control. In addition, we show that the discovered appearance factors correspond to saliency maps that localize concepts of interest, without using any labels. Experiments on a wide range of GAN architectures and datasets show that, in comparison to the state of the art, our method is far more efficient in terms of training time and, most importantly, provides much more accurate localized control. Our code is available at: https://github.com/james-oldfield/PandA.
https://openreview.net/pdf/1e15e7a7290cdf00f79a4bd1f9f99eeb32082bf5.pdf
Compositional Law Parsing with Latent Random Functions
https://openreview.net/forum?id=PEuxUXIMLlA
https://openreview.net/forum?id=PEuxUXIMLlA
Fan Shi,Bin Li,Xiangyang Xue
ICLR 2023,Poster
Human cognition has compositionality. We understand a scene by decomposing the scene into different concepts (e.g., shape and position of an object) and learning the respective laws of these concepts, which may be either natural (e.g., laws of motion) or man-made (e.g., laws of a game). The automatic parsing of these laws indicates the model's ability to understand the scene, which makes law parsing play a central role in many visual tasks. This paper proposes a deep latent variable model for Compositional LAw Parsing (CLAP), which achieves the human-like compositionality ability through an encoding-decoding architecture to represent concepts in the scene as latent variables. CLAP employs concept-specific latent random functions instantiated with Neural Processes to capture the law of concepts. Our experimental results demonstrate that CLAP outperforms the baseline methods in multiple visual tasks such as intuitive physics, abstract visual reasoning, and scene representation. The law manipulation experiments illustrate CLAP's interpretability by modifying specific latent random functions on samples. For example, CLAP learns the laws of position-changing and appearance constancy from the moving balls in a scene, making it possible to exchange laws between samples or compose existing laws into novel laws.
https://openreview.net/pdf/40faa239ba5304c9cc2669fee09427b5d2347d4c.pdf
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
https://openreview.net/forum?id=NVZvalzCLg
https://openreview.net/forum?id=NVZvalzCLg
Sharath Girish,Kamal Gupta,Saurabh Singh,Abhinav Shrivastava
ICLR 2023,Poster
We introduce LilNetX, an end-to-end trainable technique for neural networks that enables learning models with a specified accuracy-rate-computation trade-off. Prior works approach these problems one at a time and often require post-processing or multistage training, which becomes less practical and does not scale very well for large datasets or architectures. Our method constructs a joint training objective that penalizes the self-information of network parameters in a latent representation space to encourage small model size, while also introducing priors to increase structured sparsity in the parameter space to reduce computation. When compared with existing state-of-the-art model compression methods, we achieve up to 50% smaller model size and 98% model sparsity on ResNet-20 on the CIFAR-10 dataset as well as 37% smaller model size and 71% structured sparsity on ResNet-50 trained on ImageNet, while retaining the same accuracy as those methods. We show that the resulting sparsity can improve the inference time of the models by almost 1.8 times the dense ResNet-50 baseline model. Code is available at https://github.com/Sharath-girish/LilNetX.
https://openreview.net/pdf/6044ad9b2563274f0ee65ed95023ef61cb963a47.pdf
Mitigating Dataset Bias by Using Per-Sample Gradient
https://openreview.net/forum?id=7mgUec-7GMv
https://openreview.net/forum?id=7mgUec-7GMv
Sumyeong Ahn,Seongyoon Kim,Se-Young Yun
ICLR 2023,Poster
The performance of deep neural networks is strongly influenced by the training dataset setup. In particular, when attributes with a strong correlation with the target attribute are present, the trained model can provide unintended prejudgments and show significant inference errors (i.e., the dataset bias problem). Various methods have been proposed to mitigate dataset bias, and their emphasis is on weakly correlated samples, called bias-conflicting samples. These methods are based on explicit bias labels provided by humans; however, obtaining such labels is costly. Recently, several studies have sought to reduce human intervention by utilizing the output space values of neural networks, such as feature space, logits, loss, or accuracy. However, these output space values may be insufficient for the model to understand the bias attributes well. In this study, we propose a debiasing algorithm leveraging per-sample gradients, called Per-sample Gradient-based Debiasing (PGD). PGD comprises three steps: (1) training a model with uniform batch sampling, (2) setting the importance of each sample in proportion to the norm of the sample gradient, and (3) training the model with importance-based batch sampling, whose probabilities are obtained in step (2). Compared with existing baselines for various datasets, the proposed method showed state-of-the-art accuracy for the classification task. Furthermore, we describe theoretical understandings of how PGD can mitigate dataset bias.
https://openreview.net/pdf/2adbf1cd969a4c6d421debbd3759f0e1954be6e2.pdf
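A miniature sketch of steps (2) and (3) of the PGD pipeline above: per-sample gradient norms are computed (naively, one sample at a time) and turned into sampling weights. The model, data, and loss are placeholders, and step (1)'s uniform-sampling training is assumed to have already happened.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, WeightedRandomSampler, DataLoader

def per_sample_grad_norms(model, loss_fn, xs, ys):
    """Step (2): the norm of each sample's gradient w.r.t. the model
    parameters, computed one sample at a time for clarity."""
    norms = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        g = torch.cat([p.grad.flatten() for p in model.parameters()
                       if p.grad is not None])
        norms.append(g.norm().item())
    return torch.tensor(norms)

# Placeholder model/data standing in for the uniformly trained model of step (1).
model = nn.Linear(10, 2)
xs, ys = torch.randn(32, 10), torch.randint(0, 2, (32,))
weights = per_sample_grad_norms(model, nn.CrossEntropyLoss(), xs, ys)
# Step (3): importance-based batch sampling proportional to the gradient norms.
sampler = WeightedRandomSampler(weights, num_samples=len(ys), replacement=True)
loader = DataLoader(TensorDataset(xs, ys), batch_size=8, sampler=sampler)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)
```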
Efficient Model Updates for Approximate Unlearning of Graph-Structured Data
https://openreview.net/forum?id=fhcu4FBLciL
https://openreview.net/forum?id=fhcu4FBLciL
Eli Chien,Chao Pan,Olgica Milenkovic
ICLR 2023,Poster
With the adoption of recent laws ensuring the ``right to be forgotten'', the problem of machine unlearning has become of significant importance. This is particularly the case for graph-structured data, and learning tools specialized for such data, including graph neural networks (GNNs). This work introduces the first known approach for approximate graph unlearning with provable theoretical guarantees. The challenges in addressing the problem are two-fold. First, there exist multiple different types of unlearning requests that need to be considered, including node feature, edge and node unlearning. Second, to establish provable performance guarantees, one needs to carefully evaluate the process of feature mixing during propagation. We focus on analyzing Simple Graph Convolutions (SGC) and their generalized PageRank (GPR) extensions, thereby laying the theoretical foundations for unlearning GNNs. Empirical evaluations on six benchmark datasets demonstrate excellent performance/complexity/privacy trade-offs of our approach compared to complete retraining and general methods that do not leverage graph information. For example, unlearning $200$ out of $1208$ training nodes of the Cora dataset only leads to a $0.1\%$ loss in test accuracy, but offers a $4$-fold speed-up compared to complete retraining with a $(\epsilon,\delta)=(1,10^{-4})$ ``privacy cost''. We also exhibit a $12\%$ increase in test accuracy for the same dataset when compared to unlearning methods that do not leverage graph information, with comparable time complexity and the same privacy guarantee.
https://openreview.net/pdf/1651053a4b3f9ed11277a7dab44f3ad0d5956b29.pdf
AudioGen: Textually Guided Audio Generation
https://openreview.net/forum?id=CYK7RfcOzQ4
https://openreview.net/forum?id=CYK7RfcOzQ4
Felix Kreuk,Gabriel Synnaeve,Adam Polyak,Uriel Singer,Alexandre Défossez,Jade Copet,Devi Parikh,Yaniv Taigman,Yossi Adi
ICLR 2023,Poster
In this work, we tackle the problem of generating audio samples conditioned on descriptive text captions. We propose AudioGen, an auto-regressive generative model, operating on a learnt discrete audio representation, that generates audio samples conditioned on text inputs. The task of text-to-audio generation poses multiple challenges. Due to the way audio travels through a medium, differentiating ``objects'' can be a difficult task (e.g., separating multiple people simultaneously speaking). This is further complicated by real-world recording conditions (e.g., background noise, reverberation, etc.). Scarce text annotations impose another constraint, limiting the ability to scale models. Finally, modeling high-fidelity audio requires one to operate over extremely long sequences. To alleviate the aforementioned challenges, we propose an augmentation technique that mixes different audio samples, driving the model to internally learn to separate multiple sources. We curated 10 datasets containing different types of audio and text annotations to handle the scarcity of text-audio data points. For faster inference, we explore the use of multi-stream modeling, allowing the use of shorter sequences while maintaining a similar bitrate and perceptual quality. Finally, we apply classifier-free guidance to improve adherence to text. Compared to the evaluated baselines, AudioGen outperforms them on both objective and subjective metrics. We further conduct an ablation study to gauge the effects of pre-trained text and audio components.
https://openreview.net/pdf/caa20ebdbbf426117f36f9b36b9d80adae172e11.pdf
Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs
https://openreview.net/forum?id=2WklawyeI08
https://openreview.net/forum?id=2WklawyeI08
Yu Duan,Zhongfan Jia,Qian Li,Yi Zhong,Kaisheng Ma
ICLR 2023,Poster
Rapidly learning from ongoing experiences and remembering past events with a flexible memory system are two core capacities of biological intelligence. While the underlying neural mechanisms are not fully understood, various evidence supports that synaptic plasticity plays a critical role in memory formation and fast learning. Inspired by these results, we equip Recurrent Neural Networks (RNNs) with plasticity rules to enable them to adapt their parameters according to ongoing experiences. In addition to the traditional local Hebbian plasticity, we propose a global, gradient-based plasticity rule, which allows the model to evolve towards its self-determined target. Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories. In the meantime, these models can cope with many challenging few-shot learning problems. Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping.
https://openreview.net/pdf/a0c8d10b1b415c591443cffbd8a763cfb7f55124.pdf
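A sketch of an RNN with a Hebbian plastic component, assuming the plastic weights are a decaying outer-product trace of pre- and post-synaptic activity scaled by learnable per-synapse gains; the paper's gradient-based plasticity rule and its memory/few-shot task setups are not reproduced, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class HebbianPlasticRNN(nn.Module):
    """RNN whose effective recurrent weight is a fixed component plus a fast
    Hebbian trace updated online from the network's own activity."""
    def __init__(self, in_dim, hidden, eta=0.1):
        super().__init__()
        self.w_in = nn.Linear(in_dim, hidden)
        self.w_rec = nn.Parameter(0.1 * torch.randn(hidden, hidden))
        self.alpha = nn.Parameter(torch.full((hidden, hidden), 0.1))  # plasticity gains
        self.eta = eta

    def forward(self, x):                      # x: (batch, time, in_dim)
        B, T, _ = x.shape
        H = self.w_rec.shape[0]
        h = torch.zeros(B, H, device=x.device)
        hebb = torch.zeros(B, H, H, device=x.device)
        outs = []
        for t in range(T):
            w_eff = self.w_rec + self.alpha * hebb        # fixed + plastic weights
            h_new = torch.tanh(self.w_in(x[:, t]) +
                               torch.bmm(h.unsqueeze(1), w_eff).squeeze(1))
            # Hebbian trace: decaying running outer product of pre/post activity.
            hebb = (1 - self.eta) * hebb + \
                   self.eta * torch.bmm(h.unsqueeze(2), h_new.unsqueeze(1))
            h = h_new
            outs.append(h)
        return torch.stack(outs, dim=1)

print(HebbianPlasticRNN(8, 16)(torch.randn(2, 5, 8)).shape)
```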
Towards Minimax Optimal Reward-free Reinforcement Learning in Linear MDPs
https://openreview.net/forum?id=U9HW6vyNClg
https://openreview.net/forum?id=U9HW6vyNClg
Pihe Hu,Yu Chen,Longbo Huang
ICLR 2023,Poster
We study reward-free reinforcement learning with linear function approximation for episodic Markov decision processes (MDPs). In this setting, an agent first interacts with the environment without accessing the reward function in the exploration phase. In the subsequent planning phase, it is given a reward function and asked to output an $\epsilon$-optimal policy. We propose a novel algorithm LSVI-RFE under the linear MDP setting, where the transition probability and reward functions are linear in a feature mapping. We prove an $\widetilde{O}(H^{4} d^{2}/\epsilon^2)$ sample complexity upper bound for LSVI-RFE, where $H$ is the episode length and $d$ is the feature dimension. We also establish a sample complexity lower bound of $\Omega(H^{3} d^{2}/\epsilon^2)$. To the best of our knowledge, LSVI-RFE is the first computationally efficient algorithm that achieves the minimax optimal sample complexity in linear MDP settings up to an $H$ and logarithmic factors. Our LSVI-RFE algorithm is based on a novel variance-aware exploration mechanism to avoid overly-conservative exploration in prior works. Our sharp bound relies on the decoupling of UCB bonuses during two phases, and a Bernstein-type self-normalized bound, which remove the extra dependency of sample complexity on $H$ and $d$, respectively.
https://openreview.net/pdf/98b0ed97b22ff4771f3198ce6446f6efc032fadc.pdf
On the Data-Efficiency with Contrastive Image Transformation in Reinforcement Learning
https://openreview.net/forum?id=-nm-rHXi5ga
https://openreview.net/forum?id=-nm-rHXi5ga
Sicong Liu,Xi Sheryl Zhang,Yushuo Li,Yifan Zhang,Jian Cheng
ICLR 2023,Poster
Data-efficiency has always been an essential issue in pixel-based reinforcement learning (RL), as the agent learns not only decision-making but also meaningful representations from images. The line of reinforcement learning with data augmentation shows significant improvements in sample-efficiency. However, it is challenging to guarantee that a transformation preserves optimality; that is, the augmented data may be recognized by the agent as a completely different state. To this end, we propose a contrastive invariant transformation (CoIT), a simple yet promising learnable data augmentation combined with standard model-free algorithms to improve sample-efficiency. Concretely, the differentiable CoIT leverages original samples together with augmented samples and hastens the state encoder toward a contrastive invariant embedding. We evaluate our approach on the DeepMind Control Suite and Atari100K. Empirical results verify the advantages of CoIT, enabling it to outperform the state-of-the-art on various tasks. Source code is available at https://github.com/mooricAnna/CoIT.
https://openreview.net/pdf/42916f46ac828e86998f6f3ae44feae52efdb5ae.pdf
Energy-based Out-of-Distribution Detection for Graph Neural Networks
https://openreview.net/forum?id=zoz7Ze4STUL
https://openreview.net/forum?id=zoz7Ze4STUL
Qitian Wu,Yiting Chen,Chenxiao Yang,Junchi Yan
ICLR 2023,Poster
Representation learning on semi-structured data, e.g., graphs, has become a central problem in the deep learning community, as relational structures are pervasive in real situations and induce data inter-dependence that hinders trivial adaptation of existing approaches from other domains where the inputs are assumed to be i.i.d. sampled. However, current models in this regime mostly focus on improving testing performance on in-distribution data and largely ignore the potential risk w.r.t. out-of-distribution (OOD) testing samples, which may cause negative outcomes if the model is overconfident in its predictions on them. In this paper, we identify a provably effective OOD discriminator based on an energy function directly extracted from a graph neural network trained with a standard supervised classification loss. This paves the way for a simple and efficient OOD detection model for GNN-based semi-supervised learning on graphs, which we call GNN-Safe. It also has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for in-distribution and OOD samples, which, more critically, can be further strengthened by a non-learning-based structured propagation scheme. Extensive experiments over five real-world datasets validate the practical efficacy of the proposed model for detecting various OOD instances that are inter-connected in a graph, with up to 17.0% improvement in average AUROC over competitive peer models and without sacrificing in-distribution testing accuracy.
https://openreview.net/pdf/0fa3c84a1dca57056516486971d02b488dd7ee42.pdf
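A minimal sketch of the energy-based OOD score described above, assuming a trained GNN's class logits are already available: the score is the negative free energy of each node, optionally smoothed by a simple non-learning propagation over the adjacency matrix. The temperature, number of propagation steps, and mixing weight below are arbitrary illustrative choices.

```python
import torch

def energy_ood_scores(logits, adj, T=1.0, K=2, alpha=0.5):
    """Per-node OOD scores from classifier logits; higher values suggest
    in-distribution nodes. `adj` is a dense (n, n) adjacency matrix."""
    # Negative energy of each node: T * logsumexp(logits / T).
    score = T * torch.logsumexp(logits / T, dim=1)
    # Structured propagation: mix each node's score with its neighbors'.
    deg = adj.sum(dim=1, keepdim=True).clamp_min(1.0)
    P = adj / deg                                   # row-normalized adjacency
    for _ in range(K):
        score = alpha * score + (1 - alpha) * (P @ score.unsqueeze(1)).squeeze(1)
    return score

# Toy graph with 4 nodes and 3 classes.
logits = torch.randn(4, 3)
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                    [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
print(energy_ood_scores(logits, adj))
```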
Quasi-optimal Reinforcement Learning with Continuous Actions
https://openreview.net/forum?id=O8Vc52xFSUR
https://openreview.net/forum?id=O8Vc52xFSUR
Yuhan Li,Wenzhuo Zhou,Ruoqing Zhu
ICLR 2023,Poster
Many real-world applications of reinforcement learning (RL) require making decisions in continuous action environments. In particular, determining the optimal dose level plays a vital role in developing medical treatment regimes. One challenge in adapting existing RL algorithms to medical applications, however, is that the popular infinite-support stochastic policies, e.g., Gaussian policies, may assign riskily high dosages and harm patients seriously. Hence, it is important to induce a policy class whose support only contains near-optimal actions, and shrink the action-searching area for effectiveness and reliability. To achieve this, we develop a novel quasi-optimal learning algorithm, which can be easily optimized in off-policy settings with guaranteed convergence under general function approximations. Theoretically, we analyze the consistency, sample complexity, adaptability, and convergence of the proposed algorithm. We evaluate our algorithm with comprehensive simulated experiments and a real-world dose suggestion application on the Ohio Type 1 Diabetes dataset.
https://openreview.net/pdf/100ce9435fb820f433bb2a423491a43ed4f64862.pdf
Generalization Bounds for Federated Learning: Fast Rates, Unparticipating Clients and Unbounded Losses
https://openreview.net/forum?id=-EHqoysUYLx
https://openreview.net/forum?id=-EHqoysUYLx
Xiaolin Hu,Shaojie Li,Yong Liu
ICLR 2023,Poster
In federated learning, the underlying data distributions may be different across clients. This paper provides a theoretical analysis of the generalization error of federated learning, which captures both the heterogeneity and relatedness of the distributions. In particular, we assume that the heterogeneous distributions are sampled from a meta-distribution. In this two-level distribution framework, we characterize the generalization error not only for clients participating in the training but also for unparticipating clients. We first show that the generalization error for unparticipating clients can be bounded by the participating generalization error and the participating gap caused by clients' sampling. We further establish fast learning bounds of order $\mathcal{O}(\frac{1}{mn} + \frac{1}{m})$ for unparticipating clients, where $m$ is the number of clients and $n$ is the sample size at each client. To our knowledge, the obtained fast bounds are state-of-the-art in the two-level distribution framework. Moreover, previous theoretical results mostly require the loss function to be bounded. We derive convergence bounds of order $\mathcal{O}(\frac{1}{\sqrt{mn}} + \frac{1}{\sqrt{m}})$ under unbounded-loss assumptions, including sub-exponential and sub-Weibull losses.
https://openreview.net/pdf/99d8d947e127360f989b7314c8418212c738e43d.pdf
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity
https://openreview.net/forum?id=bXNl-myZkJl
https://openreview.net/forum?id=bXNl-myZkJl
Shiwei Liu,Tianlong Chen,Xiaohan Chen,Xuxi Chen,Qiao Xiao,Boqian Wu,Tommi Kärkkäinen,Mykola Pechenizkiy,Decebal Constantin Mocanu,Zhangyang Wang
ICLR 2023,Poster
Transformers have quickly shone in the computer vision world since the emergence of Vision Transformers (ViTs). The dominant role of convolutional neural networks (CNNs) seems to be challenged by increasingly effective transformer-based models. Very recently, a couple of advanced convolutional models strike back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. While one of them, i.e., RepLKNet, impressively manages to scale the kernel size to 31x31 with improved performance, the performance starts to saturate as the kernel size continues growing, compared to the scaling trend of advanced ViTs such as Swin Transformer. In this paper, we explore the possibility of training extreme convolutions larger than 31x31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. This study ends up with a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale up kernels to 61x61 with better performance. Built on this recipe, we propose Sparse Large Kernel Network (SLaK), a pure CNN architecture equipped with sparse factorized 51x51 kernels that can perform on par with or better than state-of-the-art hierarchical Transformers and modern ConvNet architectures like ConvNeXt and RepLKNet, on ImageNet classification as well as a wide range of downstream tasks including semantic segmentation on ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation on MS COCO. Codes are available at https://github.com/VITA-Group/SLaK.
https://openreview.net/pdf/5c2a0ddd8a652ab99b14ffd766607bb364e763b9.pdf
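A sketch of the factorized large-kernel idea, assuming the 51x51 receptive field is approximated by two rectangular depth-wise convolutions plus a small parallel kernel; the dynamic sparsity used during training in the paper is not reproduced, and the branch shapes below are illustrative.

```python
import torch
import torch.nn as nn

class FactorizedLargeKernel(nn.Module):
    """SLaK-style layer: a 51x51 receptive field built from 51x5 and 5x51
    depth-wise convolutions plus a small 5x5 depth-wise kernel, in parallel."""
    def __init__(self, dim, big=51, small=5):
        super().__init__()
        self.kw = nn.Conv2d(dim, dim, (big, small),
                            padding=(big // 2, small // 2), groups=dim)
        self.kh = nn.Conv2d(dim, dim, (small, big),
                            padding=(small // 2, big // 2), groups=dim)
        self.sk = nn.Conv2d(dim, dim, small, padding=small // 2, groups=dim)

    def forward(self, x):
        # Summing the branches mimics a large square kernel at a fraction of the cost.
        return self.kw(x) + self.kh(x) + self.sk(x)

layer = FactorizedLargeKernel(dim=16)
print(layer(torch.randn(1, 16, 64, 64)).shape)   # spatial size preserved
```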
Which Layer is Learning Faster? A Systematic Exploration of Layer-wise Convergence Rate for Deep Neural Networks
https://openreview.net/forum?id=wlMDF1jQF86
https://openreview.net/forum?id=wlMDF1jQF86
Yixiong Chen,Alan Yuille,Zongwei Zhou
ICLR 2023,Poster
The deeply hierarchical structures enable deep neural networks (DNNs) to fit extremely complex target functions. However, the complex interaction between layers also makes the learning process of a particular layer poorly understood. This work demonstrates that the shallower layers of DNNs tend to converge faster than the deeper layers. We call this phenomenon Layer Convergence Bias. We also uncover the fundamental reason behind this phenomenon: Flatter local minima of shallower layers make their gradients more stable and predictive, allowing for faster training. Another surprising result is that the shallower layers tend to learn the low-frequency components of the target function, while the deeper layers usually learn the high-frequency components. It is consistent with the recent discovery that DNNs learn lower frequency objects faster.
https://openreview.net/pdf/43d97adaa7715cb5bb90e928f1889374d65228ee.pdf
A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks
https://openreview.net/forum?id=CJd-BtnwtXq
https://openreview.net/forum?id=CJd-BtnwtXq
Xinyi Wu,Zhengdao Chen,William Wei Wang,Ali Jadbabaie
ICLR 2023,Poster
Oversmoothing is a central challenge of building more powerful Graph Neural Networks (GNNs). While previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. Specifically, we distinguish between two different effects when applying graph convolutions—an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. By quantifying these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is $O(\log N/\log (\log N))$ for sufficiently dense graphs with $N$ nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR), or equivalently, the effects of initial residual connections on oversmoothing. Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. Finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice can be magnified by the difficulty of optimizing deep GNN models.
https://openreview.net/pdf/572cf223b8b9b3354dc46410c80596be2504b4d3.pdf
Scaleformer: Iterative Multi-scale Refining Transformers for Time Series Forecasting
https://openreview.net/forum?id=sCrnllCtjoE
https://openreview.net/forum?id=sCrnllCtjoE
Mohammad Amin Shabani,Amir H. Abdi,Lili Meng,Tristan Sylvain
ICLR 2023,Poster
The performance of time series forecasting has recently been greatly improved by the introduction of transformers. In this paper, we propose a general multi-scale framework that can be applied to state-of-the-art transformer-based time series forecasting models (FEDformer, Autoformer, etc.). By iteratively refining a forecasted time series at multiple scales with shared weights, together with architecture adaptations and a specially-designed normalization scheme, we are able to achieve significant performance improvements with minimal additional computational overhead. Via detailed ablation studies, we demonstrate the effectiveness of our proposed architectural and methodological innovations. Furthermore, our experiments on various public datasets demonstrate that the proposed method outperforms the corresponding baselines. Depending on the choice of transformer architecture, our multi-scale framework results in mean squared error reductions ranging from 5.5% to 38.5%. Our code is publicly available at https://github.com/BorealisAI/scaleformer.
https://openreview.net/pdf/614af7ce5f2748b019a31c2016c30250b1c97b9f.pdf
Liquid Structural State-Space Models
https://openreview.net/forum?id=g4OTKRKfS7R
https://openreview.net/forum?id=g4OTKRKfS7R
Ramin Hasani,Mathias Lechner,Tsun-Hsuan Wang,Makram Chahine,Alexander Amini,Daniela Rus
ICLR 2023,Poster
A proper parametrization of state transition matrices of linear state-space models (SSMs) followed by standard nonlinearities enables them to efficiently learn representations from sequential data, establishing the state-of-the-art on an extensive series of long-range sequence modeling benchmarks. In this paper, we show that we can improve further when the structured SSM, such as S4, is given by a linear liquid time-constant (LTC) state-space model. LTC neural networks are causal continuous-time neural networks with an input-dependent state transition module, which makes them learn to adapt to incoming inputs at inference. We show that by using a diagonal plus low-rank decomposition of the state transition matrix introduced in S4, and a few simplifications, the LTC-based structured state-space model, dubbed Liquid-S4, improves generalization across sequence modeling tasks with long-term dependencies such as image, text, audio, and medical time-series, with an average performance of 87.32\% on the Long-Range Arena benchmark. On the full raw Speech Command recognition dataset, Liquid-S4 achieves 96.78\% accuracy with a 30\% reduction in parameter counts compared to S4. The additional gain in performance is the direct result of the Liquid-S4's kernel structure that takes into account the similarities of the input sequence samples during training and inference.
https://openreview.net/pdf/6140538cbe3bb75e4eaff5d9cdfc39f21f285619.pdf
Equivariant Hypergraph Diffusion Neural Operators
https://openreview.net/forum?id=RiTjKoscnNd
https://openreview.net/forum?id=RiTjKoscnNd
Peihao Wang,Shenghao Yang,Yunyu Liu,Zhangyang Wang,Pan Li
ICLR 2023,Poster
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data and further solve relevant prediction tasks built upon such higher-order relations. However, higher-order relations in practice contain complex patterns and are often highly irregular, so it is often challenging to design an HNN that is expressive enough to capture these relations while remaining computationally efficient. Inspired by hypergraph diffusion algorithms, this work proposes a new HNN architecture named ED-HNN, which provably approximates any continuous equivariant hypergraph diffusion operator, a class that can model a wide range of higher-order relations. ED-HNN can be implemented efficiently by combining star expansions of hypergraphs with standard message passing neural networks. ED-HNN further shows great superiority in processing heterophilic hypergraphs and constructing deep models. We evaluate ED-HNN for node classification on nine real-world hypergraph datasets. ED-HNN uniformly outperforms the best baselines over these nine datasets and achieves gains of more than 2% in prediction accuracy on four of them. Our code is available at: https://github.com/Graph-COM/ED-HNN.
https://openreview.net/pdf/df967fe60a20e0e304bd72e03f2bf158f3c30311.pdf
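The abstract above notes that ED-HNN can be implemented by combining star expansions with standard message passing. The following toy sketch shows that data flow only; the mean aggregations stand in for the learned equivariant diffusion operators:

```python
# Minimal sketch (numpy): star expansion of a hypergraph followed by one round
# of node -> hyperedge -> node message passing with mean aggregation.  In
# ED-HNN the two aggregation steps would be learned (equivariant) networks;
# plain means are used here only to show the data flow.
import numpy as np

hyperedges = [(0, 1, 2), (1, 2, 3), (3, 4)]        # toy hypergraph
num_nodes = 5
X = np.arange(num_nodes, dtype=float).reshape(-1, 1)   # toy node features

# Star expansion: each hyperedge becomes an auxiliary node connected
# (bipartitely) to all of its members.
edge_index = [(v, e) for e, members in enumerate(hyperedges) for v in members]

# Step 1: node -> hyperedge messages (mean of member features).
E = np.zeros((len(hyperedges), X.shape[1]))
for e, members in enumerate(hyperedges):
    E[e] = X[list(members)].mean(axis=0)

# Step 2: hyperedge -> node messages (mean over incident hyperedges),
# combined with the node's own features.
X_new = X.copy()
for v in range(num_nodes):
    incident = [e for u, e in edge_index if u == v]
    if incident:
        X_new[v] = 0.5 * X[v] + 0.5 * E[incident].mean(axis=0)

print(X_new.ravel())
```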
Ollivier-Ricci Curvature for Hypergraphs: A Unified Framework
https://openreview.net/forum?id=sPCKNl5qDps
https://openreview.net/forum?id=sPCKNl5qDps
Corinna Coupette,Sebastian Dalleiger,Bastian Rieck
ICLR 2023,Poster
Bridging geometry and topology, curvature is a powerful and expressive invariant. While the utility of curvature has been theoretically and empirically confirmed in the context of manifolds and graphs, its generalization to the emerging domain of hypergraphs has remained largely unexplored. On graphs, the Ollivier-Ricci curvature measures differences between random walks via Wasserstein distances, thus grounding a geometric concept in ideas from probability theory and optimal transport. We develop Orchid, a flexible framework generalizing Ollivier-Ricci curvature to hypergraphs, and prove that the resulting curvatures have favorable theoretical properties. Through extensive experiments on synthetic and real-world hypergraphs from different domains, we demonstrate that Orchid curvatures are both scalable and useful to perform a variety of hypergraph tasks in practice.
https://openreview.net/pdf/cf1554077cc4c42ade51993934dbe9c8c16d4ef0.pdf
Hard-Meta-Dataset++: Towards Understanding Few-Shot Performance on Difficult Tasks
https://openreview.net/forum?id=wq0luyH3m4
https://openreview.net/forum?id=wq0luyH3m4
Samyadeep Basu,Megan Stanley,John F Bronskill,Soheil Feizi,Daniela Massiceti
ICLR 2023,Poster
Few-shot classification is the ability to adapt to any new classification task from only a few training examples. The performance of current top-performing few-shot classifiers varies widely across tasks, and they often fail on a subset of `difficult' tasks. This phenomenon has real-world consequences for deployed few-shot systems where safety and reliability are paramount, yet little has been done to understand these failure cases. In this paper, we study these difficult tasks to gain a more nuanced understanding of the limitations of current methods. To this end, we develop a general and computationally efficient algorithm called FastDiffSel to extract difficult tasks from any large-scale vision dataset. Notably, our algorithm can extract tasks at least 20x faster than existing methods, enabling its use on large-scale datasets. We use FastDiffSel to extract difficult tasks from Meta-Dataset, a widely-used few-shot classification benchmark, and other challenging large-scale vision datasets including ORBIT, CURE-OR and ObjectNet. These tasks are curated into Hard-MD++, a new few-shot testing benchmark to promote the development of methods that are robust to even the most difficult tasks. We use Hard-MD++ to stress-test an extensive suite of few-shot classification methods and show that state-of-the-art approaches fail catastrophically on difficult tasks. We believe that our extraction algorithm FastDiffSel and Hard-MD++ will aid researchers in further understanding failure modes of few-shot classification models.
https://openreview.net/pdf/677bd75f3d760e8f3557883a7a87346e49ab51fe.pdf
Compositional Semantic Parsing with Large Language Models
https://openreview.net/forum?id=gJW8hSGBys8
https://openreview.net/forum?id=gJW8hSGBys8
Andrew Drozdov,Nathanael Schärli,Ekin Akyürek,Nathan Scales,Xinying Song,Xinyun Chen,Olivier Bousquet,Denny Zhou
ICLR 2023,Poster
Humans can reason compositionally when presented with new tasks. Previous research shows that appropriate prompting techniques enable large language models (LLMs) to solve artificial compositional generalization tasks such as SCAN. In this work, we identify additional challenges in more realistic semantic parsing tasks with larger vocabulary and refine these prompting techniques to address them. Our best method is based on least-to-most prompting: it decomposes the problem using prompting-based syntactic parsing, then uses this decomposition to select appropriate exemplars and to sequentially generate the semantic parse. This method allows us to set a new state of the art for CFQ while requiring only 1% of the training data used by traditional approaches. Due to the general nature of our approach, we expect similar efforts will lead to new results in other tasks and domains, especially for knowledge-intensive applications.
https://openreview.net/pdf/668ef1e66f349e87c8948f0e5e5984608ebef31d.pdf
TiAda: A Time-scale Adaptive Algorithm for Nonconvex Minimax Optimization
https://openreview.net/forum?id=zClyiZ5V6sL
https://openreview.net/forum?id=zClyiZ5V6sL
Xiang Li,Junchi YANG,Niao He
ICLR 2023,Poster
Adaptive gradient methods have shown their ability to adjust the stepsizes on the fly in a parameter-agnostic manner, and empirically achieve faster convergence for solving minimization problems. When it comes to nonconvex minimax optimization, however, current convergence analyses of gradient descent ascent (GDA) combined with adaptive stepsizes require careful tuning of hyper-parameters and the knowledge of problem-dependent parameters. Such a discrepancy arises from the primal-dual nature of minimax problems and the necessity of delicate time-scale separation between the primal and dual updates in attaining convergence. In this work, we propose a single-loop adaptive GDA algorithm called TiAda for nonconvex minimax optimization that automatically adapts to the time-scale separation. Our algorithm is fully parameter-agnostic and can achieve near-optimal complexities simultaneously in deterministic and stochastic settings of nonconvex-strongly-concave minimax problems. The effectiveness of the proposed method is further justified numerically for a number of machine learning applications.
https://openreview.net/pdf/407e9df8916359b9f1ee5e38407c78c92748649e.pdf
FaiREE: fair classification with finite-sample and distribution-free guarantee
https://openreview.net/forum?id=shzu8d6_YAR
https://openreview.net/forum?id=shzu8d6_YAR
Puheng Li,James Zou,Linjun Zhang
ICLR 2023,Poster
Algorithmic fairness plays an increasingly critical role in machine learning research. Several group fairness notions and algorithms have been proposed. However, the fairness guarantees of existing fair classification methods mainly depend on specific data distributional assumptions, often requiring large sample sizes, and fairness can be violated when there is only a modest number of samples, which is often the case in practice. In this paper, we propose FaiREE, a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees. FaiREE can be adapted to satisfy various group fairness notions (e.g., Equality of Opportunity, Equalized Odds, Demographic Parity) and achieve optimal accuracy. These theoretical guarantees are further supported by experiments on both synthetic and real data. FaiREE is shown to have favorable performance over state-of-the-art algorithms.
https://openreview.net/pdf/0ef42c8eae510f399cb004342de52a2a9b3005e3.pdf
Exponential Generalization Bounds with Near-Optimal Rates for $L_q$-Stable Algorithms
https://openreview.net/forum?id=1_jtWjhSSkr
https://openreview.net/forum?id=1_jtWjhSSkr
Xiaotong Yuan,Ping Li
ICLR 2023,Poster
The \emph{stability} of learning algorithms to changes in the training sample has been actively studied as a powerful proxy for reasoning about generalization. Recently, exponential generalization and excess risk bounds with near-optimal rates have been obtained under the stringent and distribution-free notion of uniform stability~\citep{bousquet2020sharper,klochkov2021stability}. Meanwhile, under the weaker, distribution-dependent notion of $L_q$-stability, exponential generalization bounds are also available, though so far only with sub-optimal rates. Therefore, a fundamental question we would like to address in this paper is whether it is possible to derive near-optimal exponential generalization bounds for $L_q$-stable learning algorithms. As the core contribution of the present work, we give an affirmative answer to this question by developing strict analogues of the near-optimal generalization and risk bounds of uniformly stable algorithms for $L_q$-stable algorithms. Further, we demonstrate the power of our improved $L_q$-stability and generalization theory by applying it to derive strong sparse excess risk bounds, under mild conditions, for computationally tractable sparsity estimation algorithms such as Iterative Hard Thresholding (IHT).
https://openreview.net/pdf/de359ab88345f813772416a324f7c4dc07f7eb40.pdf
Disentangling Learning Representations with Density Estimation
https://openreview.net/forum?id=EMvG1Jdhw_8
https://openreview.net/forum?id=EMvG1Jdhw_8
Eric Yeats,Frank Y Liu,Hai Li
ICLR 2023,Poster
Disentangled learning representations have promising utility in many applications, but they currently suffer from serious reliability issues. We present Gaussian Channel Autoencoder (GCAE), a method which achieves reliable disentanglement via scalable non-parametric density estimation of the latent space. GCAE avoids the curse of dimensionality of density estimation by disentangling subsets of its latent space with the Dual Total Correlation (DTC) metric, thereby representing its high-dimensional latent joint distribution as a collection of many low-dimensional conditional distributions. In our experiments, GCAE achieves highly competitive and reliable disentanglement scores compared with state-of-the-art baselines.
https://openreview.net/pdf/9bb8225a142376adea18d8a72dc2531bc22d89e0.pdf
Teacher Guided Training: An Efficient Framework for Knowledge Transfer
https://openreview.net/forum?id=GVSf7Z7DbYL
https://openreview.net/forum?id=GVSf7Z7DbYL
Manzil Zaheer,Ankit Singh Rawat,Seungyeon Kim,Chong You,Himanshu Jain,Andreas Veit,Rob Fergus,Sanjiv Kumar
ICLR 2023,Poster
The remarkable performance gains realized by large pretrained models, e.g., GPT-3, hinge on the massive amounts of data they are exposed to during training. Analogously, distilling such large models to compact models for efficient deployment also necessitates a large amount of (labeled or unlabeled) training data. In this paper, we propose the teacher-guided training (TGT) framework for training a high-quality compact model that leverages the knowledge acquired by pretrained generative models, while obviating the need to go through a large volume of data. TGT exploits the fact that the teacher has acquired a good representation of the underlying data domain, which typically corresponds to a much lower dimensional manifold than the input space. Furthermore, we can use the teacher to explore input space more efficiently through sampling or gradient-based methods; thus, making TGT especially attractive for limited data or long-tail settings. We formally capture this benefit of proposed data-domain exploration in our generalization bounds. We find that TGT can improve accuracy on several image classification benchmarks as well as a range of text classification and retrieval tasks.
https://openreview.net/pdf/fc2352cc277d4c634bf33e196742780db863eab2.pdf
Neural Agents Struggle to Take Turns in Bidirectional Emergent Communication
https://openreview.net/forum?id=GULFHQfgw0g
https://openreview.net/forum?id=GULFHQfgw0g
Valentin Taillandier,Dieuwke Hupkes,Benoît Sagot,Emmanuel Dupoux,Paul Michel
ICLR 2023,Poster
The spontaneous exchange of turns is a central aspect of human communication. Although turn-taking conventions come to us naturally, artificial dialogue agents struggle to coordinate, and must rely on hard-coded rules to engage in interactive conversations with human interlocutors. In this paper, we investigate the conditions under which artificial agents may naturally develop turn-taking conventions in a simple language game. We describe a cooperative task where success is contingent on the exchange of information along a shared communication channel where talking over each other hinders communication. Despite these environmental constraints, neural-network based agents trained to solve this task with reinforcement learning do not systematically adopt turn-taking conventions. However, we find that agents that do agree on turn-taking protocols end up performing better. Moreover, agents that are forced to perform turn-taking can learn to solve the task more quickly. This suggests that turn-taking may help to generate conversations that are easier for speakers to interpret.
https://openreview.net/pdf/f6e0700b1a4de32f13aabefaa7a865a60b7ce2f2.pdf
Prompting GPT-3 To Be Reliable
https://openreview.net/forum?id=98p5x51L5af
https://openreview.net/forum?id=98p5x51L5af
Chenglei Si,Zhe Gan,Zhengyuan Yang,Shuohang Wang,Jianfeng Wang,Jordan Lee Boyd-Graber,Lijuan Wang
ICLR 2023,Poster
Large language models (LLMs) show impressive abilities via few-shot prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use in real-world language applications. However, the crucial problem of how to improve the reliability of GPT-3 is still under-explored. While reliability is a broad and vaguely defined term, we decompose reliability into four main facets that correspond to the existing framework of ML safety and are well-recognized to be important: generalizability, social biases, calibration, and factuality. Our core contribution is to establish simple and effective prompts that improve GPT-3’s reliability as it: 1) generalizes out-of-distribution, 2) balances demographic distribution and uses natural language instructions to reduce social biases, 3) calibrates output probabilities, and 4) updates the LLM’s factual knowledge and reasoning chains. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets. We release all processed datasets, evaluation scripts, and model predictions. Our systematic empirical study not only sheds new insights on the reliability of prompting LLMs, but more importantly, our prompting strategies can help practitioners more reliably use LLMs like GPT-3.
https://openreview.net/pdf/1545ad3e1d44fe3f8431c30da415e1dd55352da5.pdf
Human alignment of neural network representations
https://openreview.net/forum?id=ReDQ1OUQR0X
https://openreview.net/forum?id=ReDQ1OUQR0X
Lukas Muttenthaler,Jonas Dippel,Lorenz Linhardt,Robert A. Vandermeulen,Simon Kornblith
ICLR 2023,Poster
Today’s computer vision models achieve human or near-human level performance across a wide variety of vision tasks. However, their architectures, data, and learning algorithms differ in numerous ways from those that give rise to human vision. In this paper, we investigate the factors that affect the alignment between the representations learned by neural networks and human mental representations inferred from behavioral responses. We find that model scale and architecture have essentially no effect on the alignment with human behavioral responses, whereas the training dataset and objective function both have a much larger impact. These findings are consistent across three datasets of human similarity judgments collected using two different tasks. Linear transformations of neural network representations learned from behavioral responses from one dataset substantially improve alignment with human similarity judgments on the other two datasets. In addition, we find that some human concepts such as food and animals are well-represented by neural networks whereas others such as royal or sports-related objects are not. Overall, although models trained on larger, more diverse datasets achieve better alignment with humans than models trained on ImageNet alone, our results indicate that scaling alone is unlikely to be sufficient to train neural networks with conceptual representations that match those used by humans.
https://openreview.net/pdf/737f76b5d5f5d7cf1679f111fdaa8952894de1e7.pdf
Unbiased Stochastic Proximal Solver for Graph Neural Networks with Equilibrium States
https://openreview.net/forum?id=j3cUWIMsFBN
https://openreview.net/forum?id=j3cUWIMsFBN
Mingjie Li,Yifei Wang,Yisen Wang,Zhouchen Lin
ICLR 2023,Poster
Graph Neural Networks (GNNs) are widely used deep learning models that can extract meaningful representations from graph datasets and achieve great success in many machine learning tasks. Among them, graph neural networks with iterative updates, such as unfolded GNNs and implicit GNNs, can effectively capture long-range dependencies in graphs and demonstrate superior performance on large graphs, since they can mathematically ensure convergence to some nontrivial solution after many aggregations. However, the aggregation cost of such models is high, since they need to aggregate over the full graph in each update. This weakness limits the scalability of implicit graph models. To tackle these limitations, we propose two unbiased stochastic proximal solvers, USP and USP-VR, inspired by the stochastic proximal gradient descent method and its variance-reduced variant. From the viewpoint of stochastic optimization, we theoretically prove that our solvers are unbiased and converge to the same solutions as the original solvers for unfolded GNNs and implicit GNNs. Furthermore, the computational complexities of unfolded GNNs and implicit GNNs with our proposed solvers are significantly lower than those of their vanilla versions. Experiments on various large graph datasets show that our proposed solvers are more efficient and can achieve state-of-the-art performance.
https://openreview.net/pdf/6083f5ed4faea8706b60972e9d73de80a5820414.pdf
DiGress: Discrete Denoising diffusion for graph generation
https://openreview.net/forum?id=UaAD-Nu86WX
https://openreview.net/forum?id=UaAD-Nu86WX
Clement Vignac,Igor Krawczuk,Antoine Siraudin,Bohan Wang,Volkan Cevher,Pascal Frossard
ICLR 2023,Poster
This work introduces DiGress, a discrete denoising diffusion model for generating graphs with categorical node and edge attributes. Our model utilizes a discrete diffusion process that progressively edits graphs with noise, through the process of adding or removing edges and changing the categories. A graph transformer network is trained to revert this process, simplifying the problem of distribution learning over graphs into a sequence of node and edge classification tasks. We further improve sample quality by introducing a Markovian noise model that preserves the marginal distribution of node and edge types during diffusion, and by incorporating auxiliary graph-theoretic features. A procedure for conditioning the generation on graph-level features is also proposed. DiGress achieves state-of-the-art performance on molecular and non-molecular datasets, with up to 3x validity improvement on a planar graph dataset. It is also the first model to scale to the large GuacaMol dataset containing 1.3M drug-like molecules without the use of molecule-specific representations.
https://openreview.net/pdf/ca5c988dd881d65b94fffbf35e0eaa5c5be23585.pdf
How to prepare your task head for finetuning
https://openreview.net/forum?id=gVOXZproe-e
https://openreview.net/forum?id=gVOXZproe-e
Yi Ren,Shangmin Guo,Wonho Bae,Danica J. Sutherland
ICLR 2023,Poster
In the era of deep learning, transferring information from a pretrained network to a downstream task by finetuning has many benefits. The choice of task head plays an important role in finetuning, as the pretrained and downstream tasks are usually different. Although there exist many different designs for finetuning, a full understanding of when and why these algorithms work has been elusive. We analyze how the choice of task head controls feature adaptation and hence influences the downstream performance. By decomposing the feature's learning dynamics, we find the key aspect is the training accuracy and loss at the beginning of finetuning, which determines the "energy" available for the feature's adaptation. We identify a significant trend in the effect of changes in this initial energy on the resulting features after finetuning. Specifically, as the energy increases, the Euclidean and cosine distances between the resulting and original features increase, while their dot product (and the resulting features’ norm) first increases and then decreases. Inspired by this, we give several practical principles that lead to better downstream performance. We analytically prove this trend in an overparameterized linear setting and verify its applicability to different experimental settings.
https://openreview.net/pdf/733e62f9bacedec2adc398eebb9457397b5a0713.pdf
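As a rough numerical illustration of the "initial energy" idea described above, the sketch below compares the loss at the start of finetuning for a randomly initialized task head versus a head first fit on frozen features (linear probing). The synthetic encoder and data are stand-ins, not the paper's setup:

```python
# Sketch (numpy): compare the loss at the *start* of finetuning -- the "energy"
# available for feature adaptation -- for a random task head vs. a head that
# was first fit on frozen pretrained features (linear probing).  The pretrained
# encoder is a fixed random projection here, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_feat, n_cls = 512, 20, 16, 3
Xraw = rng.standard_normal((n, d_in))
W_enc = rng.standard_normal((d_in, d_feat)) / np.sqrt(d_in)   # "pretrained" encoder
feats = np.maximum(Xraw @ W_enc, 0.0)                          # frozen features
true_W = rng.standard_normal((d_feat, n_cls))
y = (feats @ true_W + 0.5 * rng.standard_normal((n, n_cls))).argmax(1)  # downstream labels

def ce_loss(W):
    logits = feats @ W
    logits = logits - logits.max(1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -logp[np.arange(n), y].mean()

def fit_head(steps=300, lr=0.5):                 # linear probing on frozen features
    W = np.zeros((d_feat, n_cls))
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        p[np.arange(n), y] -= 1.0
        W -= lr * feats.T @ p / n
    return W

W_random = 0.1 * rng.standard_normal((d_feat, n_cls))
W_probed = fit_head()
print("initial finetuning loss, random head :", ce_loss(W_random))
print("initial finetuning loss, probed head :", ce_loss(W_probed))
# A lower initial loss means less "energy" for feature adaptation during
# finetuning, which the abstract above links to smaller feature drift.
```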
DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
https://openreview.net/forum?id=jQj-_rLVXsj
https://openreview.net/forum?id=jQj-_rLVXsj
Shansan Gong,Mukai Li,Jiangtao Feng,Zhiyong Wu,Lingpeng Kong
ICLR 2023,Poster
Recently, diffusion models have emerged as a new paradigm for generative models. Despite the success in domains using continuous signals such as vision and audio, adapting diffusion models to natural language is under-explored due to the discrete nature of texts, especially for conditional generation. We tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find DiffuSeq achieving comparable or even better performance than six established baselines, including a state-of-the-art model that is based on pre-trained language models. Apart from quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desired in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq
https://openreview.net/pdf/60eecf7c181638fdfa60c671ca5d0b67644748cd.pdf
Policy Expansion for Bridging Offline-to-Online Reinforcement Learning
https://openreview.net/forum?id=-Y34L45JR6z
https://openreview.net/forum?id=-Y34L45JR6z
Haichao Zhang,Wei Xu,Haonan Yu
ICLR 2023,Poster
Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies by leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline. In this work, we introduce a policy expansion scheme for this task. After learning the offline policy, we use it as one candidate policy in a policy set, and further learn another policy, responsible for subsequent learning, as an expansion to the policy set. The two policies are composed in an adaptive manner for interacting with the environment. With this approach, the policy previously learned offline is fully retained during online learning, thus mitigating potential issues such as destroying the useful behaviors of the offline policy in the initial stage of online learning, while allowing the offline policy to participate in exploration naturally in an adaptive manner. Moreover, new useful behaviors can potentially be captured by the newly added policy through learning. Experiments are conducted on a number of tasks and the results demonstrate the effectiveness of the proposed approach.
https://openreview.net/pdf/bc171436c99bb31ea2d883553ed3204db027ced8.pdf
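A minimal sketch of the policy-expansion idea follows. Composing the frozen offline policy and the new policy via a softmax over a critic's Q-values is an illustrative assumption; the paper specifies its own adaptive composition rule:

```python
# Sketch (numpy) of the policy-expansion idea: keep the frozen offline policy
# in a policy set, add a new learnable policy, and pick between them
# adaptively at each step.  Selecting via a softmax over the critic's Q-values
# is an illustrative choice here, not necessarily the paper's exact rule.
import numpy as np

rng = np.random.default_rng(0)

def offline_policy(state):          # frozen, e.g. learned by offline RL
    return -0.5 * state

def new_policy(state, theta):       # expandable policy, trained online
    return theta * state

def q_value(state, action):         # stand-in critic
    return -(action - 0.3 * state) ** 2

def act(state, theta, temperature=1.0):
    candidates = [offline_policy(state), new_policy(state, theta)]
    q = np.array([q_value(state, a) for a in candidates])
    probs = np.exp(q / temperature)
    probs /= probs.sum()
    idx = rng.choice(len(candidates), p=probs)   # adaptive composition
    return candidates[idx], idx

theta = 0.1
counts = np.zeros(2)
for _ in range(1000):
    s = rng.standard_normal()
    _, idx = act(s, theta)
    counts[idx] += 1
print("selection frequency [offline, new]:", counts / counts.sum())
```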
Mitigating Memorization of Noisy Labels via Regularization between Representations
https://openreview.net/forum?id=6qcYDVlVLnK
https://openreview.net/forum?id=6qcYDVlVLnK
Hao Cheng,Zhaowei Zhu,Xing Sun,Yang Liu
ICLR 2023,Poster
Designing robust loss functions is popular in learning with noisy labels, but existing designs do not explicitly consider the overfitting property of deep neural networks (DNNs). As a result, applying these losses may still suffer from overfitting/memorizing noisy labels as training proceeds. In this paper, we first theoretically analyze the memorization effect and show that a lower-capacity model may perform better on noisy datasets. However, it is non-trivial to design a neural network with the best capacity given an arbitrary task. To circumvent this dilemma, instead of changing the model architecture, we decouple DNNs into an encoder followed by a linear classifier and propose to restrict the function space of a DNN by a representation regularizer. Particularly, we require the distance between two self-supervised features to be positively related to the distance between the corresponding two supervised model outputs. Our proposed framework is easily extendable and can incorporate many other robust loss functions to further improve performance. Extensive experiments and theoretical analyses support our claims. Code is available at https://github.com/UCSC-REAL/SelfSup_NoisyLabel.
https://openreview.net/pdf/67fbfc36cf99ba2642db423695e9c98a9b281f14.pdf
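One simple way to encode the representation regularizer described above is to penalize negative correlation between pairwise distances in the (frozen) self-supervised feature space and pairwise distances of the supervised model outputs. The sketch below uses this form for illustration; the paper's exact regularizer may differ:

```python
# Sketch (numpy) of a representation regularizer in the spirit described above:
# pairwise distances between (frozen) self-supervised features should be
# positively related to pairwise distances between the supervised model's
# outputs.  Penalizing their negative correlation is one simple way to encode
# this; the paper's exact regularizer may differ.
import numpy as np

def pairwise_dists(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def representation_regularizer(ssl_feats, model_outputs):
    iu = np.triu_indices(len(ssl_feats), k=1)
    d_ssl = pairwise_dists(ssl_feats)[iu]
    d_out = pairwise_dists(model_outputs)[iu]
    corr = np.corrcoef(d_ssl, d_out)[0, 1]
    return -corr          # minimized when the two distance structures agree

rng = np.random.default_rng(0)
ssl_feats = rng.standard_normal((32, 64))      # frozen self-supervised features
outputs_good = ssl_feats @ rng.standard_normal((64, 10))   # roughly preserves geometry
outputs_bad = rng.standard_normal((32, 10))                # ignores geometry
print("aligned outputs :", representation_regularizer(ssl_feats, outputs_good))
print("random outputs  :", representation_regularizer(ssl_feats, outputs_bad))
```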
Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs
https://openreview.net/forum?id=dqnNW2omZL6
https://openreview.net/forum?id=dqnNW2omZL6
Chenxiao Yang,Qitian Wu,Jiahua Wang,Junchi Yan
ICLR 2023,Poster
Graph neural networks (GNNs), as the de-facto model class for representation learning on graphs, are built upon the multi-layer perceptrons (MLP) architecture with additional message passing layers to allow features to flow across nodes. While conventional wisdom commonly attributes the success of GNNs to their advanced expressivity, we conjecture that this is not the main cause of GNNs' superiority in node-level prediction tasks. This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP, which is identical to standard MLP in training, but then adopts GNN's architecture in testing. Intriguingly, we observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training. This finding provides a new perspective for understanding the learning behavior of GNNs, and can be used as an analytic tool for dissecting various GNN-related research problems including expressivity, generalization, over-smoothing and heterophily. As an initial step to analyze PMLP, we show that its essential difference from MLPs in the infinite-width limit lies in the NTK feature map in the post-training stage. Moreover, through extrapolation analysis (i.e., generalization under distribution shifts), we find that though most GNNs and their PMLP counterparts cannot extrapolate non-linear functions for extreme out-of-distribution data, they have greater potential to generalize to testing data near the training data support as natural advantages of the GNN architecture used for inference.
https://openreview.net/pdf/46a2475b833d9631645d3d04015622e6eecb0cf0.pdf
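The PMLP recipe described above is easy to sketch: train a graph-free classifier on node features, then apply normalized-adjacency propagation only at inference. The toy example below (synthetic homophilous graph, softmax-regression stand-in for the MLP) is illustrative, not the authors' implementation:

```python
# Minimal numpy sketch of the PMLP recipe: train a plain (graph-free) softmax
# classifier on node features, then at *test time* propagate features with the
# normalized adjacency before applying the same trained weights.
import numpy as np

rng = np.random.default_rng(0)
N, d, C = 200, 8, 2
y = np.repeat([0, 1], N // 2)
mu = rng.standard_normal((C, d))
X = mu[y] + 1.5 * rng.standard_normal((N, d))           # noisy node features

# Homophilous graph: more edges within classes than across them.
prob = np.where(y[:, None] == y[None, :], 0.08, 0.01)
A = np.triu((rng.random((N, N)) < prob), 1).astype(float)
A = A + A.T + np.eye(N)
deg = A.sum(1)
S = A / np.sqrt(deg[:, None] * deg[None, :])             # normalized adjacency

def train_softmax(X, y, steps=300, lr=0.5):
    W = np.zeros((X.shape[1], C))
    for _ in range(steps):                                # graph-free training
        Z = X @ W
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)
        P[np.arange(N), y] -= 1.0
        W -= lr * X.T @ P / N
    return W

def accuracy(H, W):
    return ((H @ W).argmax(1) == y).mean()

W = train_softmax(X, y)               # trained exactly like a plain MLP/linear model
print("no propagation (MLP-style)       :", accuracy(X, W))
print("PMLP: propagate only at test time :", accuracy(S @ (S @ X), W))
```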
Learning Cut Selection for Mixed-Integer Linear Programming via Hierarchical Sequence Model
https://openreview.net/forum?id=Zob4P9bRNcK
https://openreview.net/forum?id=Zob4P9bRNcK
Zhihai Wang,Xijun Li,Jie Wang,Yufei Kuang,Mingxuan Yuan,Jia Zeng,Yongdong Zhang,Feng Wu
ICLR 2023,Poster
Cutting planes (cuts) are important for solving mixed-integer linear programs (MILPs), which formulate a wide range of important real-world applications. Cut selection---which aims to select a proper subset of the candidate cuts to improve the efficiency of solving MILPs---heavily depends on (P1) which cuts should be preferred, and (P2) how many cuts should be selected. Although many modern MILP solvers tackle (P1)-(P2) by manually designed heuristics, machine learning offers a promising approach to learn more effective heuristics from MILPs collected from specific applications. However, many existing learning-based methods focus on learning which cuts should be preferred, neglecting the importance of learning the number of cuts that should be selected. Moreover, we observe from extensive empirical results that (P3) what order of selected cuts should be preferred has a significant impact on the efficiency of solving MILPs as well. To address this challenge, we propose a novel hierarchical sequence model (HEM) to learn cut selection policies via reinforcement learning. Specifically, HEM consists of a two-level model: (1) a higher-level model to learn the number of cuts that should be selected, (2) and a lower-level model---that formulates the cut selection task as a sequence to sequence learning problem---to learn policies selecting an ordered subset with the size determined by the higher-level model. To the best of our knowledge, HEM is the first method that can tackle (P1)-(P3) in cut selection simultaneously from a data-driven perspective. Experiments show that HEM significantly improves the efficiency of solving MILPs compared to human-designed and learning-based baselines on both synthetic and large-scale real-world MILPs, including MIPLIB 2017. Moreover, experiments demonstrate that HEM well generalizes to MILPs that are significantly larger than those seen during training.
https://openreview.net/pdf/6885b5f02c9a39764dee43349192398b48a69fd5.pdf
BSTT: A Bayesian Spatial-Temporal Transformer for Sleep Staging
https://openreview.net/forum?id=ZxdkjTgK_Dl
https://openreview.net/forum?id=ZxdkjTgK_Dl
Yuchen Liu,Ziyu Jia
ICLR 2023,Poster
Sleep staging is helpful in assessing sleep quality and diagnosing sleep disorders. However, how to adequately capture the temporal and spatial relations of the brain during sleep remains a challenge. In particular, existing methods cannot adaptively infer spatial-temporal relations of the brain under different sleep stages. In this paper, we propose a novel Bayesian spatial-temporal relation inference neural network, named Bayesian spatial-temporal transformer (BSTT), for sleep staging. Our model is able to adaptively infer brain spatial-temporal relations during sleep for spatial-temporal feature modeling through a well-designed Bayesian relation inference component. Meanwhile, our model also includes a spatial transformer for extracting brain spatial features and a temporal transformer for capturing temporal features. Experiments show that our BSTT outperforms state-of-the-art baselines on ISRUC and MASS datasets. In addition, the visual analysis shows that the spatial-temporal relations obtained by BSTT inference have certain interpretability for sleep staging.
https://openreview.net/pdf/784a89e23b8d870b4b7d5f396e930c6d4634f2d9.pdf
Improving Deep Policy Gradients with Value Function Search
https://openreview.net/forum?id=6qZC7pfenQm
https://openreview.net/forum?id=6qZC7pfenQm
Enrico Marchesini,Christopher Amato
ICLR 2023,Poster
Deep Policy Gradient (PG) algorithms employ value networks to drive the learning of parameterized policies and reduce the variance of the gradient estimates. However, value function approximation gets stuck in local optima and struggles to fit the actual return, limiting the variance reduction efficacy and leading policies to sub-optimal performance. This paper focuses on improving value approximation and analyzing the effects on Deep PG primitives such as value prediction, variance reduction, and correlation of gradient estimates with the true gradient. To this end, we introduce a Value Function Search that employs a population of perturbed value networks to search for a better approximation. Our framework does not require additional environment interactions, gradient computations, or ensembles, providing a computationally inexpensive approach to enhance the supervised learning task on which value networks train. Crucially, we show that improving Deep PG primitives results in improved sample efficiency and policies with higher returns using common continuous control benchmark domains.
https://openreview.net/pdf/f7835d07853f4262b59ffe0f6dde2458de1d5eed.pdf
Medical Image Understanding with Pretrained Vision Language Models: A Comprehensive Study
https://openreview.net/forum?id=txlWziuCE5W
https://openreview.net/forum?id=txlWziuCE5W
Ziyuan Qin,Huahui Yi,Qicheng Lao,Kang Li
ICLR 2023,Poster
The large-scale pre-trained vision language models (VLM) have shown remarkable domain transfer capability on natural images. However, it remains unknown whether this capability can also apply to the medical image domain. This paper thoroughly studies the knowledge transferability of pre-trained VLMs to the medical domain, where we show that well-designed medical prompts are the key to eliciting knowledge from pre-trained VLMs. We demonstrate that by prompting with expressive attributes that are shared between domains, the VLM can carry the knowledge across domains and improve its generalization. This mechanism empowers VLMs to recognize novel objects with fewer or without image samples. Furthermore, to avoid the laborious manual designing process, we develop three approaches for the automatic generation of medical prompts, which can inject expert-level medical knowledge and image-specific information into the prompts for fine-grained grounding. We conduct extensive experiments on thirteen different medical datasets across various modalities, showing that our well-designed prompts greatly improve the zero-shot performance compared to the default prompts, and our fine-tuned models surpass the supervised models by a significant margin.
https://openreview.net/pdf/8e53cd494ff16bfef607704574e7a1e2c770f607.pdf
Temporal Coherent Test Time Optimization for Robust Video Classification
https://openreview.net/forum?id=-t4D61w4zvQ
https://openreview.net/forum?id=-t4D61w4zvQ
Chenyu Yi,SIYUAN YANG,Yufei Wang,Haoliang Li,Yap-peng Tan,Alex Kot
ICLR 2023,Poster
Deep neural networks are likely to fail when the test data is corrupted in real-world deployment (e.g., blur, weather, etc.). Test-time optimization is an effective way to adapt models to corrupted data during testing, as has been shown in the image domain. However, few techniques exist for improving the corruption robustness of video classification. In this work, we propose a Temporal Coherent Test-time Optimization framework (TeCo) to utilize spatio-temporal information in test-time optimization for robust video classification. To exploit information in video with self-supervised learning, TeCo minimizes the entropy of the prediction based on the global content from video clips. Meanwhile, it also feeds local content to regularize the temporal coherence at the feature level. TeCo retains the generalization ability of various video classification models and achieves significant improvements in corruption robustness across Mini Kinetics-C and Mini SSV2-C. Furthermore, TeCo sets a new baseline in video classification corruption robustness via test-time optimization.
https://openreview.net/pdf/ab4b8621af83bb5da1dea635a2e727489c8345d5.pdf
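A minimal PyTorch sketch of test-time optimization in the spirit of TeCo is given below: a few gradient steps that minimize the entropy of the clip-level prediction while regularizing temporal coherence between adjacent-frame features. The tiny encoder, the random "clip", and the coherence penalty form are placeholders, not the paper's architecture or losses:

```python
# Minimal PyTorch sketch of test-time optimization in the spirit described
# above: minimize the entropy of the clip-level prediction and regularize
# temporal coherence of adjacent-frame features.  The tiny model and the exact
# losses are placeholders, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, feat_dim, frames = 10, 32, 16

encoder = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU())   # per-frame encoder
head = nn.Linear(feat_dim, num_classes)
optimizer = torch.optim.SGD(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

video = torch.randn(frames, 128)             # one corrupted test clip (stand-in)

for step in range(10):                        # a few test-time adaptation steps
    feats = encoder(video)                    # (frames, feat_dim)
    logits = head(feats.mean(dim=0, keepdim=True))   # global-content prediction
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    coherence = F.mse_loss(feats[1:], feats[:-1])    # adjacent frames stay close
    loss = entropy + 0.1 * coherence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 3 == 0:
        print(f"step {step}: entropy={entropy.item():.3f} coherence={coherence.item():.3f}")
```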
A Learning Based Hypothesis Test for Harmful Covariate Shift
https://openreview.net/forum?id=rdfgqiwz7lZ
https://openreview.net/forum?id=rdfgqiwz7lZ
Tom Ginsberg,Zhongyuan Liang,Rahul G Krishnan
ICLR 2023,Poster
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
https://openreview.net/pdf/8f9de31673b00462ace0dc530187edebaca27efa.pdf
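The two test statistics mentioned above, disagreement rate and predictive entropy, are easy to compute once an ensemble's per-member predictions are available. The sketch below illustrates only that computation; how the ensemble is trained to agree on training data and disagree on test data, and how the statistics are calibrated into a hypothesis test, is omitted:

```python
# Sketch (numpy): the two detection statistics mentioned above -- disagreement
# rate and predictive entropy -- computed from the per-member predictions of an
# ensemble on a batch of test samples.  How the ensemble is trained (agree on
# train, disagree on test) is out of scope for this snippet.
import numpy as np

def disagreement_rate(member_probs):
    """member_probs: (n_members, n_samples, n_classes) predicted probabilities."""
    votes = member_probs.argmax(-1)                       # (members, samples)
    modal = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return (votes != modal).mean()

def predictive_entropy(member_probs):
    mean_p = member_probs.mean(0)                         # ensemble average
    return -(mean_p * np.log(mean_p + 1e-12)).sum(-1).mean()

rng = np.random.default_rng(0)
in_dist = rng.dirichlet(alpha=[10, 1, 1], size=(5, 200))      # confident, agreeing
shifted = rng.dirichlet(alpha=[1, 1, 1], size=(5, 200))       # diffuse, disagreeing
for name, probs in [("in-distribution", in_dist), ("shifted", shifted)]:
    print(name, disagreement_rate(probs), predictive_entropy(probs))
# In the paper these statistics are compared against a null distribution
# (e.g. obtained on held-out in-distribution data) to test for harmful shift.
```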
Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation
https://openreview.net/forum?id=NPrsUQgMjKK
https://openreview.net/forum?id=NPrsUQgMjKK
Bobby He,James Martens,Guodong Zhang,Aleksandar Botev,Andrew Brock,Samuel L Smith,Yee Whye Teh
ICLR 2023,Poster
Skip connections and normalisation layers form two standard architectural components that are ubiquitous for the training of Deep Neural Networks (DNNs), but whose precise roles are poorly understood. Recent approaches such as Deep Kernel Shaping have made progress towards reducing our reliance on them, using insights from wide NN kernel theory to improve signal propagation in vanilla DNNs (which we define as networks without skips or normalisation). However, these approaches are incompatible with the self-attention layers present in transformers, whose kernels are intrinsically more complicated to analyse and control. And so the question remains: \emph{is it possible to train deep vanilla transformers?} We answer this question in the affirmative by designing several approaches that use combinations of parameter initialisations, bias matrices and location-dependent rescaling to achieve faithful signal propagation in vanilla transformers. Our methods address various intricacies specific to signal propagation in transformers, including the interaction with positional encoding and causal masking. In experiments on WikiText-103 and C4, our approaches enable deep transformers without normalisation to train at speeds matching their standard counterparts, and deep vanilla transformers to reach the same performance as standard ones after about 5 times more iterations.
https://openreview.net/pdf/d15d49c0b149d81687f6d614243e28d4ed39ccb5.pdf
Self-Supervised Geometric Correspondence for Category-Level 6D Object Pose Estimation in the Wild
https://openreview.net/forum?id=ZKDUlVMqG_O
https://openreview.net/forum?id=ZKDUlVMqG_O
Kaifeng Zhang,Yang Fu,Shubhankar Borse,Hong Cai,Fatih Porikli,Xiaolong Wang
ICLR 2023,Poster
While 6D object pose estimation has wide applications across computer vision and robotics, it remains far from being solved due to the lack of annotations. The problem becomes even more challenging when moving to category-level 6D pose, which requires generalization to unseen instances. Current approaches are restricted by leveraging annotations from simulation or collected from humans. In this paper, we overcome this barrier by introducing a self-supervised learning approach trained directly on large-scale real-world object videos for category-level 6D pose estimation in the wild. Our framework reconstructs the canonical 3D shape of an object category and learns dense correspondences between input images and the canonical shape via surface embedding. For training, we propose novel geometrical cycle-consistency losses which construct cycles across 2D-3D spaces, across different instances and different time steps. The learned correspondence can be applied for 6D pose estimation and other downstream tasks such as keypoint transfer. Surprisingly, our method, without any human annotations or simulators, can achieve on-par or even better performance than previous supervised or semi-supervised methods on in-the-wild images. Code and videos are available at https://kywind.github.io/self-pose.
https://openreview.net/pdf/c3801a183c4d295df0b3a3a30a643f802d45d6fb.pdf
Non-parametric Outlier Synthesis
https://openreview.net/forum?id=JHklpEZqduQ
https://openreview.net/forum?id=JHklpEZqduQ
Leitian Tao,Xuefeng Du,Jerry Zhu,Yixuan Li
ICLR 2023,Poster
Out-of-distribution (OOD) detection is indispensable for safely deploying machine learning models in the wild. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Recent work on outlier synthesis modeled the feature space as a parametric Gaussian distribution, a strong and restrictive assumption that might not hold in reality. In this paper, we propose a novel framework, non-parametric outlier synthesis (NPOS), which generates artificial OOD training data and facilitates learning a reliable decision boundary between ID and OOD data. Importantly, our proposed synthesis approach does not make any distributional assumption on the ID embeddings, thereby offering strong flexibility and generality. We show that our synthesis approach can be mathematically interpreted as a rejection sampling framework. Extensive experiments show that NPOS can achieve superior OOD detection performance, outperforming the competitive rivals by a significant margin. Code is publicly available at https://github.com/deeplearning-wisc/npos.
https://openreview.net/pdf/cad83237af6ddb0e9d1ec444c3e8c113cb2d4916.pdf
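A heavily simplified sketch of non-parametric outlier synthesis follows: select boundary in-distribution embeddings by k-NN distance, perturb them with Gaussian noise, and keep the candidates that land farthest from the ID data. The thresholds, noise scale, and selection rule are assumptions for illustration; NPOS's actual procedure and the downstream training objective are in the paper:

```python
# Simplified numpy sketch of non-parametric outlier synthesis: pick boundary
# in-distribution (ID) embeddings via their k-NN distance, add Gaussian noise,
# and keep (via rejection) the candidates that end up farthest from the ID
# data.  Exact thresholds and the downstream training loss are omitted; the
# paper's procedure may differ in details.
import numpy as np

rng = np.random.default_rng(0)
id_embed = rng.standard_normal((500, 2))                 # stand-in ID embeddings

def knn_dist(points, reference, k=10, skip_first=False):
    d = np.linalg.norm(points[:, None, :] - reference[None, :, :], axis=-1)
    d.sort(axis=1)
    start = 1 if skip_first else 0                       # optionally skip self-distance
    return d[:, start:start + k].mean(axis=1)

# 1. Boundary ID points: largest k-NN distance within the ID data.
boundary_idx = np.argsort(knn_dist(id_embed, id_embed, skip_first=True))[-50:]
boundary = id_embed[boundary_idx]

# 2. Candidate outliers: Gaussian perturbations of the boundary points.
candidates = boundary[rng.integers(0, len(boundary), 400)]
candidates = candidates + 0.5 * rng.standard_normal(candidates.shape)

# 3. Rejection step: keep only the candidates farthest from the ID data.
keep = np.argsort(knn_dist(candidates, id_embed))[-100:]
synthetic_outliers = candidates[keep]
print("synthesized", len(synthetic_outliers), "virtual outliers")
```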
Approximation and non-parametric estimation of functions over high-dimensional spheres via deep ReLU networks
https://openreview.net/forum?id=r90KYcuB7JS
https://openreview.net/forum?id=r90KYcuB7JS
Namjoon Suh,Tian-Yi Zhou,Xiaoming Huo
ICLR 2023,Poster
We develop a new approximation and estimation analysis of deep feed-forward neural networks (FNNs) with the Rectified Linear Unit (ReLU) activation. The functions of interest for approximation and estimation are assumed to lie in Sobolev spaces defined over the $d$-dimensional unit sphere with smoothness index $r>0$. In the regime where $r$ is of constant order (i.e., $r=\mathcal{O}(1)$), it is shown that at most $d^d$ active parameters are required to achieve a $d^{-C}$ approximation rate for some constant $C>0$. In contrast, in the regime where the index $r$ grows in the order of $d$ (i.e., $r=\mathcal{O}(d)$) asymptotically, we prove that the approximation error decays at the rate $d^{-d^{\beta}}$ with $0<\beta<1$, up to a constant factor independent of $d$. The required number of active parameters for the approximation increases polynomially in $d$ as $d\rightarrow{\infty}$. In addition, it is shown that the bound on the excess risk has a $d^d$ factor when $r=\mathcal{O}(1)$, whereas it has a $d^{\mathcal{O}(1)}$ factor when $r=\mathcal{O}(d)$. We emphasize our findings by comparing them to the results on approximation and estimation errors of deep ReLU FNNs when the functions come from Sobolev spaces defined over the $d$-dimensional cube. There, we show that with the current state-of-the-art results, the $d^{d}$ factor remains in both the approximation and estimation errors, regardless of the order of $r$.
https://openreview.net/pdf/5889ac7aacf5a7265bdf58bff9ce06565f27fe79.pdf
Learning Adversarial Linear Mixture Markov Decision Processes with Bandit Feedback and Unknown Transition
https://openreview.net/forum?id=sVU54nyaA9K
https://openreview.net/forum?id=sVU54nyaA9K
Canzhe Zhao,Ruofeng Yang,Baoxiang Wang,Shuai Li
ICLR 2023,Poster
We study reinforcement learning (RL) with linear function approximation, unknown transition, and adversarial losses in the bandit feedback setting. Specifically, the unknown transition probability function is a linear mixture model \citep{AyoubJSWY20,ZhouGS21,HeZG22} with a given feature mapping, and the learner only observes the losses of the experienced state-action pairs instead of the whole loss function. We propose an efficient algorithm LSUOB-REPS which achieves $\widetilde{O}(dS^2\sqrt{K}+\sqrt{HSAK})$ regret guarantee with high probability, where $d$ is the ambient dimension of the feature mapping, $S$ is the size of the state space, $A$ is the size of the action space, $H$ is the episode length and $K$ is the number of episodes. Furthermore, we also prove a lower bound of order $\Omega(dH\sqrt{K}+\sqrt{HSAK})$ for this setting. To the best of our knowledge, we make the first step to establish a provably efficient algorithm with a sublinear regret guarantee in this challenging setting and solve the open problem of \citet{HeZG22}.
https://openreview.net/pdf/33f9690f207885a8455b64fd5e907d05a6a5a778.pdf
Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection
https://openreview.net/forum?id=4yqxDCbzS98
https://openreview.net/forum?id=4yqxDCbzS98
Martijn Oldenhof,Adam Arany,Yves Moreau,Edward De Brouwer
ICLR 2023,Poster
Training object detection models usually requires instance-level annotations, such as the positions and labels of all objects present in each image. Such supervision is unfortunately not always available and, more often, only image-level information is provided, also known as weak supervision. Recent works have addressed this limitation by leveraging knowledge from a richly annotated domain. However, the scope of weak supervision supported by these approaches has been very restrictive, preventing them from using all available information. In this work, we propose ProbKT, a framework based on probabilistic logical reasoning to train object detection models with arbitrary types of weak supervision. We empirically show on different datasets that using all available information is beneficial, as our ProbKT leads to significant improvement on the target domain and better generalisation compared to existing baselines. We also showcase the ability of our approach to handle complex logic statements as supervision signal.
https://openreview.net/pdf/96554041d877331f4843ddfa203440a146079c75.pdf
A Neural Mean Embedding Approach for Back-door and Front-door Adjustment
https://openreview.net/forum?id=rLguqxYvYHB
https://openreview.net/forum?id=rLguqxYvYHB
Liyuan Xu,Arthur Gretton
ICLR 2023,Poster
We consider the estimation of average and counterfactual treatment effects, under two settings: back-door adjustment and front-door adjustment. The goal in both cases is to recover the treatment effect without having access to a hidden confounder. This objective is attained by first estimating the conditional mean of the desired outcome variable given relevant covariates (the ``first stage" regression), and then taking the (conditional) expectation of this function as a ``second stage" procedure. We propose to compute these conditional expectations directly using a regression function to the learned input features of the first stage, thus avoiding the need for sampling or density estimation. All functions and features (and in particular, the output features in the second stage) are neural networks learned adaptively from data, with the sole requirement that the final layer of the first stage should be linear. The proposed method is shown to converge to the true causal parameter, and outperforms the recent state-of-the-art methods on challenging causal benchmarks, including settings involving high-dimensional image data.
https://openreview.net/pdf/d11e23e4e0bf5919aa0d282ad55e5be78154c397.pdf
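As a much-simplified analogue of the two-stage procedure described above, the sketch below performs back-door adjustment with plain ridge regression: a first-stage regression of the outcome on treatment and covariates, followed by averaging the fitted function over the empirical covariate distribution. The paper instead learns neural features adaptively and also covers front-door adjustment:

```python
# Heavily simplified numpy analogue of back-door adjustment via two-stage
# regression: (1) regress the outcome on treatment and covariates; (2) average
# the fitted function over the empirical covariate distribution to get the
# average treatment effect.  The paper learns neural features adaptively; here
# both stages use plain ridge regression for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal(n)                 # observed confounder / covariate
A = (0.8 * X + rng.standard_normal(n) > 0).astype(float)   # treatment
Y = 2.0 * A + 1.5 * X + rng.standard_normal(n)             # true effect = 2.0

# Stage 1: regress Y on features of (A, X) with ridge regularization.
Phi = np.column_stack([np.ones(n), A, X, A * X])
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ Y)

# Stage 2: E_X[ E[Y | A=a, X] ] -- average the fitted function over all X.
def adjusted_mean(a):
    Phi_a = np.column_stack([np.ones(n), np.full(n, a), X, a * X])
    return (Phi_a @ w).mean()

ate = adjusted_mean(1.0) - adjusted_mean(0.0)
print("naive difference in means:", Y[A == 1].mean() - Y[A == 0].mean())
print("back-door adjusted ATE   :", ate)
```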
TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
https://openreview.net/forum?id=UVAmFAtC5ye
https://openreview.net/forum?id=UVAmFAtC5ye
Rongjie Huang,Jinglin Liu,Huadai Liu,Yi Ren,Lichao Zhang,Jinzheng He,Zhou Zhao
ICLR 2023,Poster
Direct speech-to-speech translation (S2ST) with discrete units leverages recent progress in speech representation learning. Specifically, a sequence of discrete representations derived in a self-supervised manner is predicted by the model and passed to a vocoder for speech reconstruction, but this approach still faces the following challenges: 1) acoustic multimodality: the discrete units derived from speech with the same content can be indeterministic due to acoustic properties (e.g., rhythm, pitch, and energy), which degrades translation accuracy; 2) high latency: current S2ST systems utilize autoregressive models which predict each unit conditioned on the previously generated sequence, failing to take full advantage of parallelism. In this work, we propose TranSpeech, a speech-to-speech translation model with bilateral perturbation. To alleviate the acoustic multimodality problem, we propose bilateral perturbation (BiP), which consists of style normalization and information enhancement stages, to learn only the linguistic information from speech samples and generate more deterministic representations. With reduced multimodality, we step forward and become the first to establish a non-autoregressive S2ST technique, which repeatedly masks and predicts unit choices and produces high-accuracy results in just a few cycles. Experimental results on three language pairs demonstrate that BiP yields an improvement of 2.9 BLEU on average compared with a baseline textless S2ST model. Moreover, our parallel decoding shows a significant reduction in inference latency, enabling speedups of up to 21.4x over the autoregressive technique. Audio samples are available at https://TranSpeech.github.io
https://openreview.net/pdf/616759f9860441b1cc2f980b7e2becb2afd49833.pdf
Over-parameterized Model Optimization with Polyak-{\L}ojasiewicz Condition
https://openreview.net/forum?id=aBIpZvMdS56
https://openreview.net/forum?id=aBIpZvMdS56
Yixuan Chen,Yubin Shi,Mingzhi Dong,Xiaochen Yang,Dongsheng Li,Yujiang Wang,Robert P. Dick,Qin Lv,Yingying Zhao,Fan Yang,Ning Gu,Li Shang
ICLR 2023,Poster
This work pursues the optimization of over-parameterized deep models for superior training efficiency and test performance. We first theoretically emphasize the importance of two properties of over-parameterized models, i.e., the convergence gap and the generalization gap. Subsequent analyses unveil that these two gaps can be upper-bounded by the ratio of the Lipschitz constant and the Polyak-{\L}ojasiewicz (PL) constant, a crucial term abbreviated as the \emph{condition number}. Such discoveries have led to a structured pruning method with a novel pruning criterion. That is, we devise a gating network that dynamically detects and masks out those poorly-behaved nodes of a deep model during the training session. To this end, this gating network is learned via minimizing the \emph{condition number} of the target model, and this process can be implemented as an extra regularization loss term. Experimental studies demonstrate that the proposed method outperforms the baselines in terms of both training efficiency and test performance, exhibiting the potential of generalizing to a variety of deep network architectures and tasks.
https://openreview.net/pdf/91751e40b5fb0dc4d26a1b1a7b6d6c148a803488.pdf
Jointly Learning Visual and Auditory Speech Representations from Raw Data
https://openreview.net/forum?id=BPwIgvf5iQ
https://openreview.net/forum?id=BPwIgvf5iQ
Alexandros Haliassos,Pingchuan Ma,Rodrigo Mira,Stavros Petridis,Maja Pantic
ICLR 2023,Poster
We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs, and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: Whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features. Code and models are available at https://github.com/ahaliassos/raven.
https://openreview.net/pdf/309f109b8dacce5715eeb3408e76860321dc637a.pdf
Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning
https://openreview.net/forum?id=H4Ncs5jhTCu
https://openreview.net/forum?id=H4Ncs5jhTCu
Daniel Palenicek,Michael Lutter,Joao Carvalho,Jan Peters
ICLR 2023,Poster
Model-based reinforcement learning is one approach to increase sample efficiency. However, the accuracy of the dynamics model and the resulting compounding error over modelled trajectories are commonly regarded as key limitations. A natural question to ask is: How much more sample efficiency can be gained by improving the learned dynamics models? Our paper empirically answers this question for the class of model-based value expansion methods in continuous control problems. Value expansion methods should benefit from increased model accuracy by enabling longer rollout horizons and better value function approximations. Our empirical study, which leverages oracle dynamics models to avoid compounding model errors, shows that (1) longer horizons increase sample efficiency, but the gain in improvement decreases with each additional expansion step, and (2) the increased model accuracy only marginally increases the sample efficiency compared to learned models with identical horizons. Therefore, longer horizons and increased model accuracy yield diminishing returns in terms of sample efficiency. These improvements in sample efficiency are particularly disappointing when compared to model-free value expansion methods. Even though they introduce no computational overhead, we find their performance to be on par with model-based value expansion methods. Therefore, we conclude that the limitation of model-based value expansion methods is not the model accuracy of the learned models. While higher model accuracy is beneficial, our experiments show that even a perfect model does not provide unrivaled sample efficiency; the bottleneck lies elsewhere.
https://openreview.net/pdf/8b53504b104f38c8d1f9cc40263dff77628eedee.pdf
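For readers unfamiliar with value expansion, the sketch below computes an h-step model-based value expansion target with toy stand-ins for the dynamics model, policy, and value function; it is meant only to show what "longer rollout horizons" refers to in the abstract above:

```python
# Illustrative numpy sketch of an h-step model-based value expansion target:
# roll the (learned or oracle) dynamics model forward for h steps under the
# current policy, sum discounted model rewards, and bootstrap with the value
# function at the final model state.  All components here are toy stand-ins.
import numpy as np

gamma = 0.99

def model_step(state, action):           # stand-in dynamics + reward model
    next_state = 0.95 * state + 0.1 * action
    reward = -(state ** 2)
    return next_state, reward

def policy(state):
    return -0.5 * state

def value_fn(state):                      # stand-in learned value function
    return -5.0 * state ** 2

def value_expansion_target(state, h):
    target, discount = 0.0, 1.0
    for _ in range(h):
        action = policy(state)
        state, reward = model_step(state, action)
        target += discount * reward
        discount *= gamma
    return target + discount * value_fn(state)   # bootstrap at the horizon

s0 = 1.0
for h in (0, 1, 3, 10):
    print(f"h={h:2d}  expansion target = {value_expansion_target(s0, h):.3f}")
# The paper's finding: increasing h (even with a perfect model) yields
# diminishing returns in sample efficiency.
```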
CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Alignment
https://openreview.net/forum?id=GNjzMAgawq
https://openreview.net/forum?id=GNjzMAgawq
Hongwei Xue,Yuchong Sun,Bei Liu,Jianlong Fu,Ruihua Song,Houqiang Li,Jiebo Luo
ICLR 2023,Poster
Pre-trained image-text models, like CLIP, have demonstrated the strong power of vision-language representation learned from a large scale of web-collected image-text data. In light of the well-learned visual features, there are works that transfer image representation to the video domain and achieve good results. However, adapting image-text pre-trained models to video-text pre-training (i.e., post-pretraining) has not demonstrated a significant advantage yet. In this paper, we tackle this challenge by raising and addressing two questions: 1) what are the factors hindering post-pretraining CLIP from improving performance on video-text tasks, and 2) how to mitigate the impact of these factors. Through a series of comparative experiments and analyses, we find that the data scale and the domain gap between language sources have large impacts. Based on these observations, we propose an Omnisource Cross-modal Learning method equipped with a Video Proxy mechanism on the basis of CLIP, namely CLIP-ViP. Extensive results show that our approach improves the performance of CLIP on video-text retrieval by a large margin. Our model achieves state-of-the-art results on a variety of datasets, including MSR-VTT, DiDeMo, LSMDC, and ActivityNet. We release our code and pre-trained CLIP-ViP models at \url{https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP}.
https://openreview.net/pdf/f8c079d34aee5b9409dbf8a160ba5d1d8b547b1f.pdf
Equivariant Energy-Guided SDE for Inverse Molecular Design
https://openreview.net/forum?id=r0otLtOwYW
https://openreview.net/forum?id=r0otLtOwYW
Fan Bao,Min Zhao,Zhongkai Hao,Peiyao Li,Chongxuan Li,Jun Zhu
ICLR 2023,Poster
Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties. In this paper, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. Empirically, under the guidance of designed energy functions, EEGSDE significantly improves the baseline on QM9, in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
https://openreview.net/pdf/7eccd68c8c19051056b77215ff617061615b0e5c.pdf
On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning
https://openreview.net/forum?id=KB1sc5pNKFv
https://openreview.net/forum?id=KB1sc5pNKFv
Yifan Xu,Nicklas Hansen,Zirui Wang,Yung-Chieh Chan,Hao Su,Zhuowen Tu
ICLR 2023,Poster
Reinforcement Learning (RL) algorithms can solve challenging control problems directly from image observations, but they often require millions of environment interactions to do so. Recently, model-based RL algorithms have greatly improved sample efficiency by concurrently learning an internal model of the world, and supplementing real environment interactions with imagined rollouts for policy improvement. However, learning an effective model of the world from scratch is challenging, in stark contrast to humans, who rely heavily on world understanding and visual cues when learning new skills. In this work, we investigate whether internal models learned by modern model-based RL algorithms can be leveraged to solve new, distinctly different tasks faster. We propose Model-Based Cross-Task Transfer (XTRA), a framework for sample-efficient online RL with scalable pretraining and finetuning of learned world models. By offline multi-task pretraining and online cross-task finetuning, we achieve substantial improvements over a baseline trained from scratch; we improve the mean performance of the model-based algorithm EfficientZero by 23%, and by as much as 71% in some instances. Project page: https://nicklashansen.github.io/xtra
https://openreview.net/pdf/989f4c93e3c9b7bc38369560e925cdfc8ce7b1ed.pdf
A Simple Yet Powerful Deep Active Learning With Snapshots Ensembles
https://openreview.net/forum?id=IVESH65r0Ar
https://openreview.net/forum?id=IVESH65r0Ar
Seohyeon Jung,Sanghyun Kim,Juho Lee
ICLR 2023,Poster
Given an unlabeled pool of data and the experts who can label them, active learning aims to build an agent that can effectively select data to query to the experts, maximizing the performance gain when training with the newly labeled data. While there are several principles for active learning, a prevailing approach is to estimate uncertainties of predictions for unlabeled samples and use them to define acquisition functions. Active learning with the uncertainty principle works well for deep learning, especially for large-scale image classification tasks with deep neural networks. Still, it is often overlooked how the uncertainty of predictions is estimated, despite the common findings on the difficulty of accurately estimating uncertainties of deep neural networks. In this paper, we highlight the effectiveness of snapshot ensembles for deep active learning. Compared to the previous approaches based on Monte-Carlo dropout or deep ensembles, we show that a simple acquisition strategy based on uncertainties estimated from parameter snapshots gathered from a single optimization path significantly improves the quality of the acquired samples. Based on this observation, we further propose an efficient active learning algorithm that maintains a single learning trajectory across all active learning episodes, unlike existing algorithms that train models from scratch for every active learning episode. Through extensive empirical comparisons, we demonstrate the effectiveness of snapshot ensembles for deep active learning.
https://openreview.net/pdf/4b4df3d988ecc07d73d78a6a4063cc0c3153a2aa.pdf
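Editor's note: a hedged sketch of the acquisition step described above, assuming a PyTorch classifier and a loader that yields unlabeled inputs; the exact acquisition function in the paper may differ (e.g., mutual information instead of predictive entropy).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def snapshot_uncertainty(model, snapshot_state_dicts, unlabeled_loader, device="cpu"):
    """Predictive-entropy acquisition scores from parameter snapshots.

    `snapshot_state_dicts` are checkpoints collected along a single optimization
    trajectory (e.g., at the end of each learning-rate cycle). Scores are the entropy
    of the snapshot-averaged predictive distribution; higher means more uncertain.
    """
    probs = []
    for state_dict in snapshot_state_dicts:
        model.load_state_dict(state_dict)
        model.eval()
        batch_probs = []
        for x in unlabeled_loader:          # loader assumed to yield input tensors only
            x = x.to(device)
            batch_probs.append(F.softmax(model(x), dim=-1))
        probs.append(torch.cat(batch_probs))
    mean_probs = torch.stack(probs).mean(dim=0)                 # snapshot-averaged predictions
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy                                              # acquire the top-k highest-entropy samples
```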
Decoupled Training for Long-Tailed Classification With Stochastic Representations
https://openreview.net/forum?id=bcYZwYo-0t
https://openreview.net/forum?id=bcYZwYo-0t
Giung Nam,Sunguk Jang,Juho Lee
ICLR 2023,Poster
Decoupling representation learning and classifier learning has been shown to be effective in classification with long-tailed data. There are two main ingredients in constructing a decoupled learning scheme: 1) how to train the feature extractor for representation learning so that it provides generalizable representations and 2) how to re-train the classifier that constructs proper decision boundaries by handling class imbalances in long-tailed data. In this work, we first apply Stochastic Weight Averaging (SWA), an optimization technique for improving the generalization of deep neural networks, to obtain better-generalizing feature extractors for long-tailed classification. We then propose a novel classifier re-training algorithm based on stochastic representations obtained from SWA-Gaussian, a Gaussian-perturbed variant of SWA, and a self-distillation strategy that can harness the diverse stochastic representations based on uncertainty estimates to build more robust classifiers. Extensive experiments on CIFAR10/100-LT, ImageNet-LT, and iNaturalist-2018 benchmarks show that our proposed method improves upon previous methods both in terms of prediction accuracy and uncertainty estimation.
https://openreview.net/pdf/18f57e331e1ebd747dd0054541bbe9d3c88ae48d.pdf
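Editor's note: a minimal sketch of the Stochastic Weight Averaging step underlying the representation-learning stage, assuming a PyTorch model; the paper's SWA-Gaussian additionally tracks second moments so that perturbed weights can be sampled, which is omitted here.

```python
import copy
import torch

def update_swa(swa_model, model, n_averaged):
    """Running average of model parameters (Stochastic Weight Averaging).

    Called periodically (e.g., once per epoch) after a warm-up phase; the averaged
    weights typically generalize better than the final SGD iterate.
    """
    with torch.no_grad():
        for p_swa, p in zip(swa_model.parameters(), model.parameters()):
            p_swa.mul_(n_averaged / (n_averaged + 1)).add_(p / (n_averaged + 1))
    return n_averaged + 1

# Usage sketch (train_one_epoch is a hypothetical helper):
# swa_model = copy.deepcopy(model); n = 0
# for epoch in range(start_swa_epoch, num_epochs):
#     train_one_epoch(model, loader)
#     n = update_swa(swa_model, model, n)
# Remember to re-estimate BatchNorm statistics for swa_model before evaluation.
```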
ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View Semantic Consistency
https://openreview.net/forum?id=2XLRBjY46O6
https://openreview.net/forum?id=2XLRBjY46O6
Pengzhen Ren,Changlin Li,Hang Xu,Yi Zhu,Guangrun Wang,Jianzhuang Liu,Xiaojun Chang,Xiaodan Liang
ICLR 2023,Poster
Recently, great success has been achieved in learning visual representations from text supervision, facilitating the emergence of text-supervised semantic segmentation. However, existing works focus on pixel grouping and cross-modal semantic alignment, while ignoring the correspondence among multiple augmented views of the same image. To overcome this limitation, we propose multi-View Consistent learning (ViewCo) for text-supervised semantic segmentation. Specifically, we first propose text-to-views consistency modeling to learn correspondence for multiple views of the same input image. Additionally, we propose cross-view segmentation consistency modeling to address the ambiguity issue of text supervision by contrasting the segment features of Siamese visual encoders. The text-to-views consistency benefits dense assignment of the visual features by encouraging different crops to align with the same text, while the cross-view segmentation consistency modeling provides additional self-supervision, overcoming the limitation of ambiguous text supervision for segmentation masks. Trained with large-scale image-text data, our model can directly segment objects of arbitrary categories in a zero-shot manner. Extensive experiments show that ViewCo outperforms state-of-the-art methods on average by up to 2.9%, 1.6%, and 2.4% mIoU on PASCAL VOC2012, PASCAL Context, and COCO, respectively.
https://openreview.net/pdf/1c4d99da565a5a77b48c93ace235f1f1c7922953.pdf
Benchmarking Constraint Inference in Inverse Reinforcement Learning
https://openreview.net/forum?id=vINj_Hv9szL
https://openreview.net/forum?id=vINj_Hv9szL
Guiliang Liu,Yudong Luo,Ashish Gaurav,Kasra Rezaee,Pascal Poupart
ICLR 2023,Poster
When deploying Reinforcement Learning (RL) agents into a physical system, we must ensure that these agents are well aware of the underlying constraints. In many real-world problems, however, the constraints are often hard to specify mathematically and unknown to the RL agents. To tackle these issues, Inverse Constrained Reinforcement Learning (ICRL) empirically estimates constraints from expert demonstrations. As an emerging research topic, ICRL does not have common benchmarks, and previous works tested algorithms in hand-crafted environments with manually-generated expert demonstrations. In this paper, we construct an ICRL benchmark in the context of RL application domains, including robot control and autonomous driving. For each environment, we design relevant constraints and train expert agents to generate demonstration data. In addition, unlike existing baselines that learn a deterministic constraint, we propose a variational ICRL method to model a posterior distribution of candidate constraints. We conduct extensive experiments on these algorithms under our benchmark and show how they can facilitate studying important research challenges for ICRL. The benchmark, including the instructions for reproducing ICRL algorithms, is available at https://github.com/Guiliang/ICRL-benchmarks-public.
https://openreview.net/pdf/293f3f980a964c27fc56091298401364387afced.pdf
Memory Gym: Partially Observable Challenges to Memory-Based Agents
https://openreview.net/forum?id=jHc8dCx6DDr
https://openreview.net/forum?id=jHc8dCx6DDr
Marco Pleines,Matthias Pallasch,Frank Zimmer,Mike Preuss
ICLR 2023,Poster
Memory Gym is a novel benchmark for challenging Deep Reinforcement Learning agents to memorize events across long sequences, be robust to noise, and generalize. It consists of the partially observable 2D and discrete control environments Mortar Mayhem, Mystery Path, and Searing Spotlights. These environments are believed to be unsolvable by memory-less agents because they feature strong dependencies on memory and frequent agent-memory interactions. Empirical results based on Proximal Policy Optimization (PPO) and Gated Recurrent Unit (GRU) underline the strong memory dependency of the contributed environments. The hardness of these environments can be smoothly scaled, while different levels of difficulty (some of them unsolved yet) emerge for Mortar Mayhem and Mystery Path. Surprisingly, Searing Spotlights poses a tremendous challenge to GRU-PPO, which remains an open puzzle. Even though the randomly moving spotlights reveal parts of the environment’s ground truth, environmental ablations hint that these pose a severe perturbation to agents that leverage recurrent model architectures as their memory. Source Code: https://github.com/MarcoMeter/drl-memory-gym/
https://openreview.net/pdf/311f37d9f91d2b654e7ef5b66aab43a60b5f0e8b.pdf
Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality
https://openreview.net/forum?id=kjkdzBW3b8p
https://openreview.net/forum?id=kjkdzBW3b8p
Tom Zahavy,Yannick Schroecker,Feryal Behbahani,Kate Baumli,Sebastian Flennerhag,Shaobo Hou,Satinder Singh
ICLR 2023,Poster
In this work, we propose a Reinforcement Learning (RL) agent that can discover complex behaviours in a rich environment with a simple reward function. We define diversity in terms of state-action occupancy measures, since policies with different occupancy measures visit different states on average. More importantly, defining diversity in this way allows us to derive an intrinsic reward function for maximizing the diversity directly. Our agent, DOMiNO, stands for Diversity Optimization Maintaining Near Optimality. It is based on maximizing a reward function with two components: the extrinsic reward and the diversity intrinsic reward, which are combined with Lagrange multipliers to balance the quality-diversity trade-off. Any RL algorithm can be used to maximize this reward and no other changes are needed. We demonstrate that, given simple reward functions in various control domains, such as height (stand) and forward velocity (walk), DOMiNO discovers diverse and meaningful behaviours. We also perform extensive analysis of our approach, compare it with other multi-objective baselines, demonstrate that we can control both the quality and the diversity of the set via interpretable hyperparameters, and show that the set is robust to perturbations of the environment.
https://openreview.net/pdf/6afcd9948e9a8f36dcac913dfe67d5311fc6c5df.pdf
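Editor's note: an illustrative sketch, not the authors' formulation, of how an extrinsic reward and a diversity intrinsic reward can be combined with a Lagrange multiplier that is adapted by dual ascent to keep each policy near-optimal; the weighting scheme and update rule here are assumptions.

```python
def combined_reward(extrinsic, diversity_intrinsic, lam):
    """Lagrangian-weighted sum balancing extrinsic quality and intrinsic diversity."""
    return lam * extrinsic + (1.0 - lam) * diversity_intrinsic  # illustrative weighting


def dual_ascent_step(lam, avg_extrinsic_return, near_optimal_return, lr=1e-3):
    """Increase the weight on the extrinsic reward when the policy's return falls below
    the near-optimality threshold, decrease it otherwise (clipped to [0, 1])."""
    violation = near_optimal_return - avg_extrinsic_return
    return min(1.0, max(0.0, lam + lr * violation))
```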
SpeedyZero: Mastering Atari with Limited Data and Time
https://openreview.net/forum?id=Mg5CLXZgvLJ
https://openreview.net/forum?id=Mg5CLXZgvLJ
Yixuan Mei,Jiaxuan Gao,Weirui Ye,Shaohuai Liu,Yang Gao,Yi Wu
ICLR 2023,Poster
Many recent breakthroughs in deep reinforcement learning (RL) are mainly built upon large-scale distributed training of model-free methods using millions to billions of samples. On the other hand, state-of-the-art model-based RL methods can achieve human-level sample efficiency but often take a much longer overall training time than model-free methods. However, high sample efficiency and fast training time are both important to many real-world applications. We develop SpeedyZero, a distributed RL system built upon a state-of-the-art model-based RL method, EfficientZero, with a dedicated system design for fast distributed computation. We also develop two novel algorithmic techniques, Priority Refresh and Clipped LARS, to stabilize training with massive parallelization and large batch size. SpeedyZero maintains on-par sample efficiency compared with EfficientZero while achieving a 14.5X speedup in wall-clock time, leading to human-level performance on the Atari benchmark within 35 minutes using only 300k samples. In addition, we present an in-depth analysis of the fundamental challenges in further scaling our system to bring insights to the community.
https://openreview.net/pdf/93caec5d1353b3d4a35a3efed19a2a836a1c6238.pdf
Neural Architecture Design and Robustness: A Dataset
https://openreview.net/forum?id=p8coElqiSDw
https://openreview.net/forum?id=p8coElqiSDw
Steffen Jung,Jovita Lukasik,Margret Keuper
ICLR 2023,Poster
Deep learning models have proven to be successful in a wide range of machine learning tasks. Yet, they are often highly sensitive to perturbations of the input data, which can lead to incorrect decisions with high confidence, hampering their deployment for practical use-cases. Thus, finding architectures that are (more) robust against perturbations has received much attention in recent years. Just like the search for well-performing architectures in terms of clean accuracy, this usually involves a tedious trial-and-error process with one additional challenge: the evaluation of a network's robustness is significantly more expensive than its evaluation for clean accuracy. Thus, the aim of this paper is to facilitate better streamlined research on architectural design choices with respect to their impact on robustness as well as, for example, the evaluation of surrogate measures for robustness. We therefore borrow one of the most commonly considered search spaces for neural architecture search for image classification, NAS-Bench-201, which contains a manageable set of 6466 non-isomorphic network designs. We evaluate all these networks on a range of common adversarial attacks and corruption types and introduce a database on neural architecture design and robustness evaluations. We further present three exemplary use cases of this dataset, in which we (i) benchmark robustness measurements based on Jacobian and Hessian matrices for their robustness predictability, (ii) perform neural architecture search on robust accuracies, and (iii) provide an initial analysis of how architectural design choices affect robustness. We find that carefully crafting the topology of a network can have a substantial impact on its robustness, where networks with the same parameter count range in mean adversarial robust accuracy from 20% to 41%. Code and data are available at http://robustness.vision/.
https://openreview.net/pdf/5bcb50e805c33efcdf74c6d29f9b1a989b7f43b8.pdf
Does Deep Learning Learn to Abstract? A Systematic Probing Framework
https://openreview.net/forum?id=QB1dMPEXau5
https://openreview.net/forum?id=QB1dMPEXau5
Shengnan An,Zeqi Lin,Bei Chen,Qiang Fu,Nanning Zheng,Jian-Guang Lou
ICLR 2023,Poster
Abstraction is a desirable capability for deep learning models, i.e., the ability to induce abstract concepts from concrete instances and to apply them flexibly beyond the learning context. At the same time, there is a lack of clear understanding about both the presence and further characteristics of this capability in deep learning models. In this paper, we introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective. A set of controlled experiments is conducted based on this framework, providing strong evidence that two probed pre-trained language models (PLMs), T5 and GPT2, have the abstraction capability. We also conduct an in-depth analysis that sheds further light on this capability: (1) the whole training phase exhibits a "memorize-then-abstract" two-stage process; (2) the learned abstract concepts are gathered in a few middle-layer attention heads, rather than being evenly distributed throughout the model; (3) the probed abstraction capabilities exhibit robustness against concept mutations, and are more robust to low-level/source-side mutations than high-level/target-side ones; (4) generic pre-training is critical to the emergence of abstraction capability, and PLMs exhibit better abstraction with larger model sizes and data scales.
https://openreview.net/pdf/b2f9d04c3fb3ccc28534c9012cac99faec1f9aaf.pdf
Improving Out-of-distribution Generalization with Indirection Representations
https://openreview.net/forum?id=0f-0I6RFAch
https://openreview.net/forum?id=0f-0I6RFAch
Kha Pham,Hung Le,Man Ngo,Truyen Tran
ICLR 2023,Poster
We propose a generic module named Indirection Layer (InLay), which leverages indirection and data internal relationships to effectively construct symbolic indirect representations to improve out-of-distribution generalization capabilities of various neural architectures. InLay receives data input in the form of a sequence of objects, treats it as a complete weighted graph whose vertices are the objects and edge weights are scalars representing relationships between vertices. The input is first mapped via indirection to a symbolic graph with data-independent and trainable vertices. This symbolic graph is then propagated, resulting in new vertex features whose indirection will be used for prediction steps afterward. Theoretically, we show that the distances between indirection representations are bounded by the distances between corresponding graphs, implying that unseen samples with very different surface statistics can still be close in the representation space to the seen samples if they share similar internal relationships. We demonstrate that InLay is consistently effective in improving out-of-distribution generalization throughout a comprehensive suite of experiments, including IQ problems, distorted image classification, and few-shot domain adaptation NLP classification. We also conduct ablation studies to verify different design choices of InLay.
https://openreview.net/pdf/cdaaa1cced73d8e888cf3d024b9b09980756d6b4.pdf
Accelerating Guided Diffusion Sampling with Splitting Numerical Methods
https://openreview.net/forum?id=F0KTk2plQzO
https://openreview.net/forum?id=F0KTk2plQzO
Suttisak Wizadwongsa,Supasorn Suwajanakorn
ICLR 2023,Poster
Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. However, one drawback of diffusion models, whether they are guided or unguided, is their slow sampling process. Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. In contrast, we discover that the same techniques do not work for guided sampling, and its acceleration has been little explored. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution.
https://openreview.net/pdf/b76de0f572f704575f927eccbbb6c0031f8b7427.pdf
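Editor's note: a hedged sketch of the general idea of operator splitting for guided sampling, assuming a caller-supplied high-order solver step and a differentiable guidance log-probability; the specific splitting scheme, step sizes, and time parameterization in the paper may differ.

```python
import torch

def split_guided_step(x, t, t_next, ode_solver_step, guidance_log_prob, guidance_scale=1.0):
    """One splitting step for guided diffusion sampling (illustrative Lie splitting).

    Sub-step A advances the *unconditional* diffusion ODE with any high-order solver.
    Sub-step B applies the guidance term as a separate gradient step on the condition's
    log-probability over the same time interval.
    """
    # A) unconditional solver step (e.g., a 2nd/4th-order method wrapped by the caller)
    x = ode_solver_step(x, t, t_next)

    # B) guidance sub-step: gradient ascent on log p(condition | x)
    x = x.detach().requires_grad_(True)
    log_p = guidance_log_prob(x, t_next)
    grad = torch.autograd.grad(log_p.sum(), x)[0]
    return (x + guidance_scale * (t - t_next) * grad).detach()   # t decreases toward 0
```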
Batch Multivalid Conformal Prediction
https://openreview.net/forum?id=Dk7QQp8jHEo
https://openreview.net/forum?id=Dk7QQp8jHEo
Christopher Jung,Georgy Noarov,Ramya Ramalingam,Aaron Roth
ICLR 2023,Poster
We develop fast distribution-free conformal prediction algorithms for obtaining multivalid coverage on exchangeable data in the batch setting. Multivalid coverage guarantees are stronger than marginal coverage guarantees in two ways: (1) They hold even conditional on group membership---that is, the target coverage level $1-\alpha$ holds conditionally on membership in each group of an arbitrary (potentially intersecting) finite collection $\mathcal{G}$ of regions in the feature space. (2) They hold even conditional on the value of the threshold used to produce the prediction set on a given example. In fact, multivalid coverage guarantees hold even when conditioning on group membership and threshold value simultaneously. We give two algorithms: both take as input an arbitrary non-conformity score and an arbitrary collection of possibly intersecting groups $\mathcal{G}$, and then can equip arbitrary black-box predictors with prediction sets. Our first algorithm is a direct extension of quantile regression, needs to solve only a single convex minimization problem, and produces an estimator which has group-conditional guarantees for each group in $\mathcal{G}$. Our second algorithm is iterative, and gives the full guarantees of multivalid conformal prediction: prediction sets that are valid conditionally both on group membership and non-conformity threshold. We evaluate the performance of both of our algorithms in an extensive set of experiments.
https://openreview.net/pdf/4685cb8ed3d112f53b3d4e5aedbb51b40df234a1.pdf
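Editor's note: a simple group-wise split-conformal baseline to make the notion of group-conditional thresholds concrete; it is not the paper's quantile-regression or iterative multivalid algorithm, and all function names are illustrative.

```python
import numpy as np

def group_conditional_thresholds(scores, group_masks, alpha=0.1):
    """Per-group conformal thresholds from calibration nonconformity scores.

    `group_masks` maps a group name to a boolean array over calibration points.
    This is a plain group-wise split-conformal baseline, not the paper's joint
    quantile-regression or iterative multivalid algorithm.
    """
    thresholds = {}
    for g, mask in group_masks.items():
        s = np.sort(scores[mask])
        k = int(np.ceil((1 - alpha) * (len(s) + 1))) - 1
        thresholds[g] = s[min(k, len(s) - 1)]
    return thresholds


def prediction_set(score_fn, x, labels, thresholds, groups_of):
    """Include a label whenever its score clears the thresholds of every group x belongs to."""
    tau = max(thresholds[g] for g in groups_of(x))   # assumes x belongs to at least one group
    return [y for y in labels if score_fn(x, y) <= tau]
```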
Accurate Bayesian Meta-Learning by Accurate Task Posterior Inference
https://openreview.net/forum?id=sb-IkS8DQw2
https://openreview.net/forum?id=sb-IkS8DQw2
Michael Volpp,Philipp Dahlinger,Philipp Becker,Christian Daniel,Gerhard Neumann
ICLR 2023,Poster
Bayesian meta-learning (BML) enables fitting expressive generative models to small datasets by incorporating inductive priors learned from a set of related tasks. The Neural Process (NP) is a prominent deep neural network-based BML architecture, which has shown remarkable results in recent years. In its standard formulation, the NP encodes epistemic uncertainty in an amortized, factorized Gaussian variational inference (VI) approximation to the BML task posterior (TP), using reparametrized gradients. Prior work studies a range of architectural modifications to boost performance, such as attentive computation paths or improved context aggregation schemes, while the influence of the VI scheme remains under-explored. We aim to bridge this gap by introducing GMM-NP, a novel BML model, which builds on recent work that enables highly accurate, full-covariance Gaussian mixture (GMM) TP approximations by combining VI with natural gradients and trust regions. We show that GMM-NP yields tighter evidence lower bounds, which increases the efficiency of marginal likelihood optimization, leading to improved epistemic uncertainty estimation and accuracy. GMM-NP does not require complex architectural modifications, resulting in a powerful, yet conceptually simple BML model, which outperforms the state of the art on a range of challenging experiments, highlighting its applicability to settings where data is scarce.
https://openreview.net/pdf/9d238d2b9fa5ae8f7a156a4a41514ec781bbbf91.pdf
Learning to Decompose Visual Features with Latent Textual Prompts
https://openreview.net/forum?id=wtcud6HroZr
https://openreview.net/forum?id=wtcud6HroZr
Feng Wang,Manling Li,Xudong Lin,Hairong Lv,Alex Schwing,Heng Ji
ICLR 2023,Poster
Recent advances in pre-training vision-language models like CLIP have shown great potential in learning transferable visual representations. Nonetheless, for downstream inference, CLIP-like models suffer from either 1) degraded accuracy and robustness in the case of inaccurate text descriptions during retrieval-based inference (the challenge for the zero-shot protocol); or 2) breaking the well-established vision-language alignment (the challenge for linear probing). To address these issues, we propose Decomposed Feature Prompting (DeFo). DeFo leverages a flexible number of learnable embeddings as textual input while maintaining the vision-language dual-model architecture, which enables the model to learn decomposed visual features with the help of feature-level textual prompts. We further use an additional linear layer to perform classification, allowing the number of language inputs to scale. Our empirical study shows DeFo's effectiveness in improving vision-language models. For example, DeFo obtains 73.2% test accuracy on ImageNet with a ResNet-50 backbone without tuning any pretrained weights of either the vision or the language encoder, outperforming zero-shot CLIP by a large margin of 15.0%, and outperforming the state-of-the-art vision-language prompt tuning method by 7.6%.
https://openreview.net/pdf/113fd3d7efcd01c4d918daa9d8c3b5bfc746b4da.pdf
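Editor's note: a hedged sketch of a DeFo-style head, assuming a frozen CLIP-like vision encoder; in the paper the query vectors are produced by feeding learnable prompt embeddings through the text encoder, which is simplified here to vectors learned directly in the shared embedding space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeFoHead(nn.Module):
    """Sketch of a decomposed-feature-prompting classifier (editor illustration, not the released code).

    A frozen vision encoder produces image features; a set of learnable text-side query
    vectors plays the role of feature-level textual prompts; a linear layer maps the
    image-query similarities to class logits.
    """

    def __init__(self, vision_encoder, embed_dim, num_queries, num_classes):
        super().__init__()
        self.vision_encoder = vision_encoder
        for p in self.vision_encoder.parameters():
            p.requires_grad_(False)                               # pretrained encoder stays frozen
        self.queries = nn.Parameter(0.02 * torch.randn(num_queries, embed_dim))
        self.classifier = nn.Linear(num_queries, num_classes)

    def forward(self, images):
        img = F.normalize(self.vision_encoder(images), dim=-1)    # (B, d)
        txt = F.normalize(self.queries, dim=-1)                   # (Q, d)
        return self.classifier(img @ txt.t())                     # (B, num_classes)
```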
Context-enriched molecule representations improve few-shot drug discovery
https://openreview.net/forum?id=XrMWUuEevr
https://openreview.net/forum?id=XrMWUuEevr
Johannes Schimunek,Philipp Seidl,Lukas Friedrich,Daniel Kuhn,Friedrich Rippmann,Sepp Hochreiter,Günter Klambauer
ICLR 2023,Poster
A central task in computational drug discovery is to construct models from known active molecules to find further promising molecules for subsequent screening. However, typically only very few active molecules are known. Therefore, few-shot learning methods have the potential to improve the effectiveness of this critical phase of the drug discovery process. We introduce a new method for few-shot drug discovery. Its main idea is to enrich a molecule representation with knowledge about known context or reference molecules. Our novel concept for molecule representation enrichment is to associate molecules from both the support set and the query set with a large set of reference (context) molecules through a modern Hopfield network. Intuitively, this enrichment step is analogous to a human expert who would associate a given molecule with familiar molecules whose properties are known. The enrichment step reinforces and amplifies the covariance structure of the data, while simultaneously removing spurious correlations arising from the decoration of molecules. Our approach is compared with other few-shot methods for drug discovery on the FS-Mol benchmark dataset. On FS-Mol, our approach outperforms all compared methods and therefore sets a new state of the art for few-shot learning in drug discovery. An ablation study shows that the enrichment step of our method is the key to improving the predictive quality. In a domain shift experiment, we further demonstrate the robustness of our method.
https://openreview.net/pdf/9ce1c16c5ecb90ff5b8ff8cf01d769360755f9e5.pdf
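Editor's note: a minimal sketch of the retrieval step behind the enrichment idea, using a single modern-Hopfield/softmax-attention update over a context set; how the retrieved representation is fused with the original one is left to the paper, and the inverse temperature `beta` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def hopfield_enrich(query_reprs, context_reprs, beta=1.0):
    """Associate molecule representations with a large context set via one
    modern-Hopfield-style retrieval (a single softmax attention update).

    query_reprs:   (N, d) embeddings of support/query molecules.
    context_reprs: (M, d) embeddings of the reference (context) molecules.
    Returns retrieved representations that can be combined (e.g., concatenated or
    averaged) with the originals; the exact combination in the paper may differ.
    """
    attn = F.softmax(beta * query_reprs @ context_reprs.t(), dim=-1)   # (N, M)
    return attn @ context_reprs                                        # (N, d)
```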
Test-Time Adaptation via Self-Training with Nearest Neighbor Information
https://openreview.net/forum?id=EzLtB4M1SbM
https://openreview.net/forum?id=EzLtB4M1SbM
Minguk Jang,Sae-Young Chung,Hye Won Chung
ICLR 2023,Poster
Test-time adaptation (TTA) aims to adapt a trained classifier using online unlabeled test data only, without any information related to the training procedure. Most existing TTA methods adapt the trained classifier using the classifier's predictions on the test data as pseudo-labels. However, under test-time domain shift, the accuracy of the pseudo-labels cannot be guaranteed, and thus TTA methods often suffer performance degradation in the adapted classifier. To overcome this limitation, we propose a novel test-time adaptation method, called Test-time Adaptation via Self-Training with nearest neighbor information (TAST), which is composed of the following procedures: (1) adds trainable adaptation modules on top of the trained feature extractor; (2) newly defines a pseudo-label distribution for the test data by using the nearest neighbor information; (3) trains these modules only a few times during test time to match the nearest neighbor-based pseudo-label distribution and a prototype-based class distribution for the test data; and (4) predicts the label of test data using the average predicted class distribution from these modules. The pseudo-label generation is based on the basic intuition that a test sample and its nearest neighbor in the embedding space are likely to share the same label under the domain shift. By utilizing multiple randomly initialized adaptation modules, TAST extracts useful information for the classification of the test data under the domain shift, using the nearest neighbor information. TAST shows better performance than state-of-the-art TTA methods on two standard benchmark tasks: domain generalization (VLCS, PACS, OfficeHome, and TerraIncognita) and image corruption (CIFAR-10/100-C).
https://openreview.net/pdf/0e2ac6f6559d51722e3cd94bb05d39893d9ccd39.pdf
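Editor's note: a hedged sketch of the nearest-neighbor pseudo-label distribution described in step (2), assuming a memory bank of previously seen test embeddings and their soft predictions; the values of k, the temperature, and the bank construction are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def nn_pseudo_label(test_embed, bank_embeds, bank_probs, k=5, temperature=0.1):
    """Nearest-neighbor pseudo-label distribution for a batch of test embeddings.

    `bank_embeds` (M, d) and `bank_probs` (M, C) hold embeddings and soft predictions of
    previously seen test samples. Each test point's pseudo-label is a similarity-weighted
    average of its k nearest neighbors' predictions, following the intuition that nearby
    points in the embedding space share a label.
    """
    test_embed = F.normalize(test_embed, dim=-1)
    bank_embeds = F.normalize(bank_embeds, dim=-1)
    sims = test_embed @ bank_embeds.t()                        # cosine similarities (B, M)
    topk_sims, topk_idx = sims.topk(k, dim=-1)
    weights = F.softmax(topk_sims / temperature, dim=-1)       # (B, k)
    return torch.einsum("bk,bkc->bc", weights, bank_probs[topk_idx])
```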
Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats
https://openreview.net/forum?id=yTbNYYcopd
https://openreview.net/forum?id=yTbNYYcopd
Brian Chmiel,Ron Banner,Elad Hoffer,Hilla Ben-Yaacov,Daniel Soudry
ICLR 2023,Poster
Quantization of the weights and activations is one of the main methods to reduce the computational footprint of Deep Neural Network (DNN) training. Current methods enable 4-bit quantization of the forward phase. However, this constitutes only a third of the training process. Reducing the computational footprint of the entire training process requires the quantization of the neural gradients, i.e., the loss gradients with respect to the outputs of intermediate neural layers. Previous works separately showed that accurate 4-bit quantization of the neural gradients needs to (1) be unbiased and (2) have a log scale. However, no previous work aimed to combine both ideas, as we do in this work. Specifically, we examine the importance of having unbiased quantization in quantized neural network training, where to maintain it, and how to combine it with logarithmic quantization. Based on this, we suggest a $\textit{logarithmic unbiased quantization}$ (LUQ) method to quantize both the forward and backward phases to 4 bits, achieving state-of-the-art results in 4-bit training without overhead. For example, in ResNet50 on ImageNet, we achieve a degradation of only 1.1%. We further improve this to a degradation of only 0.32% after three epochs of high-precision fine-tuning, combined with a variance reduction method---where both these methods add overhead comparable to previously suggested methods. A reference implementation is supplied in the supplementary material.
https://openreview.net/pdf/e5f2b958cc1c812deb41ca2474b7f0592c3f7603.pdf
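Editor's note: an illustrative sketch of unbiased logarithmic (power-of-two) quantization via stochastic rounding, the core idea that LUQ combines; the actual method additionally handles underflow with stochastic pruning and fits the format into 4 bits, which is not reproduced here.

```python
import torch

def log_stochastic_quantize(x):
    """Unbiased log-scale quantization of a tensor (sketch, not the full LUQ method).

    Magnitudes are snapped to powers of two; rounding between the two neighbouring
    exponents is stochastic, with probabilities chosen so that E[q] = x (unbiased).
    """
    sign = torch.sign(x)
    mag = x.abs().clamp_min(1e-30)
    exp = torch.floor(torch.log2(mag))
    low, high = 2.0 ** exp, 2.0 ** (exp + 1)
    p_up = (mag - low) / (high - low)          # linear interpolation gives unbiasedness
    up = torch.bernoulli(p_up)
    q = torch.where(up.bool(), high, low)
    return sign * q
```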
Unsupervised Manifold Alignment with Joint Multidimensional Scaling
https://openreview.net/forum?id=lUpjsrKItz4
https://openreview.net/forum?id=lUpjsrKItz4
Dexiong Chen,Bowen Fan,Carlos Oliver,Karsten Borgwardt
ICLR 2023,Poster
We introduce Joint Multidimensional Scaling, a novel approach for unsupervised manifold alignment, which maps datasets from two different domains, without any known correspondences between data instances across the datasets, to a common low-dimensional Euclidean space. Our approach integrates Multidimensional Scaling (MDS) and Wasserstein Procrustes analysis into a joint optimization problem to simultaneously generate isometric embeddings of data and learn correspondences between instances from two different datasets, while only requiring intra-dataset pairwise dissimilarities as input. This unique characteristic makes our approach applicable to datasets without access to the input features, such as solving the inexact graph matching problem. We propose an alternating optimization scheme to solve the problem that can fully benefit from the optimization techniques for MDS and Wasserstein Procrustes. We demonstrate the effectiveness of our approach in several applications, including joint visualization of two datasets, unsupervised heterogeneous domain adaptation, graph matching, and protein structure alignment. The implementation of our work is available at https://github.com/BorgwardtLab/JointMDS.
https://openreview.net/pdf/f1e8bf06d06e0c512322b10870de591375cd9834.pdf
Simple and Scalable Nearest Neighbor Machine Translation
https://openreview.net/forum?id=uu1GBD9SlLe
https://openreview.net/forum?id=uu1GBD9SlLe
Yuhan Dai,Zhirui Zhang,Qiuzhi Liu,Qu Cui,Weihua Li,Yichao Du,Tong Xu
ICLR 2023,Poster
$k$NN-MT is a straightforward yet powerful approach for fast domain adaptation, which directly equips pre-trained neural machine translation (NMT) models with domain-specific token-level $k$-nearest-neighbor ($k$NN) retrieval to achieve domain adaptation without retraining. Despite being conceptually attractive, $k$NN-MT is burdened with massive storage requirements and high computational complexity since it conducts nearest neighbor searches over the entire reference corpus. In this paper, we propose a simple and scalable nearest neighbor machine translation framework to drastically improve the decoding and storage efficiency of $k$NN-based models while maintaining the translation performance. To this end, we dynamically construct an extremely small datastore for each input via sentence-level retrieval to avoid searching the entire datastore in vanilla $k$NN-MT, based on which we further introduce a distance-aware adapter to adaptively incorporate the $k$NN retrieval results into the pre-trained NMT models. Experiments on machine translation in two general settings, static domain adaptation and online learning, demonstrate that our proposed approach not only runs at almost 90% of the speed of the NMT model without performance degradation, but also significantly reduces the storage requirements of $k$NN-MT.
https://openreview.net/pdf/c0c7e50317d3aceab2f2ae83475a0c072040a91d.pdf
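Editor's note: a minimal sketch of the token-level kNN-MT interpolation that the proposed framework accelerates; the per-sentence datastore construction and the distance-aware adapter that predicts the interpolation weight are simplified here to a fixed datastore and a fixed `lam`, and all parameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def knn_mt_interpolate(nmt_log_probs, context_vec, datastore_keys, datastore_values,
                       vocab_size, k=8, temperature=10.0, lam=0.5):
    """Token-level kNN-MT: mix the NMT distribution with a distribution built from the
    k nearest datastore entries (keys = decoder hidden states, values = target token ids).

    nmt_log_probs:    (V,)   log-probabilities from the NMT model for the next token.
    context_vec:      (d,)   current decoder hidden state.
    datastore_keys:   (N, d) stored hidden states; datastore_values: (N,) token ids (long).
    """
    dists = torch.cdist(context_vec.unsqueeze(0), datastore_keys).squeeze(0)  # (N,)
    knn_dists, knn_idx = dists.topk(k, largest=False)
    weights = F.softmax(-knn_dists / temperature, dim=-1)                     # closer => heavier
    knn_probs = torch.zeros(vocab_size)
    knn_probs.scatter_add_(0, datastore_values[knn_idx], weights)
    return lam * knn_probs + (1.0 - lam) * nmt_log_probs.exp()
```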
On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning.
https://openreview.net/forum?id=v8JIQdiN9Sh
https://openreview.net/forum?id=v8JIQdiN9Sh
Jianhong Bai,Zuozhu Liu,Hualiang Wang,Jin Hao,YANG FENG,Huanpeng Chu,Haoji Hu
ICLR 2023,Poster
Though self-supervised learning (SSL) has been widely studied as a promising technique for representation learning, it does not generalize well on long-tailed datasets because the majority classes dominate the feature space. Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect. In this paper, we propose an alternative but easy-to-use and effective solution, \textbf{C}ontrastive with \textbf{O}ut-of-distribution (OOD) data for \textbf{L}ong-\textbf{T}ail learning (COLT), which can effectively exploit OOD data to dynamically re-balance the feature space. We empirically identify the counter-intuitive usefulness of OOD samples in SSL long-tailed learning and design a novel SSL method in a principled way. Concretely, we first localize the `\emph{head}' and `\emph{tail}' samples by assigning a tailness score to each OOD sample based on its neighborhoods in the feature space. Then, we propose an online OOD sampling strategy to dynamically re-balance the feature space. Finally, we train the model to distinguish ID and OOD samples via a distribution-level supervised contrastive loss. Extensive experiments are conducted on various datasets and several state-of-the-art SSL frameworks to verify the effectiveness of the proposed method. The results show that our method significantly improves the performance of SSL on long-tailed datasets by a large margin, and even outperforms previous work which uses external ID data. Our code is available at \url{https://github.com/JianhongBai/COLT}.
https://openreview.net/pdf/fe3d1728386c7084b9aaf9b51488e79d39cc9c4f.pdf
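Editor's note: one simple way to instantiate the "tailness" scoring of OOD samples from their feature-space neighborhoods; the scoring rule, the value of k, and the online re-balancing strategy in the paper may differ from this sketch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def tailness_scores(ood_feats, id_feats, k=10):
    """Assign each OOD sample a tailness score from its neighborhood in feature space.

    Illustrative instantiation: the score is the negative mean cosine similarity to the
    k nearest in-distribution features, so OOD samples landing in sparse regions of the
    ID feature space, where tail classes tend to live, score higher.
    """
    ood = F.normalize(ood_feats, dim=-1)
    idf = F.normalize(id_feats, dim=-1)
    sims = ood @ idf.t()                         # (N_ood, N_id)
    topk = sims.topk(k, dim=-1).values           # similarity to k nearest ID samples
    return -topk.mean(dim=-1)                    # sparser neighborhood => higher tailness
```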
Dynamic Update-to-Data Ratio: Minimizing World Model Overfitting
https://openreview.net/forum?id=ZIkHSXzd9O7
https://openreview.net/forum?id=ZIkHSXzd9O7
Nicolai Dorka,Tim Welschehold,Wolfram Burgard
ICLR 2023,Poster
Early stopping based on the validation set performance is a popular approach to find the right balance between under- and overfitting in the context of supervised learning. However, in reinforcement learning, even for supervised sub-problems such as world model learning, early stopping is not applicable as the dataset is continually evolving. As a solution, we propose a new general method that dynamically adjusts the update-to-data (UTD) ratio during training based on under- and overfitting detection on a small subset of the continuously collected experience not used for training. We apply our method to DreamerV2, a state-of-the-art model-based reinforcement learning algorithm, and evaluate it on the DeepMind Control Suite and the Atari 100k benchmark. The results demonstrate that one can better balance under- and overfitting by adjusting the UTD ratio with our approach compared to the default setting in DreamerV2, and that it is competitive with an extensive hyperparameter search, which is not feasible for many applications. Our method eliminates the need to set the UTD hyperparameter by hand and even leads to higher robustness with regard to other learning-related hyperparameters, further reducing the amount of necessary tuning.
https://openreview.net/pdf/148ac4c92f161ae4747fc29e9fbe9b7170b7b947.pdf
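Editor's note: a hedged sketch of adjusting the update-to-data ratio from an under/overfitting signal measured on a held-out slice of the collected experience; the thresholds, step sizes, and bounds below are illustrative assumptions, not the paper's detection rule or settings.

```python
def adjust_utd_ratio(utd_ratio, train_loss, holdout_loss, tolerance=0.1,
                     step=0.05, min_utd=0.25, max_utd=8.0):
    """Adjust the update-to-data (UTD) ratio from an under/overfitting signal.

    `holdout_loss` is the world-model loss on a small subset of collected experience that
    is never used for training. If it exceeds the training loss by more than `tolerance`
    (overfitting), decrease the UTD ratio; otherwise (underfitting headroom), increase it.
    """
    gap = holdout_loss - train_loss
    if gap > tolerance:
        return max(min_utd, utd_ratio - step)
    return min(max_utd, utd_ratio + step)
```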
Uni-Mol: A Universal 3D Molecular Representation Learning Framework
https://openreview.net/forum?id=6K2RM6wVqKu
https://openreview.net/forum?id=6K2RM6wVqKu
Gengmo Zhou,Zhifeng Gao,Qiankun Ding,Hang Zheng,Hongteng Xu,Zhewei Wei,Linfeng Zhang,Guolin Ke
ICLR 2023,Poster
Molecular representation learning (MRL) has gained tremendous attention due to its critical role in learning from limited supervised data for applications like drug design. In most MRL methods, molecules are treated as 1D sequential tokens or 2D topology graphs, limiting their ability to incorporate 3D information for downstream tasks and, in particular, making 3D geometry prediction/generation almost impossible. In this paper, we propose a universal 3D MRL framework, called Uni-Mol, that significantly enlarges the representation ability and application scope of MRL schemes. Uni-Mol contains two pretrained models with the same SE(3) Transformer architecture: a molecular model pretrained on 209M molecular conformations; a pocket model pretrained on 3M candidate protein pockets. In addition, Uni-Mol contains several finetuning strategies to apply the pretrained models to various downstream tasks. By properly incorporating 3D information, Uni-Mol outperforms SOTA in 14/15 molecular property prediction tasks. Moreover, Uni-Mol achieves superior performance in 3D spatial tasks, including protein-ligand binding pose prediction, molecular conformation generation, etc. The code, model, and data are made publicly available at https://github.com/dptech-corp/Uni-Mol.
https://openreview.net/pdf/780538c1af2025ccd4b712b4da07ff67b7bcb2fc.pdf
Learning with Auxiliary Activation for Memory-Efficient Training
https://openreview.net/forum?id=YgC62m4CY3r
https://openreview.net/forum?id=YgC62m4CY3r
Sunghyeon Woo,Dongsuk Jeon
ICLR 2023,Poster
While deep learning has achieved great success in various fields, a large amount of memory is necessary to train deep neural networks, which hinders the development of massive state-of-the-art models. The reason is that the conventional learning rule, backpropagation, must temporarily store the input activations of all the layers in the network. To overcome this, recent studies have suggested various memory-efficient implementations of backpropagation. However, those approaches incur computational overhead due to the recomputation of activations, slowing down neural network training. In this work, we propose a new learning rule which significantly reduces memory requirements while closely matching the performance of backpropagation. The algorithm combines auxiliary activation with output activation during forward propagation, while only auxiliary activation is used during backward propagation instead of the actual input activation to reduce the amount of data to be temporarily stored. We mathematically show that our learning rule can reliably train networks whose loss landscape is convex if the auxiliary activation satisfies certain conditions. Based on this observation, we suggest candidate auxiliary activations that satisfy those conditions. Experimental results confirm that the proposed learning rule achieves competitive performance compared to backpropagation in various models such as ResNet, Transformer, BERT, ViT, and MLP-Mixer.
https://openreview.net/pdf/1c6f3fd1ba354ed7bd43032e4a12fb61ed2598b0.pdf
Massively Scaling Heteroscedastic Classifiers
https://openreview.net/forum?id=sIoED-yPK9l
https://openreview.net/forum?id=sIoED-yPK9l
Mark Collier,Rodolphe Jenatton,Basil Mustafa,Neil Houlsby,Jesse Berent,Effrosyni Kokiopoulou
ICLR 2023,Poster
Heteroscedastic classifiers, which learn a multivariate Gaussian distribution over prediction logits, have been shown to perform well on image classification problems with hundreds to thousands of classes. However, compared to standard classifiers, they introduce extra parameters that scale linearly with the number of classes. This makes them infeasible to apply to larger-scale problems. In addition, heteroscedastic classifiers introduce a critical temperature hyperparameter that must be tuned. We propose HET-XL, a heteroscedastic classifier whose parameter count, when compared to a standard classifier, scales independently of the number of classes. In our large-scale settings, we show that we can remove the need to tune the temperature hyperparameter by directly learning it on the training data. On large image classification datasets with up to 4B images and 30k classes, our method requires 14X fewer additional parameters, does not require tuning the temperature on a held-out set, and performs consistently better than the baseline heteroscedastic classifier. HET-XL improves ImageNet 0-shot classification in a multimodal contrastive learning setup, which can be viewed as a 3.5 billion class classification problem.
https://openreview.net/pdf/7be14f2821d211766625240b0311bd47ece2f5d8.pdf
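Editor's note: a sketch of a standard heteroscedastic classification head (diagonal-plus-low-rank Gaussian over logits, Monte-Carlo averaged through a temperature-scaled softmax). It illustrates the baseline that the paper improves on; HET-XL's class-count-independent parameterization and learned temperature are not reproduced here, and the rank, sample count, and layer names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroscedasticHead(nn.Module):
    """Baseline heteroscedastic head: Gaussian noise on the logits with a
    diagonal-plus-low-rank covariance, MC-averaged through a tempered softmax."""

    def __init__(self, embed_dim, num_classes, rank=6, num_samples=16, temperature=1.0):
        super().__init__()
        self.mean = nn.Linear(embed_dim, num_classes)
        self.diag = nn.Linear(embed_dim, num_classes)            # per-class scale
        self.low_rank = nn.Linear(embed_dim, num_classes * rank)
        self.rank, self.num_samples, self.temperature = rank, num_samples, temperature

    def forward(self, h):
        b = h.shape[0]
        mu = self.mean(h)                                        # (B, C)
        std = F.softplus(self.diag(h))                           # (B, C)
        V = self.low_rank(h).view(b, -1, self.rank)              # (B, C, r)
        eps_d = torch.randn(self.num_samples, b, mu.shape[1], device=h.device)
        eps_r = torch.randn(self.num_samples, b, self.rank, device=h.device)
        noise = eps_d * std + torch.einsum("sbr,bcr->sbc", eps_r, V)
        probs = F.softmax((mu + noise) / self.temperature, dim=-1).mean(dim=0)
        return probs                                             # MC-averaged predictive distribution
```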
KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Low-Resource NLP
https://openreview.net/forum?id=2nocgE1m0A
https://openreview.net/forum?id=2nocgE1m0A
Yufei Wang,Jiayi Zheng,Can Xu,Xiubo Geng,Tao Shen,Chongyang Tao,Daxin Jiang
ICLR 2023,Poster
This paper focuses on data augmentation for low-resource NLP tasks where the training set is limited. The existing solutions either leverage task-independent heuristic rules (e.g., Synonym Replacement) or fine-tune general-purpose pre-trained language models (e.g., GPT2) using the limited training instances to produce new synthetic data. Consequently, they carry little task-specific knowledge and tend to yield low-quality synthetic data. To combat this issue, we propose the Knowledge Mixture Data Augmentation Model (KnowDA), a Seq2Seq language model pretrained on a mixture of diverse NLP tasks under a novel framework of Knowledge Mixture Training (KoMT). The goal of KoMT is to condense diverse NLP task-specific knowledge into the single KnowDA model (i.e., all-in-one). The resulting KnowDA can utilize this knowledge to quickly grasp the inherent synthesis patterns of the target task from limited training instances. Specifically, KoMT reformulates input examples from various heterogeneous NLP tasks into a unified text-to-text format and employs denoising training objectives at different granularities to learn to reconstruct partial or complete samples. To the best of our knowledge, we are the first to attempt to apply 100+ NLP multi-task training for data augmentation. Extensive experiments show that i) the synthetic data produced by KnowDA successfully improves the performance of strong pre-trained language models (i.e., BERT, ALBERT, and DeBERTa) by a large margin on the low-resource NLP benchmarks FewGLUE, CoNLL’03, and WikiAnn; ii) KnowDA successfully transfers task knowledge to NLP tasks whose types are both seen and unseen in KoMT.
https://openreview.net/pdf/92d40698b0d55d0fbb40c50c4921bf1bb8cd40aa.pdf