Columns: abs (URL) · Download PDF (URL) · OpenReview (URL) · title · url · authors · detail_url · tags · abstract
https://proceedings.mlr.press/v202/xu23f.html
https://proceedings.mlr.press/v202/xu23f/xu23f.pdf
https://openreview.net/forum?id=42BJcoGuMP
Hierarchical Neural Coding for Controllable CAD Model Generation
https://proceedings.mlr.press/v202/xu23f.html
Xiang Xu, Pradeep Kumar Jayaraman, Joseph George Lambourne, Karl D.D. Willis, Yasutaka Furukawa
https://proceedings.mlr.press/v202/xu23f.html
ICML 2023
This paper presents a novel generative model for Computer Aided Design (CAD) that 1) represents high-level design concepts of a CAD model as a three-level hierarchical tree of neural codes, from global part arrangement down to local curve geometry; and 2) controls the generation or completion of CAD models by specifying the target design using a code tree. Concretely, a novel variant of a vector quantized VAE with "masked skip connection" extracts design variations as neural codebooks at three levels. Two-stage cascaded auto-regressive transformers learn to generate code trees from incomplete CAD models and then complete CAD models following the intended design. Extensive experiments demonstrate superior performance on conventional tasks such as unconditional generation while enabling novel interaction capabilities on conditional generation tasks. The code is available at https://github.com/samxuxiang/hnc-cad.
https://proceedings.mlr.press/v202/xu23g.html
https://proceedings.mlr.press/v202/xu23g/xu23g.pdf
https://openreview.net/forum?id=WGh9xjJ0z8
Efficient Sequence Transduction by Jointly Predicting Tokens and Durations
https://proceedings.mlr.press/v202/xu23g.html
Hainan Xu, Fei Jia, Somshubra Majumdar, He Huang, Shinji Watanabe, Boris Ginsburg
https://proceedings.mlr.press/v202/xu23g.html
ICML 2023
This paper introduces a novel Token-and-Duration Transducer (TDT) architecture for sequence-to-sequence tasks. TDT extends conventional RNN-Transducer architectures by jointly predicting both a token and its duration, i.e. the number of input frames covered by the emitted token. This is achieved by using a joint network with two outputs which are independently normalized to generate distributions over tokens and durations. During inference, TDT models can skip input frames guided by the predicted duration output, which makes them significantly faster than conventional Transducers, which process the encoder output frame by frame. TDT models achieve both better accuracy and significantly faster inference than conventional Transducers on different sequence transduction tasks. TDT models for Speech Recognition achieve better accuracy and up to 2.82X faster inference than conventional Transducers. TDT models for Speech Translation achieve an absolute gain of over 1 BLEU on the MUST-C test set compared with conventional Transducers, with 2.27X faster inference. In Speech Intent Classification and Slot Filling tasks, TDT models improve the intent accuracy by up to over 1% (absolute) over conventional Transducers, while running up to 1.28X faster. Our implementation of the TDT model will be open-sourced with the NeMo (https://github.com/NVIDIA/NeMo) toolkit.
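As an illustration of the duration-guided frame skipping described above, here is a minimal greedy-decoding sketch in Python/NumPy; the `joint` and `predictor_step` interfaces are hypothetical stand-ins for illustration, not the NeMo implementation.

```python
import numpy as np

def tdt_greedy_decode(enc_frames, joint, predictor_step, init_state, blank=0, max_symbols=500):
    """Greedy decoding sketch for a Token-and-Duration Transducer.

    `joint(frame, state)` is assumed to return two logit vectors, one over
    tokens (index `blank` = blank) and one over durations {0, 1, ..., D_max};
    `predictor_step(state, token)` advances the prediction network. Both are
    hypothetical interfaces used only for this sketch.
    """
    t, state, hyp = 0, init_state, []
    while t < len(enc_frames) and len(hyp) < max_symbols:
        token_logits, dur_logits = joint(enc_frames[t], state)
        token = int(np.argmax(token_logits))      # argmax of the token distribution
        duration = int(np.argmax(dur_logits))     # argmax of the duration distribution
        if token != blank:
            hyp.append(token)
            state = predictor_step(state, token)
        # advance by the predicted duration; force progress on a blank with duration 0
        t += duration if (token != blank or duration > 0) else 1
    return hyp
```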
https://proceedings.mlr.press/v202/xu23h.html
https://proceedings.mlr.press/v202/xu23h/xu23h.pdf
https://openreview.net/forum?id=lK1B1yl289
Constrained Efficient Global Optimization of Expensive Black-box Functions
https://proceedings.mlr.press/v202/xu23h.html
Wenjie Xu, Yuning Jiang, Bratislav Svetozarevic, Colin Jones
https://proceedings.mlr.press/v202/xu23h.html
ICML 2023
We study the problem of constrained efficient global optimization, where both the objective and constraints are expensive black-box functions that can be learned with Gaussian processes. We propose CONFIG (CONstrained efFIcient Global Optimization), a simple and effective algorithm to solve it. Under certain regularity assumptions, we show that our algorithm enjoys the same cumulative regret bound as that in the unconstrained case and similar cumulative constraint violation upper bounds. For commonly used Matern and Squared Exponential kernels, our bounds are sublinear and allow us to derive a convergence rate to the optimal solution of the original constrained problem. In addition, our method naturally provides a scheme to declare infeasibility when the original black-box optimization problem is infeasible. Numerical experiments on sampled instances from the Gaussian process, artificial numerical problems, and a black-box building controller tuning problem all demonstrate the competitive performance of our algorithm. Compared to the other state-of-the-art methods, our algorithm significantly improves the theoretical guarantees while achieving competitive empirical performance.
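A hedged sketch of an optimistic (lower-confidence-bound) constrained selection step in the spirit of CONFIG, using scikit-learn GPs over a finite candidate set; the exact acquisition rule and constants in the paper may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def optimistic_constrained_step(X_obs, f_obs, c_obs, candidates, beta=2.0):
    """Pick the next query point by optimistic (LCB-based) constrained selection.

    f_obs: noisy objective values; c_obs: constraint values (feasible iff <= 0).
    Follows the spirit of the abstract's description; not the exact CONFIG rule.
    """
    gp_f = GaussianProcessRegressor(normalize_y=True).fit(X_obs, f_obs)
    gp_c = GaussianProcessRegressor(normalize_y=True).fit(X_obs, c_obs)
    mu_f, sd_f = gp_f.predict(candidates, return_std=True)
    mu_c, sd_c = gp_c.predict(candidates, return_std=True)
    lcb_f = mu_f - beta * sd_f          # optimistic objective value
    lcb_c = mu_c - beta * sd_c          # optimistic constraint value
    feasible = lcb_c <= 0.0
    if not feasible.any():              # no optimistically feasible point: flag infeasibility
        return None
    idx = int(np.argmin(np.where(feasible, lcb_f, np.inf)))
    return candidates[idx]
```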
https://proceedings.mlr.press/v202/xu23i.html
https://proceedings.mlr.press/v202/xu23i/xu23i.pdf
https://openreview.net/forum?id=d28UZYETzI
Pareto Regret Analyses in Multi-objective Multi-armed Bandit
https://proceedings.mlr.press/v202/xu23i.html
Mengfan Xu, Diego Klabjan
https://proceedings.mlr.press/v202/xu23i.html
ICML 2023
We study Pareto optimality in the multi-objective multi-armed bandit by providing a formulation of the adversarial multi-objective multi-armed bandit and defining its Pareto regrets, which apply to both stochastic and adversarial settings. The regrets do not rely on any scalarization functions and, unlike scalarized regrets, reflect Pareto optimality. We also present new algorithms for the settings both with and without prior information about the multi-objective multi-armed bandit instance. Via our established upper and lower bounds on Pareto regrets, the algorithms are shown to be optimal in adversarial settings and nearly optimal, up to a logarithmic factor, in stochastic settings. Moreover, the lower-bound analyses show that the new regrets are consistent with the existing Pareto regret for stochastic settings, and extend an adversarial attack mechanism from the single-objective bandit to the multi-objective one.
https://proceedings.mlr.press/v202/xu23j.html
https://proceedings.mlr.press/v202/xu23j/xu23j.pdf
https://openreview.net/forum?id=6eGltW7t8F
Diverse and Faithful Knowledge-Grounded Dialogue Generation via Sequential Posterior Inference
https://proceedings.mlr.press/v202/xu23j.html
Yan Xu, Deqian Kong, Dehong Xu, Ziwei Ji, Bo Pang, Pascale Fung, Ying Nian Wu
https://proceedings.mlr.press/v202/xu23j.html
ICML 2023
The capability to generate responses with diversity and faithfulness using factual knowledge is paramount for creating a human-like, trustworthy dialogue system. Common strategies either adopt a two-step paradigm, which optimizes knowledge selection and response generation separately and may overlook the inherent correlation between these two tasks, or leverage a conditional variational method to jointly optimize knowledge selection and response generation by employing an inference network. In this paper, we present an end-to-end learning framework, termed Sequential Posterior Inference (SPI), capable of selecting knowledge and generating dialogues by approximately sampling from the posterior distribution. Unlike other methods, SPI does not require the inference network or assume a simple geometry of the posterior distribution. This straightforward and intuitive inference procedure of SPI directly queries the response generation model, allowing for accurate knowledge selection and generation of faithful responses. In addition to modeling contributions, our experimental results on two common dialogue datasets (Wizard of Wikipedia and Holl-E) demonstrate that SPI outperforms previous strong baselines according to both automatic and human evaluation metrics.
https://proceedings.mlr.press/v202/xu23k.html
https://proceedings.mlr.press/v202/xu23k/xu23k.pdf
https://openreview.net/forum?id=PSeePcY7WR
Quantifying the Variability Collapse of Neural Networks
https://proceedings.mlr.press/v202/xu23k.html
Jing Xu, Haoxiong Liu
https://proceedings.mlr.press/v202/xu23k.html
ICML 2023
Recent studies empirically demonstrate the positive relationship between the transferability of neural networks and the in-class variation of the last-layer features. The recently discovered Neural Collapse (NC) phenomenon provides a new perspective for understanding this last-layer geometry of neural networks. In this paper, we propose a novel metric, named Variability Collapse Index (VCI), to quantify the variability collapse phenomenon in the NC paradigm. The VCI metric is well-motivated and intrinsically related to the linear probing loss on the last-layer features. Moreover, it enjoys desirable theoretical and empirical properties, including invariance under invertible linear transformations and numerical stability, that distinguish it from previous metrics. Our experiments verify that VCI is indicative of the variability collapse and the transferability of pretrained neural networks.
https://proceedings.mlr.press/v202/xu23l.html
https://proceedings.mlr.press/v202/xu23l/xu23l.pdf
https://openreview.net/forum?id=inClAaZKvc
Progressive Purification for Instance-Dependent Partial Label Learning
https://proceedings.mlr.press/v202/xu23l.html
Ning Xu, Biao Liu, Jiaqi Lv, Congyu Qiao, Xin Geng
https://proceedings.mlr.press/v202/xu23l.html
ICML 2023
Partial label learning (PLL) aims to train multiclass classifiers from examples, each annotated with a set of candidate labels among which a fixed but unknown candidate label is correct. In the last few years, the instance-independent generation process of candidate labels has been extensively studied, on the basis of which many theoretical advances have been made in PLL. Nevertheless, the candidate labels are always instance-dependent in practice, and there is no theoretical guarantee that a model trained on instance-dependent PLL examples can converge to an ideal one. In this paper, a theoretically grounded and practically effective approach named POP, i.e. PrOgressive Purification for instance-dependent partial label learning, is proposed. Specifically, POP updates the learning model and purifies each candidate label set progressively in every epoch. Theoretically, we prove that POP enlarges the region where the model is reliable at an appropriate rate, and eventually approximates the Bayes optimal classifier under mild assumptions. Technically, POP is flexible with arbitrary PLL losses and can improve the performance of previous PLL losses in the instance-dependent case. Experiments on benchmark datasets and real-world datasets validate the effectiveness of the proposed method.
https://proceedings.mlr.press/v202/xu23m.html
https://proceedings.mlr.press/v202/xu23m/xu23m.pdf
https://openreview.net/forum?id=wmgyO9RZhy
PFGM++: Unlocking the Potential of Physics-Inspired Generative Models
https://proceedings.mlr.press/v202/xu23m.html
Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, Tommi Jaakkola
https://proceedings.mlr.press/v202/xu23m.html
ICML 2023
We introduce a new family of physics-inspired generative models termed PFGM++ that unifies diffusion models and Poisson Flow Generative Models (PFGM). These models realize generative trajectories for $N$-dimensional data by embedding paths in an $(N+D)$-dimensional space while still controlling the progression with a simple scalar norm of the $D$ additional variables. The new models reduce to PFGM when $D=1$ and to diffusion models when $D\to\infty$. The flexibility of choosing $D$ allows us to trade off robustness against rigidity, as increasing $D$ results in more concentrated coupling between the data and the additional variable norms. We dispense with the biased large-batch field targets used in PFGM and instead provide an unbiased perturbation-based objective similar to diffusion models. To explore different choices of $D$, we provide a direct alignment method for transferring well-tuned hyperparameters from diffusion models ($D\to\infty$) to any finite $D$ values. Our experiments show that models with finite $D$ can be superior to previous state-of-the-art diffusion models on CIFAR-10, FFHQ $64\times 64$, and LSUN Churches $256\times 256$, with median $D$ values. In the class-conditional setting, $D=2048$ yields a current state-of-the-art FID of 1.74 on CIFAR-10 without additional training. Furthermore, we demonstrate that models with smaller $D$ exhibit improved robustness against modeling errors. Code is available at https://github.com/Newbeeer/pfgmpp
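For the hyperparameter alignment between diffusion models and finite-$D$ PFGM++ mentioned above, a commonly cited mapping is $r = \sigma\sqrt{D}$; the snippet below applies it, but treat the exact formula as an assumption to be checked against the paper.

```python
import numpy as np

def sigma_to_r(sigma, D):
    """Map a diffusion-model noise scale sigma (the D -> infinity limit) to the
    norm r of the D extra variables in PFGM++. The r = sigma * sqrt(D)
    alignment is stated here from memory and should be verified."""
    return sigma * np.sqrt(D)

# e.g. reuse an EDM-style sigma schedule for a finite-D PFGM++ model
sigmas = np.geomspace(0.002, 80.0, num=18)
r_schedule = sigma_to_r(sigmas, D=2048)
```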
https://proceedings.mlr.press/v202/xu23n.html
https://proceedings.mlr.press/v202/xu23n/xu23n.pdf
https://openreview.net/forum?id=sLfHWWrfe2
Geometric Latent Diffusion Models for 3D Molecule Generation
https://proceedings.mlr.press/v202/xu23n.html
Minkai Xu, Alexander S Powers, Ron O. Dror, Stefano Ermon, Jure Leskovec
https://proceedings.mlr.press/v202/xu23n.html
ICML 2023
Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design. Inspired by the recent huge success of Stable (latent) Diffusion models, we propose a novel and principled method for 3D molecule generation named Geometric Latent Diffusion Models (GeoLDM). GeoLDM is the first latent DM model for the molecular geometry domain, composed of autoencoders encoding structures into continuous latent codes and DMs operating in the latent space. Our key innovation is that for modeling the 3D molecular geometries, we capture its critical roto-translational equivariance constraints by building a point-structured latent space with both invariant scalars and equivariant tensors. Extensive experiments demonstrate that GeoLDM can consistently achieve better performance on multiple molecule generation benchmarks, with up to 7% improvement for the valid percentage of large biomolecules. Results also demonstrate GeoLDM’s higher capacity for controllable generation thanks to the latent modeling. Code is provided at https://github.com/MinkaiXu/GeoLDM.
https://proceedings.mlr.press/v202/xu23o.html
https://proceedings.mlr.press/v202/xu23o/xu23o.pdf
https://openreview.net/forum?id=CN5J0UGZYg
The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing
https://proceedings.mlr.press/v202/xu23o.html
Xingyu Xu, Yandi Shen, Yuejie Chi, Cong Ma
https://proceedings.mlr.press/v202/xu23o.html
ICML 2023
We propose $\textsf{ScaledGD($\lambda$)}$, a preconditioned gradient descent method to tackle the low-rank matrix sensing problem when the true rank is unknown, and when the matrix is possibly ill-conditioned. Using overparametrized factor representations, $\textsf{ScaledGD($\lambda$)}$ starts from a small random initialization and proceeds by gradient descent with a specific form of preconditioning with a fixed damping term to combat overparameterization. At the expense of the light computational overhead incurred by the preconditioners, $\textsf{ScaledGD($\lambda$)}$ is remarkably robust to ill-conditioning compared to vanilla gradient descent ($\mathsf{GD}$). Specifically, we show that, under the Gaussian design, $\textsf{ScaledGD($\lambda$)}$ converges to the true low-rank matrix at a constant linear rate that is independent of the condition number (apart from a short, nearly dimension-free burn-in period), with near-optimal sample complexity. This significantly improves upon the convergence rate of vanilla $\mathsf{GD}$, which suffers from a polynomial dependency on the condition number. Our work provides evidence on the power of preconditioning in accelerating convergence without hurting generalization in overparameterized learning.
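A minimal NumPy sketch of a damped, preconditioned update of this form for the symmetric case $M^\star \approx XX^\top$ with symmetric measurement matrices; the step size, damping, and initialization scale are illustrative rather than the paper's tuned choices.

```python
import numpy as np

def scaled_gd_lambda(A_list, y, n, k, lam=1e-3, eta=0.3, iters=300, init_scale=1e-3, seed=0):
    """Preconditioned GD sketch for symmetric low-rank matrix sensing with an
    overparameterized factor X of rank k. The update
        X <- X - eta * grad @ inv(X^T X + lam * I)
    follows the damped preconditioning described in the abstract; each A in
    A_list is assumed symmetric, and y_i = <A_i, M*>."""
    rng = np.random.default_rng(seed)
    X = init_scale * rng.standard_normal((n, k))            # small random init
    m = len(A_list)
    for _ in range(iters):
        residuals = np.array([np.sum(A * (X @ X.T)) for A in A_list]) - y
        grad = (2.0 / m) * sum(r * (A @ X) for r, A in zip(residuals, A_list))
        precond = np.linalg.inv(X.T @ X + lam * np.eye(k))  # damped preconditioner
        X = X - eta * grad @ precond
    return X @ X.T
```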
https://proceedings.mlr.press/v202/xu23p.html
https://proceedings.mlr.press/v202/xu23p/xu23p.pdf
https://openreview.net/forum?id=V6PNBRWRil
Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning
https://proceedings.mlr.press/v202/xu23p.html
Hongzuo Xu, Yijie Wang, Juhui Wei, Songlei Jian, Yizhou Li, Ning Liu
https://proceedings.mlr.press/v202/xu23p.html
ICML 2023
Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals. Different from current reconstruction-guided generative models and transformation-based contrastive models, we devise novel data-driven supervision for tabular data by introducing a characteristic – scale – as data labels. By representing varied sub-vectors of data instances, we define scale as the relationship between the dimensionality of original sub-vectors and that of representations. Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training. This paper further proposes a scale learning-based anomaly detection method. Supervised by the learning objective of scale distribution alignment, our approach learns the ranking of representations converted from varied subspaces of each data instance. Through this proxy task, our approach models inherent regularities and patterns within data, which well describes data "normality". Abnormal degrees of testing instances are obtained by measuring whether they fit these learned patterns. Extensive experiments show that our approach leads to significant improvement over state-of-the-art generative/contrastive anomaly detection methods.
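To make the notion of "scale" labels concrete, here is a hedged sketch of generating (sub-vector, scale-label) pairs for one tabular instance; the label definition and masking scheme are illustrative and may differ from the paper's exact pipeline.

```python
import numpy as np

def make_scale_labeled_pairs(x, rep_dim=32, n_views=8, rng=None):
    """Create (sub-vector, scale-label) training pairs from one tabular instance x.

    Each view keeps a random subset of the original features; the "scale" label
    encodes the relationship between the sub-vector's dimensionality and the
    fixed representation dimensionality (here simply their ratio). This mirrors
    the idea in the abstract; the paper's exact definition may differ.
    """
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    views, scales = [], []
    for _ in range(n_views):
        keep = rng.choice(d, size=rng.integers(2, d + 1), replace=False)
        sub = np.zeros(d)
        sub[keep] = x[keep]                 # masked sub-vector (zero-padded to length d)
        views.append(sub)
        scales.append(len(keep) / rep_dim)  # scale label attached to this view
    return np.stack(views), np.array(scales)
```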
https://proceedings.mlr.press/v202/xu23q.html
https://proceedings.mlr.press/v202/xu23q/xu23q.pdf
https://openreview.net/forum?id=KQaMjvlUE6
Competing for Shareable Arms in Multi-Player Multi-Armed Bandits
https://proceedings.mlr.press/v202/xu23q.html
Renzhe Xu, Haotian Wang, Xingxuan Zhang, Bo Li, Peng Cui
https://proceedings.mlr.press/v202/xu23q.html
ICML 2023
Competitions for shareable and limited resources have long been studied with strategic agents. In reality, agents often have to learn and maximize the rewards of the resources at the same time. To design an individualized competing policy, we model the competition between agents in a novel multi-player multi-armed bandit (MPMAB) setting where players are selfish and aim to maximize their own rewards. In addition, when several players pull the same arm, we assume that these players share the arm’s reward equally in expectation. Under this setting, we first analyze the Nash equilibrium when arms’ rewards are known. Subsequently, we propose a novel Selfish MPMAB with Averaging Allocation (SMAA) approach based on the equilibrium. We theoretically demonstrate that SMAA achieves a good regret guarantee for each player when all players follow the algorithm. Additionally, we establish that no single selfish player can significantly increase their rewards through deviation, nor can they detrimentally affect other players’ rewards without incurring substantial losses for themselves. We finally validate the effectiveness of the method in extensive synthetic experiments.
https://proceedings.mlr.press/v202/xu23r.html
https://proceedings.mlr.press/v202/xu23r/xu23r.pdf
https://openreview.net/forum?id=jJeY7w8YRz
Sequential Predictive Conformal Inference for Time Series
https://proceedings.mlr.press/v202/xu23r.html
Chen Xu, Yao Xie
https://proceedings.mlr.press/v202/xu23r.html
ICML 2023
We present a new distribution-free conformal prediction algorithm for sequential data (e.g., time series), called sequential predictive conformal inference (SPCI). We specifically account for the fact that time series data are non-exchangeable, and thus many existing conformal prediction algorithms are not applicable. The main idea is to adaptively re-estimate the conditional quantile of non-conformity scores (e.g., prediction residuals), exploiting the temporal dependence among them. More precisely, we cast the problem of constructing a conformal prediction interval as predicting the quantile of a future residual, given a user-specified point prediction algorithm. Theoretically, we establish asymptotically valid conditional coverage by extending consistency analyses in quantile regression. Using simulation and real-data experiments, we demonstrate a significant reduction in interval width of SPCI compared to other existing methods under the desired empirical coverage.
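A hedged sketch of the residual-quantile idea: regress the quantiles of recent residuals on their lagged values and use the predicted quantiles of the next residual to form an interval around the point forecast; SPCI's actual quantile estimator and adaptive updates are more involved.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def spci_style_interval(residuals, window=20, alpha=0.1):
    """Predict an interval for the next residual from its recent history.

    Fits quantile regressors of the residual on its `window` lagged values,
    in the spirit of adaptively re-estimating the conditional quantile of
    non-conformity scores. The estimator used in the paper may differ.
    """
    residuals = np.asarray(residuals, dtype=float)
    X = np.array([residuals[i - window:i] for i in range(window, len(residuals))])
    y = residuals[window:]
    lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X, y)
    hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X, y)
    x_next = residuals[-window:].reshape(1, -1)
    return float(lo.predict(x_next)[0]), float(hi.predict(x_next)[0])

# usage: adding the predicted residual quantiles to the point forecast gives the interval
```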
https://proceedings.mlr.press/v202/xu23s.html
https://proceedings.mlr.press/v202/xu23s/xu23s.pdf
https://openreview.net/forum?id=DSOmy0ScK6
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video
https://proceedings.mlr.press/v202/xu23s.html
Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, Guohai Xu, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou
https://proceedings.mlr.press/v202/xu23s.html
ICML 2023
Recent years have witnessed a big convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms of solely relying on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network by sharing common universal modules for modality collaboration and disentangling different modality modules to deal with modality entanglement. It is flexible to select different modules for different understanding and generation tasks across all modalities including text, image, and video. Empirical study shows that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal tasks of image-text and video-text understanding and generation, and uni-modal tasks of text-only, image-only, and video-only understanding. Notably, mPLUG-2 shows new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video caption tasks with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. Code and models will be released in https://github.com/X-PLUG/mPLUG-2.
https://proceedings.mlr.press/v202/xu23t.html
https://proceedings.mlr.press/v202/xu23t/xu23t.pdf
https://openreview.net/forum?id=ZOOwHgxfR4
ProtST: Multi-Modality Learning of Protein Sequences and Biomedical Texts
https://proceedings.mlr.press/v202/xu23t.html
Minghao Xu, Xinyu Yuan, Santiago Miret, Jian Tang
https://proceedings.mlr.press/v202/xu23t.html
ICML 2023
Current protein language models (PLMs) learn protein representations mainly based on their sequences, thereby well capturing co-evolutionary information, but they are unable to explicitly acquire protein functions, which is the end goal of protein representation learning. Fortunately, for many proteins, their textual property descriptions are available, where their various functions are also described. Motivated by this fact, we first build the ProtDescribe dataset to augment protein sequences with text descriptions of their functions and other important properties. Based on this dataset, we propose the ProtST framework to enhance Protein Sequence pre-training and understanding by biomedical Texts. During pre-training, we design three types of tasks, i.e., unimodal mask prediction, multimodal representation alignment and multimodal mask prediction, to enhance a PLM with protein property information with different granularities and, at the same time, preserve the PLM’s original representation power. On downstream tasks, ProtST enables both supervised learning and zero-shot prediction. We verify the superiority of ProtST-induced PLMs over previous ones on diverse representation learning benchmarks. Under the zero-shot setting, we show the effectiveness of ProtST on zero-shot protein classification, and ProtST also enables functional protein retrieval from a large-scale database without any function annotation.
https://proceedings.mlr.press/v202/xu23u.html
https://proceedings.mlr.press/v202/xu23u/xu23u.pdf
https://openreview.net/forum?id=tRhQsHnoFw
Bayesian Design Principles for Frequentist Sequential Learning
https://proceedings.mlr.press/v202/xu23u.html
Yunbei Xu, Assaf Zeevi
https://proceedings.mlr.press/v202/xu23u.html
ICML 2023
We develop a general theory to optimize the frequentist regret for sequential learning problems, where efficient bandit and reinforcement learning algorithms can be derived from unified Bayesian principles. We propose a novel optimization approach to create "algorithmic beliefs" at each round, and use Bayesian posteriors to make decisions. This is the first approach to make Bayesian-type algorithms prior-free and applicable to adversarial settings, in a generic and optimal manner. Moreover, the algorithms are simple and often efficient to implement. As a major application, we present a novel algorithm for multi-armed bandits that achieves the "best-of-all-worlds" empirical performance in the stochastic, adversarial, and non-stationary environments. And we illustrate how these principles can be used in linear bandits, convex bandits, and reinforcement learning.
https://proceedings.mlr.press/v202/xu23v.html
https://proceedings.mlr.press/v202/xu23v/xu23v.pdf
https://openreview.net/forum?id=cMmjBH5LqW
SLAMB: Accelerated Large Batch Training with Sparse Communication
https://proceedings.mlr.press/v202/xu23v.html
Hang Xu, Wenxuan Zhang, Jiawei Fei, Yuzhe Wu, Tingwen Xie, Jun Huang, Yuchen Xie, Mohamed Elhoseiny, Panos Kalnis
https://proceedings.mlr.press/v202/xu23v.html
ICML 2023
Distributed training of large deep neural networks requires frequent exchange of massive data between machines, thus communication efficiency is a major concern. Existing compressed communication methods are either not compatible with large batch optimization algorithms, or do not provide sufficient speedup in large scale. In this paper, we combine sparsification-based gradient compression with the layer-wise adaptive moments optimizer for large batch training (LAMB). We propose SLAMB, a novel communication-efficient optimizer that supports large batch sizes and scales to thousands of GPUs. SLAMB employs momentum masking, local error compensation, and element-wise adaptive rescaling to achieve accurate layer-wise weight updates, which translates to fast convergence for very large batches. Our empirical results show that, compared to the state-of-the-art, SLAMB transmits half the amount of data in large-batch BERT pre-training, without sacrificing accuracy. Moreover, SLAMB achieves excellent scalability in large computing infrastructures. For instance, SLAMB with 128 GPUs reduces the training time of Swin Transformer pre-training on ImageNet to 5.35 hours, which is 2 hours faster than the state-of-the-art. At the extreme, we trained BERT-XL (2.8B parameters) on 1,024 NVIDIA A100 GPUs, where SLAMB achieved 90% scaling efficiency.
https://proceedings.mlr.press/v202/xu23w.html
https://proceedings.mlr.press/v202/xu23w/xu23w.pdf
https://openreview.net/forum?id=rNLHeKckZc
Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks
https://proceedings.mlr.press/v202/xu23w.html
Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang, Bei Yu
https://proceedings.mlr.press/v202/xu23w.html
ICML 2023
Neural architecture search (NAS) for graph neural networks (GNNs), called NAS-GNNs, has achieved significant performance over manually designed GNN architectures. However, these methods inherit issues from conventional NAS methods, such as high computational cost and optimization difficulty. More importantly, previous NAS methods have ignored the uniqueness of GNNs, namely that GNNs possess expressive power without training. With randomly-initialized weights, we can then seek the optimal architecture parameters via a sparse coding objective and derive a novel NAS-GNNs method, namely neural architecture coding (NAC). Consequently, our NAC holds a no-update scheme on GNNs and can efficiently compute in linear time. Empirical evaluations on multiple GNN benchmark datasets demonstrate that our approach leads to state-of-the-art performance, which is up to $200\times$ faster and $18.8\%$ more accurate than the strong baselines.
https://proceedings.mlr.press/v202/xu23x.html
https://proceedings.mlr.press/v202/xu23x/xu23x.pdf
https://openreview.net/forum?id=ZVRWKr3ApD
An Instrumental Variable Approach to Confounded Off-Policy Evaluation
https://proceedings.mlr.press/v202/xu23x.html
Yang Xu, Jin Zhu, Chengchun Shi, Shikai Luo, Rui Song
https://proceedings.mlr.press/v202/xu23x.html
ICML 2023
Off-policy evaluation (OPE) aims to estimate the return of a target policy using some pre-collected observational data generated by a potentially different behavior policy. In many cases, there exist unmeasured variables that confound the action-reward or action-next-state relationships, rendering many existing OPE approaches ineffective. This paper develops an instrumental variable (IV)-based method for consistent OPE in confounded sequential decision making. Similar to single-stage decision making, we show that IV enables us to correctly identify the target policy’s value in infinite horizon settings as well. Furthermore, we propose a number of policy value estimators and illustrate their effectiveness through extensive simulations and real data analysis from a world-leading short-video platform.
https://proceedings.mlr.press/v202/xue23a.html
https://proceedings.mlr.press/v202/xue23a/xue23a.pdf
https://openreview.net/forum?id=t4CrIEyukh
Near-Optimal Quantum Coreset Construction Algorithms for Clustering
https://proceedings.mlr.press/v202/xue23a.html
Yecheng Xue, Xiaoyu Chen, Tongyang Li, Shaofeng H.-C. Jiang
https://proceedings.mlr.press/v202/xue23a.html
ICML 2023
$k$-Clustering in $\mathbb{R}^d$ (e.g., $k$-median and $k$-means) is a fundamental machine learning problem. While near-linear time approximation algorithms were known in the classical setting for a dataset with cardinality $n$, it remains open to find sublinear-time quantum algorithms. We give quantum algorithms that find coresets for $k$-clustering in $\mathbb{R}^d$ with $\tilde{O}(\sqrt{nk}d^{3/2})$ query complexity. Our coreset reduces the input size from $n$ to $\mathrm{poly}(k\epsilon^{-1}d)$, so that existing $\alpha$-approximation algorithms for clustering can run on top of it and yield $(1 + \epsilon)\alpha$-approximation. This eventually yields a quadratic speedup for various $k$-clustering approximation algorithms. We complement our algorithm with a nearly matching lower bound, that any quantum algorithm must make $\Omega(\sqrt{nk})$ queries in order to achieve even $O(1)$-approximation for $k$-clustering.
https://proceedings.mlr.press/v202/xue23b.html
https://proceedings.mlr.press/v202/xue23b/xue23b.pdf
https://openreview.net/forum?id=qaWSjkLPuw
A Study on Transformer Configuration and Training Objective
https://proceedings.mlr.press/v202/xue23b.html
Fuzhao Xue, Jianghai Chen, Aixin Sun, Xiaozhe Ren, Zangwei Zheng, Xiaoxin He, Yongming Chen, Xin Jiang, Yang You
https://proceedings.mlr.press/v202/xue23b.html
ICML 2023
Transformer-based models have delivered impressive results on many tasks, particularly vision and language tasks. In many model training situations, conventional configurations are often adopted. For example, we usually set the base model with a hidden size (i.e. model width) of 768 and a number of transformer layers (i.e. model depth) of 12. In this paper, we revisit these conventional configurations by studying the relationship between transformer configuration and training objective. We show that the optimal transformer configuration is closely related to the training objective. Specifically, compared with the simple classification objective, the masked autoencoder is effective in alleviating the over-smoothing issue in deep transformer training. Based on this finding, we propose “Bamboo”, a notion of using deeper and narrower transformer configurations, for masked autoencoder training. On ImageNet, with such a simple change in configuration, the re-designed Base-level transformer achieves 84.2% top-1 accuracy and outperforms SoTA models like MAE by $0.9\%$. On language tasks, the re-designed model outperforms BERT with the default setting by 1.1 points on average on the GLUE benchmark with 8 datasets.
https://proceedings.mlr.press/v202/xue23c.html
https://proceedings.mlr.press/v202/xue23c/xue23c.pdf
https://openreview.net/forum?id=P98vAWoj5W
LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation
https://proceedings.mlr.press/v202/xue23c.html
Rui Xue, Haoyu Han, Mohamadali Torkamani, Jian Pei, Xiaorui Liu
https://proceedings.mlr.press/v202/xue23c.html
ICML 2023
Recent works have demonstrated the benefits of capturing long-distance dependency in graphs by deeper graph neural networks (GNNs). But deeper GNNs suffer from the long-lasting scalability challenge due to the neighborhood explosion problem in large-scale graphs. In this work, we propose to capture long-distance dependency in graphs by shallower models instead of deeper models, which leads to a much more efficient model, LazyGNN, for graph representation learning. Moreover, we demonstrate that LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further accelerations through the development of mini-batch LazyGNN. Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks. The implementation of LazyGNN is available at https://github.com/RXPHD/Lazy_GNN.
https://proceedings.mlr.press/v202/xue23d.html
https://proceedings.mlr.press/v202/xue23d/xue23d.pdf
https://openreview.net/forum?id=0BS36re3Cx
Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
https://proceedings.mlr.press/v202/xue23d.html
Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, Baharan Mirzasoleiman
https://proceedings.mlr.press/v202/xue23d.html
ICML 2023
Contrastive learning (CL) has emerged as a powerful technique for representation learning, with or without label supervision. However, supervised CL is prone to collapsing representations of subclasses within a class by not capturing all their features, and unsupervised CL may suppress harder class-relevant features by focusing on learning easy class-irrelevant features; both significantly compromise representation quality. Yet, there is no theoretical understanding of class collapse or feature suppression at test time. We provide the first unified theoretically rigorous framework to determine which features are learnt by CL. Our analysis indicates that, perhaps surprisingly, the bias of (stochastic) gradient descent towards finding simpler solutions is a key factor in collapsing subclass representations and suppressing harder class-relevant features. Moreover, we present increasing embedding dimensionality and improving the quality of data augmentations as two theoretically motivated solutions to feature suppression. We also provide the first theoretical explanation for why employing supervised and unsupervised CL together yields higher-quality representations, even when using commonly-used stochastic gradient methods.
https://proceedings.mlr.press/v202/xue23e.html
https://proceedings.mlr.press/v202/xue23e/xue23e.pdf
https://openreview.net/forum?id=2bGTacOn8v
Adaptive Computation with Elastic Input Sequence
https://proceedings.mlr.press/v202/xue23e.html
Fuzhao Xue, Valerii Likhosherstov, Anurag Arnab, Neil Houlsby, Mostafa Dehghani, Yang You
https://proceedings.mlr.press/v202/xue23e.html
ICML 2023
Humans have the ability to adapt the type of information they use, the procedure they employ, and the amount of time they spend when solving problems. However, most standard neural networks have a fixed function type and computation budget regardless of the sample’s nature or difficulty. Adaptivity is a powerful paradigm as it not only imbues practitioners with flexibility pertaining to the downstream usage of these models but can also serve as a powerful inductive bias for solving certain challenging classes of problems. In this work, we introduce a new approach called AdaTape, which allows for dynamic computation in neural networks through adaptive tape tokens. AdaTape utilizes an elastic input sequence by equipping an architecture with a dynamic read-and-write tape. Specifically, we adaptively generate input sequences using tape tokens obtained from a tape bank which can be either trainable or derived from input data. We examine the challenges and requirements to obtain dynamic sequence content and length, and propose the Adaptive Tape Reading (ATR) algorithm to achieve both goals. Through extensive experiments on image recognition tasks, we show that AdaTape can achieve better performance while maintaining the computational cost. To facilitate further research, we have released code at https://github.com/google-research/scenic/tree/main/scenic/projects/adatape.
https://proceedings.mlr.press/v202/yamagata23a.html
https://proceedings.mlr.press/v202/yamagata23a/yamagata23a.pdf
https://openreview.net/forum?id=6lETsLXxta
Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL
https://proceedings.mlr.press/v202/yamagata23a.html
Taku Yamagata, Ahmed Khalil, Raul Santos-Rodriguez
https://proceedings.mlr.press/v202/yamagata23a.html
ICML 2023
Recent works have shown that tackling offline reinforcement learning (RL) with a conditional policy produces promising results. The Decision Transformer (DT) combines the conditional policy approach and a transformer architecture, showing competitive performance against several benchmarks. However, DT lacks stitching ability – one of the critical abilities for offline RL to learn the optimal policy from sub-optimal trajectories. This issue becomes particularly significant when the offline dataset only contains sub-optimal trajectories. On the other hand, the conventional RL approaches based on Dynamic Programming (such as Q-learning) do not have the same limitation; however, they suffer from unstable learning behaviours, especially when they rely on function approximation in an off-policy learning setting. In this paper, we propose the Q-learning Decision Transformer (QDT) to address the shortcomings of DT by leveraging the benefits of Dynamic Programming (Q-learning). It utilises the Dynamic Programming results to relabel the return-to-go in the training data to then train the DT with the relabelled data. Our approach efficiently exploits the benefits of these two approaches and compensates for each other’s shortcomings to achieve better performance.
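To illustrate the "relabel the return-to-go with Dynamic Programming results" step, here is one plausible, hypothetical relabeling rule for a single trajectory: lift each return-to-go to the learned state-value estimate and propagate the change backward so the sequence stays self-consistent. QDT's actual procedure is more nuanced; this is only a sketch of the idea.

```python
import numpy as np

def relabel_return_to_go(rewards, values, gamma=1.0):
    """Relabel return-to-go for one trajectory using learned state-value
    estimates `values[t]` (e.g., from offline Q-learning).

    Hypothetical rule: rtg[t] = max(r[t] + gamma * rtg[t+1], V(s_t)),
    computed backward so that sub-optimal trajectory segments inherit the
    higher values suggested by Dynamic Programming."""
    T = len(rewards)
    rtg = np.zeros(T)
    nxt = 0.0
    for t in reversed(range(T)):
        rtg[t] = max(rewards[t] + gamma * nxt, values[t])
        nxt = rtg[t]
    return rtg
```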
https://proceedings.mlr.press/v202/yamasaki23a.html
https://proceedings.mlr.press/v202/yamasaki23a/yamasaki23a.pdf
https://openreview.net/forum?id=ppTpHSw0kj
Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation
https://proceedings.mlr.press/v202/yamasaki23a.html
Hayata Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda
https://proceedings.mlr.press/v202/yamasaki23a.html
ICML 2023
A significant challenge in the field of quantum machine learning (QML) is to establish applications of quantum computation to accelerate common tasks in machine learning such as those for neural networks. Ridgelet transform has been a fundamental mathematical tool in the theoretical studies of neural networks, but the practical applicability of ridgelet transform to conducting learning tasks was limited since its numerical implementation by conventional classical computation requires an exponential runtime $\exp(O(D))$ as data dimension $D$ increases. To address this problem, we develop a quantum ridgelet transform (QRT), which implements the ridgelet transform of a quantum state within a linear runtime $O(D)$ of quantum computation. As an application, we also show that one can use QRT as a fundamental subroutine for QML to efficiently find a sparse trainable subnetwork of large shallow wide neural networks without conducting large-scale optimization of the original network. This application discovers an efficient way in this regime to demonstrate the lottery ticket hypothesis on finding such a sparse trainable neural network. These results open an avenue of QML for accelerating learning tasks with commonly used classical neural networks.
https://proceedings.mlr.press/v202/yan23a.html
https://proceedings.mlr.press/v202/yan23a/yan23a.pdf
https://openreview.net/forum?id=n9cpi2MSew
Compressed Decentralized Proximal Stochastic Gradient Method for Nonconvex Composite Problems with Heterogeneous Data
https://proceedings.mlr.press/v202/yan23a.html
Yonggui Yan, Jie Chen, Pin-Yu Chen, Xiaodong Cui, Songtao Lu, Yangyang Xu
https://proceedings.mlr.press/v202/yan23a.html
ICML 2023
We first propose a decentralized proximal stochastic gradient tracking method (DProxSGT) for nonconvex stochastic composite problems, with data heterogeneously distributed on multiple workers in a decentralized connected network. To save communication cost, we then extend DProxSGT to a compressed method by compressing the communicated information. Both methods need only $\mathcal{O}(1)$ samples per worker for each proximal update, which is important to achieve good generalization performance when training deep neural networks. With a smoothness condition on the expected loss function (but not on each sample function), the proposed methods can achieve an optimal sample complexity result to produce a near-stationary point. Numerical experiments on training neural networks demonstrate the significantly better generalization performance of our methods over large-batch training methods and momentum variance-reduction methods, as well as their ability to handle heterogeneous data via the gradient tracking scheme.
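A hedged sketch of one synchronous round of decentralized proximal SGD with gradient tracking, the template the abstract refers to; the exact ordering of the mixing, tracking, and proximal steps in DProxSGT may differ, and the l1 regularizer here is only an example.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (an example nonsmooth regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dprox_sgt_step(X, Y, G_prev, stoch_grad, W, eta, tau):
    """One round of decentralized proximal stochastic gradient tracking.

    X: (n_workers, d) local iterates; Y: (n_workers, d) tracking variables;
    G_prev: previous stochastic gradients; stoch_grad(X) returns fresh
    O(1)-sample gradients at the given iterates; W: doubly stochastic mixing
    matrix over the network. Illustrative template, not the paper's code.
    """
    X_new = soft_threshold(W @ X - eta * Y, eta * tau)  # mix, descend, then prox
    G_new = stoch_grad(X_new)                           # one sample per worker
    Y_new = W @ Y + G_new - G_prev                      # gradient tracking update
    return X_new, Y_new, G_new
```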
https://proceedings.mlr.press/v202/yan23b.html
https://proceedings.mlr.press/v202/yan23b/yan23b.pdf
https://openreview.net/forum?id=MxpU5qQZSb
Temporally Consistent Transformers for Video Generation
https://proceedings.mlr.press/v202/yan23b.html
Wilson Yan, Danijar Hafner, Stephen James, Pieter Abbeel
https://proceedings.mlr.press/v202/yan23b.html
ICML 2023
To generate accurate videos, algorithms have to understand the spatial and temporal dependencies in the world. Current algorithms enable accurate predictions over short horizons but tend to suffer from temporal inconsistencies. When generated content goes out of view and is later revisited, the model invents different content instead. Despite this severe limitation, no established benchmarks exist for video generation with long temporal dependencies. In this paper, we curate 3 challenging video datasets with long-range dependencies by rendering walks through 3D scenes of procedural mazes, Minecraft worlds, and indoor scans. We perform a comprehensive evaluation of current models and observe their limitations in temporal consistency. Moreover, we introduce the Temporally Consistent Transformer (TECO), a generative model that substantially improves long-term consistency while also reducing sampling time. By compressing its input sequence into fewer embeddings, applying a temporal transformer, and expanding back using a spatial MaskGit, TECO outperforms existing models across many metrics. Videos are available on the website: https://wilson1yan.github.io/teco
https://proceedings.mlr.press/v202/yan23c.html
https://proceedings.mlr.press/v202/yan23c/yan23c.pdf
https://openreview.net/forum?id=0tLjOxqjLS
Distortion and Uncertainty Aware Loss for Panoramic Depth Completion
https://proceedings.mlr.press/v202/yan23c.html
Zhiqiang Yan, Xiang Li, Kun Wang, Shuo Chen, Jun Li, Jian Yang
https://proceedings.mlr.press/v202/yan23c.html
ICML 2023
The standard MSE or MAE loss function is commonly used in limited field-of-view depth completion, treating each pixel equally under the basic assumption that all pixels contribute equally during optimization. Recently, with the rapid rise of panoramic photography, panoramic depth completion (PDC) has attracted increasing attention in 3D computer vision. However, the assumption is inapplicable to panoramic data due to its latitude-wise distortion and high uncertainty near textures and edges. To handle these challenges, we propose a distortion and uncertainty aware loss (DUL) that consists of a distortion-aware loss and an uncertainty-aware loss. The distortion-aware loss is designed to tackle the panoramic distortion caused by equirectangular projection, whose coordinate transformation relation is used to adaptively calculate the weight of the latitude-wise distortion, distributing uneven importance instead of equal treatment to each pixel. The uncertainty-aware loss is presented to handle the inaccuracy in non-smooth regions. Specifically, we characterize uncertainty in PDC solutions under a Bayesian deep learning framework, where a novel consistent uncertainty estimation constraint is designed to learn the consistency between multiple uncertainty maps of a single panorama. This consistency constraint allows the model to produce more precise uncertainty estimates that are robust to feature deformation. Extensive experiments show the superiority of our method over standard loss functions, reaching the state of the art.
https://proceedings.mlr.press/v202/yan23d.html
https://proceedings.mlr.press/v202/yan23d/yan23d.pdf
https://openreview.net/forum?id=JPMT9kjeJi
Self-Interpretable Time Series Prediction with Counterfactual Explanations
https://proceedings.mlr.press/v202/yan23d.html
Jingquan Yan, Hao Wang
https://proceedings.mlr.press/v202/yan23d.html
ICML 2023
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving. Most existing methods focus on interpreting predictions by assigning importance scores to segments of the time series. In this paper, we take a different and more challenging route and aim at developing a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions. Specifically, we formalize the problem of time series counterfactual explanations, establish associated evaluation protocols, and propose a variational Bayesian deep learning model equipped with counterfactual inference capability of time series abduction, action, and prediction. Compared with state-of-the-art baselines, our self-interpretable model can generate better counterfactual explanations while maintaining comparable prediction accuracy.
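The abduction-action-prediction steps named above can be illustrated on a toy structural model; this is a generic illustration of counterfactual inference, not the paper's variational model.

```python
def counterfactual_linear(y_obs, x_obs, x_cf, a, b):
    """Abduction-action-prediction on a toy SCM y = a*x + b + u.

    Abduction: infer the exogenous noise u from the observed (x, y).
    Action: replace the observed input x with the counterfactual x_cf.
    Prediction: push the inferred noise through the modified model.
    """
    u = y_obs - (a * x_obs + b)       # abduction
    return a * x_cf + b + u           # action + prediction

# e.g. "what would the prediction have been had the input been 2.0 instead of 1.5?"
print(counterfactual_linear(y_obs=3.1, x_obs=1.5, x_cf=2.0, a=2.0, b=0.0))
```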
https://proceedings.mlr.press/v202/yan23e.html
https://proceedings.mlr.press/v202/yan23e/yan23e.pdf
https://openreview.net/forum?id=mLOWz0e1Yq
Quantum 3D Graph Learning with Applications to Molecule Embedding
https://proceedings.mlr.press/v202/yan23e.html
Ge Yan, Huaijin Wu, Junchi Yan
https://proceedings.mlr.press/v202/yan23e.html
ICML 2023
Learning 3D graphs with spatial positions as well as node attributes has recently been actively studied for its utility in different applications, e.g. 3D molecules. Quantum computing is known to be a promising direction, given its potential theoretical supremacy for large-scale graph and combinatorial problems as well as the increasing evidence that physical quantum devices will be available in the near term. In this paper, for the first time to our best knowledge, we propose a quantum 3D embedding ansatz that learns the latent representation of 3D structures from the Hilbert space composed of the Bloch sphere of each qubit. Specifically, the 3D Cartesian coordinates of nodes are converted into rotation and torsion angles and then encoded into qubits. Moreover, a Parameterized Quantum Circuit (PQC) is applied to serve as the trainable layers, and the output of the PQC is adopted as the final node embedding. Experimental results on two downstream tasks, molecular property prediction and 3D molecular geometry generation, demonstrate the effectiveness of our model. We show the capacity and capability of our model via evaluation on the QM9 dataset (134k molecules) with very few parameters, and its potential to be executed on a real quantum device.
https://proceedings.mlr.press/v202/yan23f.html
https://proceedings.mlr.press/v202/yan23f/yan23f.pdf
https://openreview.net/forum?id=v8jOzpludB
Fast Rates in Time-Varying Strongly Monotone Games
https://proceedings.mlr.press/v202/yan23f.html
Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
https://proceedings.mlr.press/v202/yan23f.html
ICML 2023
Multi-player online games depict the interaction of multiple players with each other over time. Strongly monotone games are of particular interest since they have benign properties and also relate to many classic games that have applications in real life. Existing works mainly focus on the time-invariant case with provable guarantees established. However, the research of the more general time-varying games in changing environments is underexplored and the best-known result cannot match the guarantees in the time-invariant case. In this work, we present a new decentralized online algorithm for time-varying strongly monotone games, which greatly improves existing results and obtains fast rates, matching the best time-invariant guarantee without knowing the environmental non-stationarity. Furthermore, to achieve faster rates, we generalize the RVU property with smoothness and establish a series of problem-dependent bounds that also match the best time-invariant one. To realize all those results, we make a comprehensive use of the techniques in non-stationary and universal online learning.
https://proceedings.mlr.press/v202/yanagisawa23a.html
https://proceedings.mlr.press/v202/yanagisawa23a/yanagisawa23a.pdf
https://openreview.net/forum?id=DUGrwP6gfC
Proper Scoring Rules for Survival Analysis
https://proceedings.mlr.press/v202/yanagisawa23a.html
Hiroki Yanagisawa
https://proceedings.mlr.press/v202/yanagisawa23a.html
ICML 2023
Survival analysis is the problem of estimating probability distributions for future event times, which can be seen as a problem in uncertainty quantification. Although there are fundamental theories on strictly proper scoring rules for uncertainty quantification, little is known about those for survival analysis. In this paper, we investigate extensions of four major strictly proper scoring rules for survival analysis, and we prove that these extensions are proper under certain conditions, which arise from the discretization of the estimation of probability distributions. We also compare the estimation performance of these extended scoring rules on real datasets, finding that the extensions of the logarithmic score and the Brier score perform the best.
https://proceedings.mlr.press/v202/yang23a.html
https://proceedings.mlr.press/v202/yang23a/yang23a.pdf
https://openreview.net/forum?id=AfHIuNCzV4
Behavior Contrastive Learning for Unsupervised Skill Discovery
https://proceedings.mlr.press/v202/yang23a.html
Rushuai Yang, Chenjia Bai, Hongyi Guo, Siyuan Li, Bin Zhao, Zhen Wang, Peng Liu, Xuelong Li
https://proceedings.mlr.press/v202/yang23a.html
ICML 2023
In reinforcement learning, unsupervised skill discovery aims to learn diverse skills without extrinsic rewards. Previous methods discover skills by maximizing the mutual information (MI) between states and skills. However, such an MI objective tends to learn simple and static skills and may hinder exploration. In this paper, we propose a novel unsupervised skill discovery method through contrastive learning among behaviors, which makes the agent produce similar behaviors for the same skill and diverse behaviors for different skills. Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill, which serves as an upper bound of the previous MI objective. Meanwhile, our method implicitly increases the state entropy to obtain better state coverage. We evaluate our method on challenging mazes and continuous control tasks. The results show that our method generates diverse and far-reaching skills, and also obtains competitive performance in downstream tasks compared to the state-of-the-art methods.
https://proceedings.mlr.press/v202/yang23b.html
https://proceedings.mlr.press/v202/yang23b/yang23b.pdf
https://openreview.net/forum?id=qn9ZWZ3Pg7
Nested Elimination: A Simple Algorithm for Best-Item Identification From Choice-Based Feedback
https://proceedings.mlr.press/v202/yang23b.html
Junwen Yang, Yifan Feng
https://proceedings.mlr.press/v202/yang23b.html
ICML 2023
We study the problem of best-item identification from choice-based feedback. In this problem, a company sequentially and adaptively shows display sets to a population of customers and collects their choices. The objective is to identify the most preferred item with the least number of samples and at a high confidence level. We propose an elimination-based algorithm, namely Nested Elimination (NE), which is inspired by the nested structure implied by the information-theoretic lower bound. NE is simple in structure, easy to implement, and has a strong theoretical guarantee for sample complexity. Specifically, NE utilizes an innovative elimination criterion and circumvents the need to solve any complex combinatorial optimization problem. We provide an instance-specific and non-asymptotic bound on the expected sample complexity of NE. We also show NE achieves high-order worst-case asymptotic optimality. Finally, numerical experiments from both synthetic and real data corroborate our theoretical findings.
https://proceedings.mlr.press/v202/yang23c.html
https://proceedings.mlr.press/v202/yang23c/yang23c.pdf
https://openreview.net/forum?id=OYIIEfy4zw
Towards Better Graph Representation Learning with Parameterized Decomposition & Filtering
https://proceedings.mlr.press/v202/yang23c.html
Mingqi Yang, Wenjie Feng, Yanming Shen, Bryan Hooi
https://proceedings.mlr.press/v202/yang23c.html
ICML 2023
Proposing an effective and flexible matrix to represent a graph is a fundamental challenge that has been explored from multiple perspectives, e.g., filtering in Graph Fourier Transforms. In this work, we develop a novel and general framework which unifies many existing GNN models from the view of parameterized decomposition and filtering, and show how it helps to enhance the flexibility of GNNs while alleviating the smoothness and amplification issues of existing models. Essentially, we show that the extensively studied spectral graph convolutions with learnable polynomial filters are constrained variants of this formulation, and releasing these constraints enables our model to express the desired decomposition and filtering simultaneously. Based on this generalized framework, we develop models that are simple in implementation but achieve significant improvements and computational efficiency on a variety of graph learning tasks. Code is available at https://github.com/qslim/PDF.
https://proceedings.mlr.press/v202/yang23d.html
https://proceedings.mlr.press/v202/yang23d/yang23d.pdf
https://openreview.net/forum?id=6rlGbYv4bT
Weighted Flow Diffusion for Local Graph Clustering with Node Attributes: an Algorithm and Statistical Guarantees
https://proceedings.mlr.press/v202/yang23d.html
Shenghao Yang, Kimon Fountoulakis
https://proceedings.mlr.press/v202/yang23d.html
ICML 2023
Local graph clustering methods aim to detect small clusters in very large graphs without the need to process the whole graph. They are fundamental and scalable tools for a wide range of tasks such as local community detection, node ranking and node embedding. While prior work on local graph clustering mainly focuses on graphs without node attributes, modern real-world graph datasets typically come with node attributes that provide valuable additional information. We present a simple local graph clustering algorithm for graphs with node attributes, based on the idea of diffusing mass locally in the graph while accounting for both structural and attribute proximities. Using high-dimensional concentration results, we provide statistical guarantees on the performance of the algorithm for the recovery of a target cluster with a single seed node. We give conditions under which a target cluster generated from a fairly general contextual random graph model, which includes both the stochastic block model and the planted cluster model as special cases, can be fully recovered with bounded false positives. Empirically, we validate all theoretical claims using synthetic data, and we show that incorporating node attributes leads to superior local clustering performances using real-world graph datasets.
https://proceedings.mlr.press/v202/yang23e.html
https://proceedings.mlr.press/v202/yang23e/yang23e.pdf
https://openreview.net/forum?id=7DnvWyVkUo
Chemically Transferable Generative Backmapping of Coarse-Grained Proteins
https://proceedings.mlr.press/v202/yang23e.html
Soojung Yang, Rafael Gomez-Bombarelli
https://proceedings.mlr.press/v202/yang23e.html
ICML 2023
Coarse-graining (CG) accelerates molecular simulations of protein dynamics by simulating sets of atoms as singular beads. Backmapping is the reverse operation: recovering the lost atomistic details from the CG representation. While machine learning (ML) has produced accurate and efficient CG simulations of proteins, fast and reliable backmapping remains a challenge. Rule-based methods produce poor all-atom geometries, needing computationally costly refinement through additional simulations. Recently proposed ML approaches outperform traditional baselines but are not transferable between proteins and sometimes generate unphysical atom placements with steric clashes and implausible torsion angles. This work addresses both issues to build a fast, transferable, and reliable generative backmapping tool for CG protein representations. We achieve generalization and reliability through a combined set of innovations: representation based on internal coordinates; an equivariant encoder/prior; a custom loss function that helps ensure local structure, global structure, and physical constraints; and expert curation of high-quality out-of-equilibrium protein data for training. Our results pave the way for out-of-the-box backmapping of coarse-grained simulations for arbitrary proteins.
https://proceedings.mlr.press/v202/yang23f.html
https://proceedings.mlr.press/v202/yang23f/yang23f.pdf
https://openreview.net/forum?id=K53zoOWF8g
Data Poisoning Attacks Against Multimodal Encoders
https://proceedings.mlr.press/v202/yang23f.html
Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang
https://proceedings.mlr.press/v202/yang23f.html
ICML 2023
Recently, the newly emerged multimodal models, which leverage both visual and linguistic modalities to train powerful encoders, have gained increasing attention. However, learning from a large-scale unlabeled dataset also exposes the model to the risk of potential poisoning attacks, whereby the adversary aims to perturb the model's training data to trigger malicious behaviors in it. In contrast to previous work, which only poisons the visual modality, we take the first step toward studying poisoning attacks against multimodal models in both the visual and linguistic modalities. Specifically, we focus on answering two questions: (1) Is the linguistic modality also vulnerable to poisoning attacks? and (2) Which modality is most vulnerable? To answer the two questions, we propose three types of poisoning attacks against multimodal models. Extensive evaluations on different datasets and model architectures show that all three attacks can achieve significant attack performance while maintaining model utility in both visual and linguistic modalities. Furthermore, we observe that the poisoning effect differs between different modalities. To mitigate the attacks, we propose both pre-training and post-training defenses. We empirically show that both defenses can significantly reduce the attack performance while preserving the model's utility. Our code is available at https://github.com/zqypku/mm_poison/.
https://proceedings.mlr.press/v202/yang23g.html
https://proceedings.mlr.press/v202/yang23g/yang23g.pdf
https://openreview.net/forum?id=ASOCqTnWIY
Towards Sustainable Learning: Coresets for Data-efficient Deep Learning
https://proceedings.mlr.press/v202/yang23g.html
Yu Yang, Hao Kang, Baharan Mirzasoleiman
https://proceedings.mlr.press/v202/yang23g.html
ICML 2023
To improve the efficiency and sustainability of learning deep models, we propose CREST, the first scalable framework with rigorous theoretical guarantees to identify the most valuable examples for training non-convex models, particularly deep networks. To guarantee convergence to a stationary point of a non-convex function, CREST models the non-convex loss as a series of quadratic functions and extracts a coreset for each quadratic sub-region. In addition, to ensure faster convergence of stochastic gradient methods such as (mini-batch) SGD, CREST iteratively extracts multiple mini-batch coresets from larger random subsets of training data, to ensure nearly-unbiased gradients with small variances. Finally, to further improve scalability and efficiency, CREST identifies and excludes the examples that are learned from the coreset selection pipeline. Our extensive experiments on several deep networks trained on vision and NLP datasets, including CIFAR-10, CIFAR-100, TinyImageNet, and SNLI, confirm that CREST speeds up training deep networks on very large datasets, by 1.7x to 2.5x with minimal loss in performance. By analyzing the learning difficulty of the subsets selected by CREST, we show that deep models benefit the most by learning from subsets of increasing difficulty levels.
https://proceedings.mlr.press/v202/yang23h.html
https://proceedings.mlr.press/v202/yang23h/yang23h.pdf
https://openreview.net/forum?id=uSF5isjdSQ
Improving Adversarial Robustness by Putting More Regularizations on Less Robust Samples
https://proceedings.mlr.press/v202/yang23h.html
Dongyoon Yang, Insung Kong, Yongdai Kim
https://proceedings.mlr.press/v202/yang23h.html
ICML 2023
Adversarial training, which aims to enhance robustness against adversarial attacks, has received much attention because human-imperceptible perturbations of data can easily be generated to deceive a given deep neural network. In this paper, we propose a new adversarial training algorithm that is theoretically well motivated and empirically superior to other existing algorithms. A novel feature of the proposed algorithm is to apply more regularization to data vulnerable to adversarial attacks than other existing regularization algorithms do. Theoretically, we show that our algorithm can be understood as an algorithm of minimizing a newly derived upper bound of the robust risk. Numerical experiments illustrate that our proposed algorithm improves the generalization (accuracy on examples) and robustness (accuracy on adversarial attacks) simultaneously to achieve state-of-the-art performance.
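A minimal Python sketch of the core idea, weighting a robustness regularizer more heavily on vulnerable samples, follows. The TRADES-style KL regularizer and the use of clean-prediction confidence as the vulnerability proxy are illustrative assumptions, not the paper's exact construction.

import torch
import torch.nn.functional as F

def weighted_robust_loss(logits_clean, logits_adv, targets, beta=6.0):
    # Cross-entropy plus a per-sample-weighted KL regularizer.
    # Samples the model is less confident about (a proxy for vulnerability
    # here; the paper's actual criterion may differ) get larger weights.
    ce = F.cross_entropy(logits_clean, targets)
    p_clean = F.softmax(logits_clean, dim=1)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1), p_clean,
                  reduction="none").sum(dim=1)
    with torch.no_grad():
        conf = p_clean.gather(1, targets.unsqueeze(1)).squeeze(1)
        weights = 1.0 - conf  # less confident (less robust) -> more regularization
    return ce + beta * (weights * kl).mean()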
https://proceedings.mlr.press/v202/yang23i.html
https://proceedings.mlr.press/v202/yang23i/yang23i.pdf
https://openreview.net/forum?id=sikdq1zHiX
Improving Adversarial Robustness of Deep Equilibrium Models with Explicit Regulations Along the Neural Dynamics
https://proceedings.mlr.press/v202/yang23i.html
Zonghan Yang, Peng Li, Tianyu Pang, Yang Liu
https://proceedings.mlr.press/v202/yang23i.html
ICML 2023
Deep equilibrium (DEQ) models replace the multiple-layer stacking of conventional deep networks with a fixed-point iteration of a single-layer transformation. Having been demonstrated to be competitive in a variety of real-world scenarios, the adversarial robustness of general DEQs becomes increasingly crucial for their reliable deployment. Existing works improve the robustness of general DEQ models with the widely-used adversarial training (AT) framework, but they fail to exploit the unique structure of DEQ models. To this end, we interpret DEQs through the lens of neural dynamics and find that AT under-regulates intermediate states. Moreover, the intermediate states typically yield predictions with high entropy. Informed by the correlation between the entropy of dynamical systems and their stability properties, we propose reducing prediction entropy by progressively updating inputs along the neural dynamics. During AT, we also utilize random intermediate states to compute the loss function. Our methods regulate the neural dynamics of DEQ models in this manner. Extensive experiments demonstrate that our methods substantially increase the robustness of DEQ models and even outperform the strong deep network baselines.
https://proceedings.mlr.press/v202/yang23j.html
https://proceedings.mlr.press/v202/yang23j/yang23j.pdf
https://openreview.net/forum?id=LihAbUvtLG
Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
https://proceedings.mlr.press/v202/yang23j.html
Yu Yang, Besmira Nushi, Hamid Palangi, Baharan Mirzasoleiman
https://proceedings.mlr.press/v202/yang23j.html
ICML 2023
Spurious correlations that degrade model generalization or lead the model to be right for the wrong reasons are one of the main robustness concerns for real-world deployments. However, mitigating these correlations during pre-training for large-scale models can be costly and impractical, particularly for those without access to high-performance computing resources. This paper proposes a novel approach to address spurious correlations during fine-tuning for a given domain of interest. With a focus on multi-modal models (e.g., CLIP), the proposed method leverages different modalities in these models to detect and explicitly set apart spurious attributes from the affected class, achieved through a multi-modal contrastive loss function that expresses spurious relationships through language. Our experimental results and in-depth visualizations on CLIP show that such an intervention can effectively i) improve the model's accuracy when spurious attributes are not present, and ii) direct the model's activation maps towards the actual class rather than the spurious attribute when present. In particular, on the Waterbirds dataset, our algorithm achieved a worst-group accuracy 23% higher than ERM on CLIP with a ResNet-50 backbone, and 32% higher on CLIP with a ViT backbone, while maintaining the same average accuracy as ERM.
https://proceedings.mlr.press/v202/yang23k.html
https://proceedings.mlr.press/v202/yang23k/yang23k.pdf
https://openreview.net/forum?id=Jrc3VDPMt0
A theory of representation learning gives a deep generalisation of kernel methods
https://proceedings.mlr.press/v202/yang23k.html
Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison
https://proceedings.mlr.press/v202/yang23k.html
ICML 2023
The successes of modern deep machine learning methods are founded on their ability to transform inputs across multiple layers to build good high-level representations. It is therefore critical to understand this process of representation learning. However, standard theoretical approaches (formally NNGPs) involving infinite width limits eliminate representation learning. We therefore develop a new infinite width limit, the Bayesian representation learning limit, that exhibits representation learning mirroring that in finite-width models, yet at the same time, retains some of the simplicity of standard infinite-width limits. In particular, we show that Deep Gaussian processes (DGPs) in the Bayesian representation learning limit have exactly multivariate Gaussian posteriors, and the posterior covariances can be obtained by optimizing an interpretable objective combining a log-likelihood to improve performance with a series of KL-divergences which keep the posteriors close to the prior. We confirm these results experimentally in wide but finite DGPs. Next, we introduce the possibility of using this limit and objective as a flexible, deep generalisation of kernel methods, that we call deep kernel machines (DKMs). Like most naive kernel methods, DKMs scale cubically in the number of datapoints. We therefore use methods from the Gaussian process inducing point literature to develop a sparse DKM that scales linearly in the number of datapoints. Finally, we extend these approaches to NNs (which have non-Gaussian posteriors) in the Appendices.
https://proceedings.mlr.press/v202/yang23l.html
https://proceedings.mlr.press/v202/yang23l/yang23l.pdf
https://openreview.net/forum?id=5yWTeqwv8t
Efficient Algorithms for Exact Graph Matching on Correlated Stochastic Block Models with Constant Correlation
https://proceedings.mlr.press/v202/yang23l.html
Joonhyuk Yang, Dongpil Shin, Hye Won Chung
https://proceedings.mlr.press/v202/yang23l.html
ICML 2023
We consider the problem of graph matching, or learning vertex correspondence, between two correlated stochastic block models (SBMs). The graph matching problem arises in various fields, including computer vision, natural language processing and bioinformatics, and in particular, matching graphs with inherent community structure has significance related to de-anonymization of correlated social networks. Compared to the correlated Erdos-Renyi (ER) model, where various efficient algorithms have been developed, among which a few algorithms have been proven to achieve the exact matching with constant edge correlation, no low-order polynomial algorithm has been known to achieve exact matching for the correlated SBMs with constant correlation. In this work, we propose an efficient algorithm for matching graphs with community structure, based on the comparison between partition trees rooted from each vertex, by extending the idea of Mao et al. (2021) to graphs with communities. The partition tree divides the large neighborhoods of each vertex into disjoint subsets using their edge statistics to different communities. Our algorithm is the first low-order polynomial-time algorithm achieving exact matching between two correlated SBMs with high probability in dense graphs.
https://proceedings.mlr.press/v202/yang23m.html
https://proceedings.mlr.press/v202/yang23m/yang23m.pdf
https://openreview.net/forum?id=lwodnXJzu6
Are Neurons Actually Collapsed? On the Fine-Grained Structure in Neural Representations
https://proceedings.mlr.press/v202/yang23m.html
Yongyi Yang, Jacob Steinhardt, Wei Hu
https://proceedings.mlr.press/v202/yang23m.html
ICML 2023
Recent work has observed an intriguing "Neural Collapse" phenomenon in well-trained neural networks, where the last-layer representations of training samples with the same label collapse into each other. This appears to suggest that the last-layer representations are completely determined by the labels, and do not depend on the intrinsic structure of the input distribution. We provide evidence that this is not a complete description, and that the apparent collapse hides important fine-grained structure in the representations. Specifically, even when representations apparently collapse, the small amount of remaining variation can still faithfully and accurately capture the intrinsic structure of the input distribution. As an example, if we train on CIFAR-10 using only 5 coarse-grained labels (by combining two classes into one super-class) until convergence, we can reconstruct the original 10-class labels from the learned representations via unsupervised clustering. The reconstructed labels achieve 93% accuracy on the CIFAR-10 test set, nearly matching the normal CIFAR-10 accuracy for the same architecture. We also provide an initial theoretical result showing the fine-grained representation structure in a simplified synthetic setting. Our results show concretely how the structure of input data can play a significant role in determining the fine-grained structure of neural representations, going beyond what Neural Collapse predicts.
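The label-reconstruction probe described above can be sketched roughly as follows in Python; the choice of k-means and of two sub-clusters per coarse class are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def recover_fine_labels(features: np.ndarray, coarse_labels: np.ndarray,
                        subclusters_per_class: int = 2) -> np.ndarray:
    # Cluster last-layer features within each coarse class and treat the
    # cluster assignments as recovered fine-grained labels.
    fine = np.zeros(len(features), dtype=int)
    for c in np.unique(coarse_labels):
        idx = np.where(coarse_labels == c)[0]
        km = KMeans(n_clusters=subclusters_per_class, n_init=10).fit(features[idx])
        fine[idx] = c * subclusters_per_class + km.labels_
    return fine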
https://proceedings.mlr.press/v202/yang23n.html
https://proceedings.mlr.press/v202/yang23n/yang23n.pdf
https://openreview.net/forum?id=6bBla9LAJ2
Generative Adversarial Symmetry Discovery
https://proceedings.mlr.press/v202/yang23n.html
Jianke Yang, Robin Walters, Nima Dehmamy, Rose Yu
https://proceedings.mlr.press/v202/yang23n.html
ICML 2023
Despite the success of equivariant neural networks in scientific applications, they require knowing the symmetry group a priori. However, it may be difficult to know which symmetry to use as an inductive bias in practice. Enforcing the wrong symmetry could even hurt the performance. In this paper, we propose a framework, LieGAN, to automatically discover equivariances from a dataset using a paradigm akin to generative adversarial training. Specifically, a generator learns a group of transformations applied to the data, which preserve the original distribution and fool the discriminator. LieGAN represents symmetry as an interpretable Lie algebra basis and can discover various symmetries such as the rotation group $\mathrm{SO}(n)$ and the restricted Lorentz group $\mathrm{SO}(1,3)^+$ in trajectory prediction and top-quark tagging tasks. The learned symmetry can also be readily used in several existing equivariant neural networks to improve accuracy and generalization in prediction.
https://proceedings.mlr.press/v202/yang23o.html
https://proceedings.mlr.press/v202/yang23o/yang23o.pdf
https://openreview.net/forum?id=XiGijCSGjx
Boosting Offline Reinforcement Learning with Action Preference Query
https://proceedings.mlr.press/v202/yang23o.html
Qisen Yang, Shenzhi Wang, Matthieu Gaetan Lin, Shiji Song, Gao Huang
https://proceedings.mlr.press/v202/yang23o.html
ICML 2023
Training practical agents usually involves offline and online reinforcement learning (RL) to balance the policy's performance and interaction costs. In particular, online fine-tuning has become a commonly used method to correct the erroneous estimates of out-of-distribution data learned in the offline training phase. However, even limited online interactions can be inaccessible or catastrophic for high-stake scenarios like healthcare and autonomous driving. In this work, we introduce an interaction-free training scheme dubbed Offline-with-Action-Preferences (OAP). The main insight is that, compared to online fine-tuning, querying the preferences between pre-collected and learned actions can be equally or even more helpful to the erroneous estimate problem. By adaptively encouraging or suppressing the policy constraint according to action preferences, OAP could distinguish overestimation from beneficial policy improvement and thus attains a more accurate evaluation of unseen data. Theoretically, we prove a lower bound of the behavior policy's performance improvement brought by OAP. Moreover, comprehensive experiments on the D4RL benchmark and state-of-the-art algorithms demonstrate that OAP yields higher (29% on average) scores, especially on challenging AntMaze tasks (98% higher).
https://proceedings.mlr.press/v202/yang23p.html
https://proceedings.mlr.press/v202/yang23p/yang23p.pdf
https://openreview.net/forum?id=DXWm3vnG6P
Towards Controlled Data Augmentations for Active Learning
https://proceedings.mlr.press/v202/yang23p.html
Jianan Yang, Haobo Wang, Sai Wu, Gang Chen, Junbo Zhao
https://proceedings.mlr.press/v202/yang23p.html
ICML 2023
The mission of active learning is to identify the most valuable data samples, thus attaining decent performance with much fewer samples. The data augmentation techniques seem straightforward yet promising to enhance active learning by extending the exploration of the input space, which helps locate more valuable samples. In this work, we thoroughly study the coupling of data augmentation and active learning, thereby proposing Controllable Augmentation ManiPulator for Active Learning (CAMPAL). In contrast to the few prior works that touched on this line, CAMPAL emphasizes a purposeful, tightened, and better-controlled integration of data augmentation into active learning in three folds: (i)-carefully designed augmentation policies applied separately on labeled and unlabeled data pools; (ii)-controlled and quantifiably optimizable augmentation strengths; (iii)-full and flexible coverage for most (if not all) active learning schemes. Theories are proposed and associated with the development of key components in CAMPAL. Through extensive empirical experiments, we bring the performance of active learning methods to a new level: an absolute performance boost of 16.99% on CIFAR-10 and 12.25% on SVHN with 1,000 annotated samples. Codes are available at https://github.com/jnzju/CAMPAL.
https://proceedings.mlr.press/v202/yang23q.html
https://proceedings.mlr.press/v202/yang23q/yang23q.pdf
https://openreview.net/forum?id=UrQySwOk4q
What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?
https://proceedings.mlr.press/v202/yang23q.html
Rui Yang, Lin Yong, Xiaoteng Ma, Hao Hu, Chongjie Zhang, Tong Zhang
https://proceedings.mlr.press/v202/yang23q.html
ICML 2023
Offline goal-conditioned RL (GCRL) offers a way to train general-purpose agents from fully offline datasets. In addition to being conservative within the dataset, the generalization ability to achieve unseen goals is another fundamental challenge for offline GCRL. However, to the best of our knowledge, this problem has not been well studied yet. In this paper, we study out-of-distribution (OOD) generalization of offline GCRL both theoretically and empirically to identify factors that are important. In a number of experiments, we observe that weighted imitation learning enjoys better generalization than pessimism-based offline RL methods. Based on this insight, we derive a theory for OOD generalization, which characterizes several important design choices. We then propose a new offline GCRL method, Generalizable Offline goAl-condiTioned RL (GOAT), by combining the findings from our theoretical and empirical studies. On a new benchmark containing 9 independent identically distributed (IID) tasks and 17 OOD tasks, GOAT outperforms current state-of-the-art methods by a large margin.
https://proceedings.mlr.press/v202/yang23r.html
https://proceedings.mlr.press/v202/yang23r/yang23r.pdf
https://openreview.net/forum?id=jTfTEdIPYu
Neural Prediction Errors enable Analogical Visual Reasoning in Human Standard Intelligence Tests
https://proceedings.mlr.press/v202/yang23r.html
Lingxiao Yang, Hongzhi You, Zonglei Zhen, Dahui Wang, Xiaohong Wan, Xiaohua Xie, Ru-Yuan Zhang
https://proceedings.mlr.press/v202/yang23r.html
ICML 2023
Deep neural networks have long been criticized for lacking the ability to perform analogical visual reasoning. Here, we propose a neural network model to solve Raven’s Progressive Matrices (RPM) - one of the standard intelligence tests in human psychology. Specifically, we design a reasoning block based on the well-known concept of prediction error (PE) in neuroscience. Our reasoning block uses convolution to extract abstract rules from high-level visual features of the 8 context images and generates the features of a predicted answer. PEs are then calculated between the predicted features and those of the 8 candidate answers, and are then passed to the next stage. We further integrate our novel reasoning blocks into a residual network and build a new Predictive Reasoning Network (PredRNet). Extensive experiments show that our proposed PredRNet achieves state-of-the-art average performance on several important RPM benchmarks. PredRNet also shows good generalization abilities in a variety of out-of-distribution scenarios and other visual reasoning tasks. Most importantly, our PredRNet forms low-dimensional representations of abstract rules and minimizes hierarchical prediction errors during model training, supporting the critical role of PE minimization in visual reasoning. Our work highlights the potential of using neuroscience theories to solve abstract visual reasoning problems in artificial intelligence. The code is available at https://github.com/ZjjConan/AVR-PredRNet.
https://proceedings.mlr.press/v202/yang23s.html
https://proceedings.mlr.press/v202/yang23s/yang23s.pdf
https://openreview.net/forum?id=wwR38qFs3i
Change is Hard: A Closer Look at Subpopulation Shift
https://proceedings.mlr.press/v202/yang23s.html
Yuzhe Yang, Haoran Zhang, Dina Katabi, Marzyeh Ghassemi
https://proceedings.mlr.press/v202/yang23s.html
ICML 2023
Machine learning models often perform poorly on subgroups that are underrepresented in the training data. Yet, little is understood on the variation in mechanisms that cause subpopulation shifts, and how algorithms generalize across such diverse shifts at scale. In this work, we provide a fine-grained analysis of subpopulation shift. We first propose a unified framework that dissects and explains common shifts in subgroups. We then establish a comprehensive benchmark of 20 state-of-the-art algorithms evaluated on 12 real-world datasets in vision, language, and healthcare domains. With results obtained from training over 10,000 models, we reveal intriguing observations for future progress in this space. First, existing algorithms only improve subgroup robustness over certain types of shifts but not others. Moreover, while current algorithms rely on group-annotated validation data for model selection, we find that a simple selection criterion based on worst-class accuracy is surprisingly effective even without any group information. Finally, unlike existing works that solely aim to improve worst-group accuracy (WGA), we demonstrate the fundamental tradeoff between WGA and other important metrics, highlighting the need to carefully choose testing metrics. Code and data are available at: https://github.com/YyzHarry/SubpopBench.
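The group-annotation-free selection criterion mentioned above is simple to state in code; this Python sketch assumes plain arrays of labels and predictions.

import numpy as np

def worst_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Minimum per-class accuracy, usable as a model-selection score
    # without any group annotations.
    accs = []
    for c in np.unique(y_true):
        mask = y_true == c
        accs.append((y_pred[mask] == c).mean())
    return float(min(accs))

# Example use for model selection over a list of candidate checkpoints:
# best = max(checkpoints, key=lambda m: worst_class_accuracy(y_val, m.predict(x_val)))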
https://proceedings.mlr.press/v202/yang23t.html
https://proceedings.mlr.press/v202/yang23t/yang23t.pdf
https://openreview.net/forum?id=IqI8074rFu
Continual Task Allocation in Meta-Policy Network via Sparse Prompting
https://proceedings.mlr.press/v202/yang23t.html
Yijun Yang, Tianyi Zhou, Jing Jiang, Guodong Long, Yuhui Shi
https://proceedings.mlr.press/v202/yang23t.html
ICML 2023
How to train a generalizable meta-policy by continually learning a sequence of tasks? It is a natural human skill yet challenging to achieve by current reinforcement learning: the agent is expected to quickly adapt to new tasks (plasticity) meanwhile retaining the common knowledge from previous tasks (stability). We address it by "Continual Task Allocation via Sparse Prompting (CoTASP)", which learns over-complete dictionaries to produce sparse masks as prompts extracting a sub-network for each task from a meta-policy network. CoTASP trains a policy for each task by optimizing the prompts and the sub-network weights alternatively. The dictionary is then updated to align the optimized prompts with tasks’ embedding, thereby capturing tasks’ semantic correlations. Hence, relevant tasks share more neurons in the meta-policy network due to similar prompts while cross-task interference causing forgetting is effectively restrained. Given a meta-policy and dictionaries trained on previous tasks, new task adaptation reduces to highly efficient sparse prompting and sub-network finetuning. In experiments, CoTASP achieves a promising plasticity-stability trade-off without storing or replaying any past tasks’ experiences. It outperforms existing continual and multi-task RL methods on all seen tasks, forgetting reduction, and generalization to unseen tasks.
https://proceedings.mlr.press/v202/yang23u.html
https://proceedings.mlr.press/v202/yang23u/yang23u.pdf
https://openreview.net/forum?id=9CZZ8tIhSv
Hyperbolic Representation Learning: Revisiting and Advancing
https://proceedings.mlr.press/v202/yang23u.html
Menglin Yang, Min Zhou, Rex Ying, Yankai Chen, Irwin King
https://proceedings.mlr.press/v202/yang23u.html
ICML 2023
The non-Euclidean geometry of hyperbolic spaces has recently garnered considerable attention in the realm of representation learning. Current endeavors in hyperbolic representation largely presuppose that the underlying hierarchies can be automatically inferred and preserved through the adaptive optimization process. This assumption, however, is questionable and requires further validation. In this work, we first introduce a position-tracking mechanism to scrutinize existing prevalent hyperbolic models, revealing that the learned representations are sub-optimal and unsatisfactory. To address this, we propose a simple yet effective method, hyperbolic informed embedding (HIE), by incorporating cost-free hierarchical information deduced from the hyperbolic distance of the node to the origin (i.e., induced hyperbolic norm) to advance existing hyperbolic models. The proposed method HIE is both task-agnostic and model-agnostic, enabling its seamless integration with a broad spectrum of models and tasks. Extensive experiments across various models and different tasks demonstrate the versatility and adaptability of the proposed method. Remarkably, our method achieves a remarkable improvement of up to 21.4% compared to the competing baselines.
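The hierarchical signal used by HIE, the hyperbolic distance of a node embedding to the origin, has a closed form. The Python sketch below assumes the Poincare ball model with curvature -1, which is one common choice rather than a detail confirmed by the abstract.

import torch

def hyperbolic_norm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Distance of each embedding to the origin in the Poincare ball
    # (curvature -1): d(0, x) = 2 * artanh(||x||).
    norm = x.norm(dim=-1).clamp(max=1.0 - eps)
    return 2.0 * torch.atanh(norm)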
https://proceedings.mlr.press/v202/yao23a.html
https://proceedings.mlr.press/v202/yao23a/yao23a.pdf
https://openreview.net/forum?id=HLERf0mkEF
Which is Better for Learning with Noisy Labels: The Semi-supervised Method or Modeling Label Noise?
https://proceedings.mlr.press/v202/yao23a.html
Yu Yao, Mingming Gong, Yuxuan Du, Jun Yu, Bo Han, Kun Zhang, Tongliang Liu
https://proceedings.mlr.press/v202/yao23a.html
ICML 2023
In real life, accurately annotating large-scale datasets is sometimes difficult. Datasets used for training deep learning models are likely to contain label noise. To make use of a dataset containing label noise, two typical methods have been proposed. One is to employ the semi-supervised method by exploiting labeled confident examples and unlabeled unconfident examples. The other is to model label noise and design statistically consistent classifiers. A natural question remains unsolved: which one should be used for a specific real-world application? In this paper, we answer the question from the perspective of the causal data generative process. Specifically, the performance of the semi-supervised method depends heavily on the data generative process, while the label-noise modeling method is not influenced by the generation process. For example, for a given dataset, if it has a causal generative structure where the features cause the label, the semi-supervised method would not be helpful. When the causal structure is unknown, we provide an intuitive method to discover the causal structure for a given dataset containing label noise.
https://proceedings.mlr.press/v202/yao23b.html
https://proceedings.mlr.press/v202/yao23b/yao23b.pdf
https://openreview.net/forum?id=XAK3238obr
How Bad is Top-$K$ Recommendation under Competing Content Creators?
https://proceedings.mlr.press/v202/yao23b.html
Fan Yao, Chuanhao Li, Denis Nekipelov, Hongning Wang, Haifeng Xu
https://proceedings.mlr.press/v202/yao23b.html
ICML 2023
This study explores the impact of content creators’ competition on user welfare in recommendation platforms, as well as the long-term dynamics of relevance-driven recommendations. We establish a model of creator competition, under the setting where the platform uses a top-$K$ recommendation policy, user decisions are guided by the Random Utility model, and creators, in absence of explicit utility functions, employ arbitrary no-regret learning algorithms for strategy updates. We study the user welfare guarantee through the lens of Price of Anarchy and show that the fraction of user welfare loss due to creator competition is always upper bounded by a small constant depending on $K$ and randomness in user decisions; we also prove the tightness of this bound. Our result discloses an intrinsic merit of the relevance-driven recommendation policy, as long as users’ decisions involve randomness and the platform provides reasonably many alternatives to its users.
https://proceedings.mlr.press/v202/yao23c.html
https://proceedings.mlr.press/v202/yao23c/yao23c.pdf
https://openreview.net/forum?id=mernbGTe24
MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale Training of Physics-informed Neural Networks
https://proceedings.mlr.press/v202/yao23c.html
Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu
https://proceedings.mlr.press/v202/yao23c.html
ICML 2023
Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs) in various fields by minimizing a weighted sum of PDE loss and boundary loss. However, there are several critical challenges in the training of PINNs, including the lack of theoretical frameworks and the imbalance between PDE loss and boundary loss. In this paper, we present an analysis of second-order non-homogeneous PDEs, which are classified into three categories and applicable to various common problems. We also characterize the connections between the training loss and actual error, guaranteeing convergence under mild conditions. The theoretical analysis inspires us to further propose MultiAdam, a scale-invariant optimizer that leverages gradient momentum to parameter-wisely balance the loss terms. Extensive experiment results on multiple problems from different physical domains demonstrate that our MultiAdam solver can improve the predictive accuracy by 1-2 orders of magnitude compared with strong baselines.
https://proceedings.mlr.press/v202/yardim23a.html
https://proceedings.mlr.press/v202/yardim23a/yardim23a.pdf
https://openreview.net/forum?id=AwxfYvdPZV
Policy Mirror Ascent for Efficient and Independent Learning in Mean Field Games
https://proceedings.mlr.press/v202/yardim23a.html
Batuhan Yardim, Semih Cayci, Matthieu Geist, Niao He
https://proceedings.mlr.press/v202/yardim23a.html
ICML 2023
Mean-field games have been used as a theoretical tool to obtain an approximate Nash equilibrium for symmetric and anonymous $N$-player games. However, limiting applicability, existing theoretical results assume variations of a “population generative model”, which allows arbitrary modifications of the population distribution by the learning algorithm. Moreover, learning algorithms typically work on abstract simulators with population instead of the $N$-player game. Instead, we show that $N$ agents running policy mirror ascent converge to the Nash equilibrium of the regularized game within $\widetilde{\mathcal{O}}(\varepsilon^{-2})$ samples from a single sample trajectory without a population generative model, up to a standard $\mathcal{O}(\frac{1}{\sqrt{N}})$ error due to the mean field. Taking a divergent approach from the literature, instead of working with the best-response map we first show that a policy mirror ascent map can be used to construct a contractive operator having the Nash equilibrium as its fixed point. We analyze single-path TD learning for $N$-agent games, proving sample complexity guarantees by only using a sample path from the $N$-agent simulator without a population generative model. Furthermore, we demonstrate that our methodology allows for independent learning by $N$ agents with finite sample guarantees.
https://proceedings.mlr.press/v202/yasunaga23a.html
https://proceedings.mlr.press/v202/yasunaga23a/yasunaga23a.pdf
https://openreview.net/forum?id=VZ8bs0fwoO
Retrieval-Augmented Multimodal Language Modeling
https://proceedings.mlr.press/v202/yasunaga23a.html
Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-Tau Yih
https://proceedings.mlr.press/v202/yasunaga23a.html
ICML 2023
Recent multimodal models such as DALL-E and CM3 have achieved remarkable progress in text-to-image and image-to-text generation. However, these models store all their knowledge (e.g., the appearance of the Eiffel Tower) in the model parameters, requiring increasingly larger models and training data to capture more knowledge. To integrate knowledge in a more scalable and modular way, we propose a retrieval-augmented multimodal model, which enables a base multimodal model (generator) to refer to relevant text and images fetched by a retriever from external memory (e.g., documents on the web). Specifically, for the retriever, we use a pretrained CLIP, and for the generator, we train a CM3 Transformer on the LAION dataset. Our resulting model, named Retrieval-Augmented CM3 (RA-CM3), is the first multimodal model that can retrieve and generate both text and images. We show that RA-CM3 significantly outperforms baseline multimodal models such as DALL-E and CM3 on both image and caption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), while requiring much less compute for training ($<$30% of DALL-E). Moreover, we show that RA-CM3 exhibits novel capabilities such as faithful image generation and multimodal in-context learning (e.g., image generation from demonstrations).
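The retrieval step of such a retrieval-augmented generator reduces to a nearest-neighbor search over joint image-text embeddings. The Python cosine-similarity sketch below only illustrates that step and leaves out the CLIP encoders and the CM3 generator.

import numpy as np

def retrieve_top_k(query_emb: np.ndarray, memory_embs: np.ndarray, k: int = 4):
    # Return indices of the k most similar memory items by cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    scores = m @ q
    return np.argsort(-scores)[:k]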
https://proceedings.mlr.press/v202/ye23a.html
https://proceedings.mlr.press/v202/ye23a/ye23a.pdf
https://openreview.net/forum?id=ZvKWki48yP
On the Power of Pre-training for Generalization in RL: Provable Benefits and Hardness
https://proceedings.mlr.press/v202/ye23a.html
Haotian Ye, Xiaoyu Chen, Liwei Wang, Simon Shaolei Du
https://proceedings.mlr.press/v202/ye23a.html
ICML 2023
Generalization in Reinforcement Learning (RL) aims to train an agent during training that generalizes to the target environment. In this work, we first point out that RL generalization is fundamentally different from the generalization in supervised learning, and fine-tuning on the target environment is necessary for good test performance. Therefore, we seek to answer the following question: how much can we expect pre-training over training environments to be helpful for efficient and effective fine-tuning? On one hand, we give a surprising result showing that asymptotically, the improvement from pre-training is at most a constant factor. On the other hand, we show that pre-training can be indeed helpful in the non-asymptotic regime by designing a policy collection-elimination (PCE) algorithm and proving a distribution-dependent regret bound that is independent of the state-action space. We hope our theoretical results can provide insight towards understanding pre-training and generalization in RL.
https://proceedings.mlr.press/v202/ye23b.html
https://proceedings.mlr.press/v202/ye23b/ye23b.pdf
https://openreview.net/forum?id=33fj5Ph3ot
Personalized Federated Learning with Inferred Collaboration Graphs
https://proceedings.mlr.press/v202/ye23b.html
Rui Ye, Zhenyang Ni, Fangzhao Wu, Siheng Chen, Yanfeng Wang
https://proceedings.mlr.press/v202/ye23b.html
ICML 2023
Personalized federated learning (FL) aims to collaboratively train a personalized model for each client. Previous methods do not adaptively determine who to collaborate at a fine-grained level, making them difficult to handle diverse data heterogeneity levels and those cases where malicious clients exist. To address this issue, our core idea is to learn a collaboration graph, which models the benefits from each pairwise collaboration and allocates appropriate collaboration strengths. Based on this, we propose a novel personalized FL algorithm, pFedGraph, which consists of two key modules: (1) inferring the collaboration graph based on pairwise model similarity and dataset size at server to promote fine-grained collaboration and (2) optimizing local model with the assistance of aggregated model at client to promote personalization. The advantage of pFedGraph is flexibly adaptive to diverse data heterogeneity levels and model poisoning attacks, as the proposed collaboration graph always pushes each client to collaborate more with similar and beneficial clients. Extensive experiments show that pFedGraph consistently outperforms the other $14$ baseline methods across various heterogeneity levels and multiple cases where malicious clients exist. Code will be available at https://github.com/MediaBrain-SJTU/pFedGraph.
https://proceedings.mlr.press/v202/ye23c.html
https://proceedings.mlr.press/v202/ye23c/ye23c.pdf
https://openreview.net/forum?id=AXer5BvRn1
Compositional Exemplars for In-context Learning
https://proceedings.mlr.press/v202/ye23c.html
Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, Lingpeng Kong
https://proceedings.mlr.press/v202/ye23c.html
ICML 2023
Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL) ability, where the model learns to do an unseen task simply by conditioning on a prompt consisting of input-output examples as demonstration, without any parameter updates. The performance of ICL is highly dominated by the quality of the selected in-context examples. However, previous selection methods are mostly based on simple heuristics, leading to sub-optimal performance. In this work, we systematically formulate in-context example selection as a subset selection problem, and optimize it in an end-to-end fashion. We propose CEIL (Compositional Exemplars for In-context Learning), which is instantiated by Determinantal Point Processes (DPPs) to model the interaction between the given input and in-context examples, and optimized through carefully-designed contrastive learning to obtain preference from LMs. We validate CEIL on 12 classification and generation datasets from 7 distinct NLP tasks, including sentiment analysis, paraphrase detection, natural language inference, commonsense reasoning, open-domain question answering, code generation and semantic parsing. Extensive experiments demonstrate the effectiveness, transferability, and compositionality of CEIL, shedding new light on in-context learning. Our code is released at https://github.com/HKUNLP/icl-ceil.
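Once a DPP kernel over candidate exemplars is available, subset selection is typically done with a greedy MAP heuristic. The Python sketch below shows that generic step and treats the kernel as given; in CEIL, the kernel itself is what is learned contrastively.

import numpy as np

def greedy_dpp_select(kernel: np.ndarray, k: int):
    # Greedily pick k items approximately maximizing log det of the
    # corresponding kernel submatrix (standard DPP MAP heuristic).
    selected = []
    remaining = list(range(len(kernel)))
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in remaining:
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(kernel[np.ix_(idx, idx)])
            val = logdet if sign > 0 else -np.inf
            if val > best_val:
                best, best_val = j, val
        selected.append(best)
        remaining.remove(best)
    return selected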
https://proceedings.mlr.press/v202/ye23d.html
https://proceedings.mlr.press/v202/ye23d/ye23d.pdf
https://openreview.net/forum?id=9xMuDDbWIW
Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes
https://proceedings.mlr.press/v202/ye23d.html
Chenlu Ye, Wei Xiong, Quanquan Gu, Tong Zhang
https://proceedings.mlr.press/v202/ye23d.html
ICML 2023
Despite the significant interest and progress in reinforcement learning (RL) problems with adversarial corruption, current works are either confined to the linear setting or lead to an undesired $\tilde{\mathcal O}(\sqrt{T}\zeta)$ regret bound, where $T$ is the number of rounds and $\zeta$ is the total amount of corruption. In this paper, we consider contextual bandits with general function approximation and propose a computationally efficient algorithm to achieve a regret of $\tilde{\mathcal O}(\sqrt{T}+\zeta)$. The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandits (He et al., 2022) and a new weighted estimator of uncertainty for the general function class. In contrast to the existing analysis for the sum of uncertainty that is heavily based on the linear structure, we develop a novel technique to control the sum of weighted uncertainty, thus establishing the final regret bound. We then generalize our algorithm to the episodic MDP and first achieve an additive dependence on the corruption level $\zeta$ in the scenario of general function approximation. Notably, our algorithms achieve regret bounds that either nearly match the lower bound or improve the performance of existing methods for all the corruption levels in both known and unknown $\zeta$ cases.
https://proceedings.mlr.press/v202/ye23e.html
https://proceedings.mlr.press/v202/ye23e/ye23e.pdf
https://openreview.net/forum?id=tX7ajV69wt
GNN&GBDT-Guided Fast Optimizing Framework for Large-scale Integer Programming
https://proceedings.mlr.press/v202/ye23e.html
Huigen Ye, Hua Xu, Hongyan Wang, Chengming Wang, Yu Jiang
https://proceedings.mlr.press/v202/ye23e.html
ICML 2023
The latest two-stage optimization framework based on graph neural network (GNN) and large neighborhood search (LNS) is the most popular framework in solving large-scale integer programs (IPs). However, the framework cannot effectively use the embedding spatial information in the GNN and still highly relies on large-scale solvers in LNS, resulting in the scale of IPs being limited by the ability of the current solver and performance bottlenecks. To handle these issues, this paper presents a GNN&GBDT-guided fast optimizing framework for large-scale IPs that only uses a small-scale optimizer to solve large-scale IPs efficiently. Specifically, the proposed framework can be divided into three stages: Multi-task GNN Embedding to generate the embedding space, GBDT Prediction to effectively use the embedding spatial information, and Neighborhood Optimization to solve large-scale problems fast using the small-scale optimizer. Extensive experiments show that the proposed framework can solve IPs at the scale of millions and surpass SCIP and Gurobi in the specified wall-clock time using only a small-scale optimizer with 30% of the problem size. It also shows that the proposed framework can save 99% of running time in achieving the same solution quality as SCIP, which verifies the effectiveness and efficiency of the proposed framework in solving large-scale IPs.
https://proceedings.mlr.press/v202/ye23f.html
https://proceedings.mlr.press/v202/ye23f/ye23f.pdf
https://openreview.net/forum?id=cHJ1VuZorx
FedDisco: Federated Learning with Discrepancy-Aware Collaboration
https://proceedings.mlr.press/v202/ye23f.html
Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, Yanfeng Wang
https://proceedings.mlr.press/v202/ye23f.html
ICML 2023
This work considers the category distribution heterogeneity in federated learning. This issue is due to biased labeling preferences at multiple clients and is a typical setting of data heterogeneity. To alleviate this issue, most previous works consider either regularizing local models or fine-tuning the global model, while they ignore the adjustment of aggregation weights and simply assign weights based on the dataset size. However, based on our empirical observations and theoretical analysis, we find that the dataset size is not optimal and the discrepancy between local and global category distributions could be a beneficial and complementary indicator for determining aggregation weights. We thus propose a novel aggregation method, Federated Learning with Discrepancy-Aware Collaboration (FedDisco), whose aggregation weights not only involve both the dataset size and the discrepancy value, but also contribute to a tighter theoretical upper bound of the optimization error. FedDisco can promote utility and modularity in a communication- and computation-efficient way. Extensive experiments show that our FedDisco outperforms several state-of-the-art methods and can be easily incorporated with many existing methods to further enhance the performance. Our code will be available at https://github.com/MediaBrain-SJTU/FedDisco.
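The aggregation rule can be sketched in Python as a function of each client's dataset size and its category-distribution discrepancy from the global distribution. The exact combination used here (a thresholded linear rule with two constants) is an illustrative assumption, not necessarily the paper's precise formula.

import numpy as np

def disco_weights(sizes, local_dists, global_dist, a=0.5, b=0.1):
    # Reward dataset size, penalize distribution discrepancy, then normalize.
    sizes = np.asarray(sizes, dtype=float)
    sizes = sizes / sizes.sum()
    d = np.array([np.linalg.norm(ld - global_dist) for ld in local_dists])
    w = np.maximum(sizes - a * d + b, 0.0)
    return w / max(w.sum(), 1e-12)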
https://proceedings.mlr.press/v202/ye23g.html
https://proceedings.mlr.press/v202/ye23g/ye23g.pdf
https://openreview.net/forum?id=Brn6oCUVK5
Towards Quantum Machine Learning for Constrained Combinatorial Optimization: a Quantum QAP Solver
https://proceedings.mlr.press/v202/ye23g.html
Xinyu Ye, Ge Yan, Junchi Yan
https://proceedings.mlr.press/v202/ye23g.html
ICML 2023
Combinatorial optimization (CO) on the graph is a crucial but challenging research topic. Recent quantum algorithms provide a new perspective for solving CO problems and have the potential to demonstrate quantum advantage. The Quantum Approximate Optimization Algorithm (QAOA) is a well-known quantum heuristic for CO constructed from a parametric quantum circuit. However, QAOA is originally designed for unconstrained problems and the circuit parameters and solutions are jointly solved with time-consuming iterations. In this paper, we propose a novel quantum neural network (QNN) for learning CO problems in a supervised manner to achieve better and faster results. We focus on the Quadratic Assignment Problem (QAP) with matching constraints and the node permutation invariance property. To this end, a quantum neural network called QAP-QNN is devised to translate the QAP into a constrained vertex classification task. Moreover, we study two QAP tasks: Graph Matching and Traveling Salesman Problem on TorchQuantum simulators, and empirically show the effectiveness of our approach.
https://proceedings.mlr.press/v202/yeche23a.html
https://proceedings.mlr.press/v202/yeche23a/yeche23a.pdf
https://openreview.net/forum?id=zZyYTpB3S7
Temporal Label Smoothing for Early Event Prediction
https://proceedings.mlr.press/v202/yeche23a.html
Hugo Yèche, Alizée Pace, Gunnar Ratsch, Rita Kuznetsova
https://proceedings.mlr.press/v202/yeche23a.html
ICML 2023
Models that can predict the occurrence of events ahead of time with low false-alarm rates are critical to the acceptance of decision support systems in the medical community. This challenging task is typically treated as a simple binary classification, ignoring temporal dependencies between samples, whereas we propose to exploit this structure. We first introduce a common theoretical framework unifying dynamic survival analysis and early event prediction. Following an analysis of objectives from both fields, we propose Temporal Label Smoothing (TLS), a simpler, yet best-performing method that preserves prediction monotonicity over time. By focusing the objective on areas with a stronger predictive signal, TLS improves performance over all baselines on two large-scale benchmark tasks. Gains are particularly notable along clinically relevant measures, such as event recall at low false-alarm rates. TLS reduces the number of missed events by up to a factor of two over previously used approaches in early event prediction.
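A rough Python sketch of temporal label smoothing for early event prediction: positive labels are smoothed more when the event is still far away and approach 1 as it nears, preserving monotonicity over time. The linear ramp and parameter names here are assumptions made for illustration, not the paper's exact smoothing function.

import numpy as np

def temporal_label_smoothing(time_to_event: np.ndarray, horizon: float,
                             eps_min: float = 0.0, eps_max: float = 0.3) -> np.ndarray:
    # Soft labels within the prediction horizon: more smoothing far from the
    # event, labels approaching 1 as the event gets close; 0 outside the horizon.
    t = np.clip(time_to_event / horizon, 0.0, 1.0)   # 0 = event now, 1 = at horizon
    smooth = eps_min + (eps_max - eps_min) * t
    return np.where(time_to_event <= horizon, 1.0 - smooth, 0.0)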
https://proceedings.mlr.press/v202/yehezkel-rohekar23a.html
https://proceedings.mlr.press/v202/yehezkel-rohekar23a/yehezkel-rohekar23a.pdf
https://openreview.net/forum?id=cnGgsXpf2H
From Temporal to Contemporaneous Iterative Causal Discovery in the Presence of Latent Confounders
https://proceedings.mlr.press/v202/yehezkel-rohekar23a.html
Raanan Yehezkel Rohekar, Shami Nisimov, Yaniv Gurwicz, Gal Novik
https://proceedings.mlr.press/v202/yehezkel-rohekar23a.html
ICML 2023
We present a constraint-based algorithm for learning causal structures from observational time-series data, in the presence of latent confounders. We assume a discrete-time, stationary structural vector autoregressive process, with both temporal and contemporaneous causal relations. One may ask if temporal and contemporaneous relations should be treated differently. The presented algorithm gradually refines a causal graph by learning long-term temporal relations before short-term ones, where contemporaneous relations are learned last. This ordering of causal relations to be learnt leads to a reduction in the required number of statistical tests. We validate this reduction empirically and demonstrate that it leads to higher accuracy for synthetic data and more plausible causal graphs for real-world data compared to state-of-the-art algorithms.
https://proceedings.mlr.press/v202/yi23a.html
https://proceedings.mlr.press/v202/yi23a/yi23a.pdf
https://openreview.net/forum?id=FjOB0g7iRf
Doubly Adversarial Federated Bandits
https://proceedings.mlr.press/v202/yi23a.html
Jialin Yi, Milan Vojnovic
https://proceedings.mlr.press/v202/yi23a.html
ICML 2023
We study a new non-stochastic federated multiarmed bandit problem with multiple agents collaborating via a communication network. The losses of the arms are assigned by an oblivious adversary that specifies the loss of each arm not only for each time step but also for each agent, which we call doubly adversarial. In this setting, different agents may choose the same arm in the same time step but observe different feedback. The goal of each agent is to find a globally best arm in hindsight that has the lowest cumulative loss averaged over all agents, which necessitates communication among agents. We provide regret lower bounds for any federated bandit algorithm under different settings, when agents have access to full-information feedback, or the bandit feedback. For the bandit feedback setting, we propose a near-optimal federated bandit algorithm called FEDEXP3. Our algorithm gives a positive answer to an open question proposed in (Cesa-Bianchi et al., 2016): FEDEXP3 can guarantee a sub-linear regret without exchanging sequences of selected arm identities or loss sequences among agents. We also provide numerical evaluations of our algorithm to validate our theoretical results and demonstrate its effectiveness on synthetic and real-world datasets.
https://proceedings.mlr.press/v202/yi23b.html
https://proceedings.mlr.press/v202/yi23b/yi23b.pdf
https://openreview.net/forum?id=Ix8o1xIX6y
Online Prototype Alignment for Few-shot Policy Transfer
https://proceedings.mlr.press/v202/yi23b.html
Qi Yi, Rui Zhang, Shaohui Peng, Jiaming Guo, Yunkai Gao, Kaizhao Yuan, Ruizhi Chen, Siming Lan, Xing Hu, Zidong Du, Xishan Zhang, Qi Guo, Yunji Chen
https://proceedings.mlr.press/v202/yi23b.html
ICML 2023
Domain adaptation in RL mainly deals with the changes of observation when transferring the policy to a new environment. Many traditional approaches of domain adaptation in RL manage to learn a mapping function between the source and target domain in explicit or implicit ways. However, they typically require access to abundant data from the target domain. Besides, they often rely on visual clues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, in this paper, we propose a novel framework, Online Prototype Alignment (OPA), which learns the mapping function based on the functional similarity of elements and achieves few-shot policy transfer within only several episodes. The key insight of OPA is to introduce an exploration mechanism that can interact with the unseen elements of the target domain in an efficient and purposeful manner, and then connect them with the seen elements in the source domain according to their functionalities (instead of visual clues). Experimental results show that when the target domain looks visually different from the source domain, OPA can achieve better transfer performance even with much fewer samples from the target domain, outperforming prior methods.
https://proceedings.mlr.press/v202/yi23c.html
https://proceedings.mlr.press/v202/yi23c/yi23c.pdf
https://openreview.net/forum?id=hV4quLiR4c
MonoFlow: Rethinking Divergence GANs via the Perspective of Wasserstein Gradient Flows
https://proceedings.mlr.press/v202/yi23c.html
Mingxuan Yi, Zhanxing Zhu, Song Liu
https://proceedings.mlr.press/v202/yi23c.html
ICML 2023
The conventional understanding of adversarial training in generative adversarial networks (GANs) is that the discriminator is trained to estimate a divergence, and the generator learns to minimize this divergence. We argue that despite the fact that many variants of GANs were developed following this paradigm, the current theoretical understanding of GANs and their practical algorithms are inconsistent. In this paper, we leverage Wasserstein gradient flows which characterize the evolution of particles in the sample space, to gain theoretical insights and algorithmic inspiration of GANs. We introduce a unified generative modeling framework – MonoFlow: the particle evolution is rescaled via a monotonically increasing mapping of the log density ratio. Under our framework, adversarial training can be viewed as a procedure first obtaining MonoFlow’s vector field via training the discriminator and the generator learns to draw the particle flow defined by the corresponding vector field. We also reveal the fundamental difference between variational divergence minimization and adversarial training. This analysis helps us to identify what types of generator loss functions can lead to the successful training of GANs and suggest that GANs may have more loss designs beyond the literature (e.g., non-saturated loss), as long as they realize MonoFlow. Consistent empirical studies are included to validate the effectiveness of our framework.
https://proceedings.mlr.press/v202/yim23a.html
https://proceedings.mlr.press/v202/yim23a/yim23a.pdf
https://openreview.net/forum?id=m8OUBymxwv
SE(3) diffusion model with application to protein backbone generation
https://proceedings.mlr.press/v202/yim23a.html
Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, Tommi Jaakkola
https://proceedings.mlr.press/v202/yim23a.html
ICML 2023
The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry. In this line of work, a diffusion model over rigid bodies in 3D (referred to as frames) has shown success in generating novel, functional protein backbones that have not been observed in nature. However, there exists no principled methodological framework for diffusion on SE(3), the space of orientation-preserving rigid motions in $\mathbb{R}^3$, that operates on frames and confers the group invariance. We address these shortcomings by developing theoretical foundations of SE(3) invariant diffusion models on multiple frames followed by a novel framework, FrameDiff, for estimating the SE(3) equivariant score over multiple frames. We apply FrameDiff on monomer backbone generation and find it can generate designable monomers up to 500 amino acids without relying on a pretrained protein structure prediction network that has been integral to previous methods. We find our samples are capable of generalizing beyond any known protein structure.
https://proceedings.mlr.press/v202/yin23a.html
https://proceedings.mlr.press/v202/yin23a/yin23a.pdf
https://openreview.net/forum?id=IIqhdgLUio
CoCo: A Coupled Contrastive Framework for Unsupervised Domain Adaptive Graph Classification
https://proceedings.mlr.press/v202/yin23a.html
Nan Yin, Li Shen, Mengzhu Wang, Long Lan, Zeyu Ma, Chong Chen, Xian-Sheng Hua, Xiao Luo
https://proceedings.mlr.press/v202/yin23a.html
ICML 2023
Although graph neural networks (GNNs) have achieved impressive results in graph classification, they often need abundant task-specific labels, which can be extremely costly to acquire. A credible solution is to explore additional labeled graphs to enhance unsupervised learning on the target domain. However, how to apply GNNs to domain adaptation remains unsolved owing to the insufficient exploration of graph topology and the significant domain discrepancy. In this paper, we propose Coupled Contrastive Graph Representation Learning (CoCo), which extracts topological information from coupled learning branches and reduces the domain discrepancy with coupled contrastive learning. CoCo contains a graph convolutional network branch and a hierarchical graph kernel network branch, which explore graph topology in implicit and explicit manners, respectively. Besides, we incorporate the coupled branches into a holistic multi-view contrastive learning framework, which not only combines graph representations learned from complementary views for enhanced understanding, but also encourages the similarity between cross-domain example pairs with the same semantics for domain alignment. Extensive experiments on popular datasets show that CoCo generally outperforms competing baselines across different settings.
https://proceedings.mlr.press/v202/ying23a.html
https://proceedings.mlr.press/v202/ying23a/ying23a.pdf
https://openreview.net/forum?id=USiX9gmGRx
Adaptive Estimation of Graphical Models under Total Positivity
https://proceedings.mlr.press/v202/ying23a.html
Jiaxi Ying, José Vinícius De Miranda Cardoso, Daniel P. Palomar
https://proceedings.mlr.press/v202/ying23a.html
ICML 2023
We consider the problem of estimating (diagonally dominant) M-matrices as precision matrices in Gaussian graphical models. Such models have shown interesting properties, e.g., the maximum likelihood estimator exists with as few as two observations in the case of M-matrices, and exists even with one observation in the case of diagonally dominant M-matrices. We propose an adaptive multiple-stage estimation method, which refines the estimate by solving a weighted $\ell_1$-regularized problem in each stage. We further design a unified framework based on the gradient projection method to solve the regularized problem, equipped with different projections to handle the constraints of M-matrices and diagonally dominant M-matrices. Theoretical analysis of the estimation error is established. The proposed method outperforms state-of-the-art methods in estimating precision matrices and identifying graph edges, as evidenced by synthetic and financial time-series data sets.
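As a rough illustration of the weighted $\ell_1$-regularized, projection-based update described above (not the authors' algorithm), the sketch below runs a projected (sub)gradient step on the Gaussian log-likelihood, with a simple sign-constraint projection standing in for the M-matrix constraint; the step size, weights, and the crude positive-diagonal safeguard are illustrative assumptions.

```python
import numpy as np

def project_m_matrix(theta, diag_floor=1e-3):
    """Projection onto symmetric matrices with non-positive off-diagonal entries
    (the sign pattern of an M-matrix). The diagonal floor is only a crude
    safeguard against singularity; a real solver would maintain positive
    definiteness properly."""
    theta = 0.5 * (theta + theta.T)
    off = ~np.eye(theta.shape[0], dtype=bool)
    theta[off] = np.minimum(theta[off], 0.0)
    d = np.diag_indices_from(theta)
    theta[d] = np.maximum(theta[d], diag_floor)
    return theta

def pg_step(theta, S, weights, lam=0.1, lr=1e-2):
    """One projected (sub)gradient step on
    -logdet(Theta) + tr(S Theta) + lam * sum_ij w_ij |Theta_ij|."""
    grad = -np.linalg.inv(theta) + S + lam * weights * np.sign(theta)
    return project_m_matrix(theta - lr * grad)

# toy usage: empirical covariance from a few samples, identity initialisation
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
S = np.cov(X, rowvar=False)
theta, weights = np.eye(5), np.ones((5, 5))
for _ in range(200):
    theta = pg_step(theta, S, weights)
```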
https://proceedings.mlr.press/v202/yoo23a.html
https://proceedings.mlr.press/v202/yoo23a/yoo23a.pdf
https://openreview.net/forum?id=1ZqavMwxxx
Improving Visual Prompt Tuning for Self-supervised Vision Transformers
https://proceedings.mlr.press/v202/yoo23a.html
Seungryong Yoo, Eunji Kim, Dahuin Jung, Jungbeom Lee, Sungroh Yoon
https://proceedings.mlr.press/v202/yoo23a.html
ICML 2023
Visual Prompt Tuning (VPT) is an effective tuning method for adapting pretrained Vision Transformers (ViTs) to downstream tasks. It leverages extra learnable tokens, known as prompts, which steer the frozen pretrained ViTs. Although VPT has demonstrated its applicability with supervised vision transformers, it often underperforms with self-supervised ones. Through empirical observations, we deduce that the effectiveness of VPT hinges largely on the ViT blocks with which the prompt tokens interact. Specifically, VPT shows improved performance on image classification tasks for MAE and MoCo v3 when the prompt tokens are inserted into later blocks rather than the first block. These observations suggest that there exists an optimal location of blocks for the insertion of prompt tokens. Unfortunately, identifying the optimal blocks for prompts within each self-supervised ViT for diverse future scenarios is a costly process. To mitigate this problem, we propose a simple yet effective method that learns a gate for each ViT block to adjust its intervention into the prompt tokens. With our method, prompt tokens are selectively influenced by blocks that require steering for task adaptation. Our method outperforms VPT variants in FGVC and VTAB image classification and ADE20K semantic segmentation. The code is available at https://github.com/ryongithub/GatedPromptTuning.
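The gating idea lends itself to a small sketch. The code below is a hypothetical, simplified rendition rather than the released GatedPromptTuning implementation: one learnable scalar gate per frozen block scales the shared prompt tokens before they are concatenated to the patch tokens. The class name, block interface, and hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class GatedPromptViT(nn.Module):
    """Sketch: frozen transformer blocks, shared learnable prompt tokens, and one
    learnable scalar gate per block controlling how strongly that block sees the prompts."""
    def __init__(self, blocks, embed_dim, num_prompts=4):
        super().__init__()
        self.blocks = blocks                       # nn.ModuleList of frozen blocks
        for p in self.blocks.parameters():
            p.requires_grad_(False)
        self.prompts = nn.Parameter(0.02 * torch.randn(1, num_prompts, embed_dim))
        self.gates = nn.Parameter(torch.zeros(len(blocks)))

    def forward(self, tokens):                     # tokens: (batch, n_tokens, embed_dim)
        n = tokens.size(1)
        for i, block in enumerate(self.blocks):
            gate = torch.sigmoid(self.gates[i])    # in (0, 1); 0.5 at initialisation
            prompts = gate * self.prompts.expand(tokens.size(0), -1, -1)
            tokens = block(torch.cat([tokens, prompts], dim=1))[:, :n]
        return tokens

# toy usage with generic encoder layers standing in for pretrained ViT blocks
blocks = nn.ModuleList(nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
                       for _ in range(2))
model = GatedPromptViT(blocks, embed_dim=64)
out = model(torch.randn(2, 10, 64))               # (2, 10, 64)
```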
https://proceedings.mlr.press/v202/yoo23b.html
https://proceedings.mlr.press/v202/yoo23b/yoo23b.pdf
https://openreview.net/forum?id=fwT1ivw3Px
End-to-End Multi-Object Detection with a Regularized Mixture Model
https://proceedings.mlr.press/v202/yoo23b.html
Jaeyoung Yoo, Hojun Lee, Seunghyeon Seo, Inseop Chung, Nojun Kwak
https://proceedings.mlr.press/v202/yoo23b.html
ICML 2023
Recent end-to-end multi-object detectors simplify the inference pipeline by removing hand-crafted processes such as non-maximum suppression (NMS). However, during training, they still heavily rely on heuristics and hand-crafted processes which deteriorate the reliability of the predicted confidence score. In this paper, we propose a novel framework to train an end-to-end multi-object detector consisting of only two terms: negative log-likelihood (NLL) and a regularization term. In doing so, the multi-object detection problem is treated as density estimation of the ground truth bounding boxes utilizing a regularized mixture density model. The proposed end-to-end multi-object Detection with a Regularized Mixture Model (D-RMM) is trained by minimizing the NLL with the proposed regularization term, maximum component maximization (MCM) loss, preventing duplicate predictions. Our method reduces the heuristics of the training process and improves the reliability of the predicted confidence score. Moreover, our D-RMM outperforms the previous end-to-end detectors on MS COCO dataset. Code is available at https://github.com/lhj815/D-RMM.
https://proceedings.mlr.press/v202/yoon23a.html
https://proceedings.mlr.press/v202/yoon23a/yoon23a.pdf
https://openreview.net/forum?id=gxzAtub4sb
EM-Network: Oracle Guided Self-distillation for Sequence Learning
https://proceedings.mlr.press/v202/yoon23a.html
Ji Won Yoon, Sunghwan Ahn, Hyeonseung Lee, Minchan Kim, Seok Min Kim, Nam Soo Kim
https://proceedings.mlr.press/v202/yoon23a.html
ICML 2023
We introduce EM-Network, a novel self-distillation approach that effectively leverages target information for supervised sequence-to-sequence (seq2seq) learning. In contrast to conventional methods, it is trained with oracle guidance, which is derived from the target sequence. Since the oracle guidance compactly represents the target-side context that can assist the sequence model in solving the task, the EM-Network achieves a better prediction compared to using only the source input. To allow the sequence model to inherit the promising capability of the EM-Network, we propose a new self-distillation strategy, where the original sequence model can benefit from the knowledge of the EM-Network in a one-stage manner. We conduct comprehensive experiments on two types of seq2seq models: connectionist temporal classification (CTC) for speech recognition and attention-based encoder-decoder (AED) for machine translation. Experimental results demonstrate that the EM-Network significantly advances the current state-of-the-art approaches, improving over the best prior work on speech recognition and establishing state-of-the-art performance on WMT’14 and IWSLT’14.
https://proceedings.mlr.press/v202/yoon23b.html
https://proceedings.mlr.press/v202/yoon23b/yoon23b.pdf
https://openreview.net/forum?id=xFXjnK8Ksl
Continual Learners are Incremental Model Generalizers
https://proceedings.mlr.press/v202/yoon23b.html
Jaehong Yoon, Sung Ju Hwang, Yue Cao
https://proceedings.mlr.press/v202/yoon23b.html
ICML 2023
Motivated by the efficiency and rapid convergence of pre-trained models for solving downstream tasks, this paper extensively studies the impact of Continual Learning (CL) models as pre-trainers. We find that, in both supervised and unsupervised CL, the transfer quality of representations does not degrade fine-tuning performance noticeably but rather improves gradually. This is because CL models can learn improved task-general features while easily forgetting task-specific knowledge. Based on this observation, we suggest a new unsupervised CL framework with masked modeling, which aims to capture fluent task-generic representations during training. Furthermore, we propose a new fine-tuning scheme, GLobal Attention Discretization (GLAD), that preserves rich task-generic representations while solving downstream tasks. The model fine-tuned with GLAD achieves competitive performance and can also be used as a good pre-trained model itself. We believe this paper breaks the barrier between the pre-training and fine-tuning steps and leads to a sustainable learning framework in which the continual learner incrementally improves model generalization, yielding better transfer to unseen tasks.
https://proceedings.mlr.press/v202/yoon23c.html
https://proceedings.mlr.press/v202/yoon23c/yoon23c.pdf
https://openreview.net/forum?id=ICWVUy4fhR
An Investigation into Pre-Training Object-Centric Representations for Reinforcement Learning
https://proceedings.mlr.press/v202/yoon23c.html
Jaesik Yoon, Yi-Fu Wu, Heechul Bae, Sungjin Ahn
https://proceedings.mlr.press/v202/yoon23c.html
ICML 2023
Unsupervised object-centric representation (OCR) learning has recently drawn attention as a new paradigm of visual representation. This is because of its potential to serve as an effective pre-training technique for various downstream tasks in terms of sample efficiency, systematic generalization, and reasoning. Although image-based reinforcement learning (RL) is one of the most important and thus frequently mentioned such downstream tasks, its benefit for RL has surprisingly not been investigated systematically thus far. Instead, most of the evaluations have focused on rather indirect metrics such as segmentation quality and object property prediction accuracy. In this paper, we investigate the effectiveness of OCR pre-training for image-based reinforcement learning via empirical experiments. For systematic evaluation, we introduce a simple object-centric visual RL benchmark and conduct experiments to answer questions such as "Does OCR pre-training improve performance on object-centric tasks?" and "Can OCR pre-training help with out-of-distribution generalization?". Our results provide empirical evidence and valuable insights into the effectiveness of OCR pre-training for RL and the potential limitations of its use in certain scenarios. This study also examines the critical aspects of incorporating OCR pre-training in RL, including performance in a visually complex environment and the appropriate pooling layer to aggregate the object representations.
https://proceedings.mlr.press/v202/yoon23d.html
https://proceedings.mlr.press/v202/yoon23d/yoon23d.pdf
https://openreview.net/forum?id=SpA7YFu02k
Graph Generative Model for Benchmarking Graph Neural Networks
https://proceedings.mlr.press/v202/yoon23d.html
Minji Yoon, Yue Wu, John Palowitch, Bryan Perozzi, Russ Salakhutdinov
https://proceedings.mlr.press/v202/yoon23d.html
ICML 2023
As the field of Graph Neural Networks (GNN) continues to grow, it experiences a corresponding increase in the need for large, real-world datasets to train and test new GNN models on challenging, realistic problems. Unfortunately, such graph datasets are often generated from online, highly privacy-restricted ecosystems, which makes research and development on these datasets hard, if not impossible. This greatly reduces the amount of benchmark graphs available to researchers, causing the field to rely only on a handful of publicly-available datasets. To address this problem, we introduce a novel graph generative model, Computation Graph Transformer (CGT) that learns and reproduces the distribution of real-world graphs in a privacy-controlled way. More specifically, CGT (1) generates effective benchmark graphs on which GNNs show similar task performance as on the source graphs, (2) scales to process large-scale graphs, (3) incorporates off-the-shelf privacy modules to guarantee end-user privacy of the generated graph. Extensive experiments across a vast body of graph generative models show that only our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
https://proceedings.mlr.press/v202/you23a.html
https://proceedings.mlr.press/v202/you23a/you23a.pdf
https://openreview.net/forum?id=4XjFCzngFq
Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels
https://proceedings.mlr.press/v202/you23a.html
Xuchen You, Shouvanik Chakrabarti, Boyang Chen, Xiaodi Wu
https://proceedings.mlr.press/v202/you23a.html
ICML 2023
A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers. It can be used for supervised learning when combined with classical gradient-based optimizers. Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood. Inspired by the success of neural tangent kernels (NTKs) in probing the dynamics of classical neural networks, a recent line of work proposes to study over-parameterized QNNs by examining a quantum version of tangent kernels. In this work, we study the dynamics of QNNs and show that, contrary to popular belief, it is qualitatively different from that of any kernel regression: due to the unitarity of quantum operations, there is a non-negligible deviation from the tangent kernel regression derived at the random initialization. As a result of this deviation, we prove at most sublinear convergence for QNNs with Pauli measurements, which is beyond the explanatory power of any kernel regression dynamics. We then present the actual dynamics of QNNs in the limit of over-parameterization. The new dynamics capture the change of convergence rate during training and imply that the range of measurements is crucial to fast QNN convergence.
https://proceedings.mlr.press/v202/younes23a.html
https://proceedings.mlr.press/v202/younes23a/younes23a.pdf
https://openreview.net/forum?id=9mGbxC4QiI
Entropy-driven Unsupervised Keypoint Representation Learning in Videos
https://proceedings.mlr.press/v202/younes23a.html
Ali Younes, Simone Schaub-Meyer, Georgia Chalvatzaki
https://proceedings.mlr.press/v202/younes23a.html
ICML 2023
Extracting informative representations from videos is fundamental for effectively learning various downstream tasks. We present a novel approach for unsupervised learning of meaningful representations from videos, leveraging the concept of image spatial entropy (ISE) that quantifies the per-pixel information in an image. We argue that the local entropy of pixel neighborhoods and its temporal evolution create valuable intrinsic supervisory signals for learning prominent features. Building on this idea, we abstract visual features into a concise representation of keypoints that act as dynamic information transmitters, and design a deep learning model that learns, purely unsupervised, spatially and temporally consistent representations directly from video frames. Two original information-theoretic losses, computed from local entropy, guide our model to discover consistent keypoint representations: a loss that maximizes the spatial information covered by the keypoints and a loss that optimizes the keypoints’ information transportation over time. We compare our keypoint representation to strong baselines for various downstream tasks, e.g., learning object dynamics. Our empirical results show superior performance for our information-driven keypoints, which resolve challenges such as attending to both static and dynamic objects and handling objects that abruptly enter and leave the scene.
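A minimal sketch of a per-pixel local entropy signal of the kind the abstract refers to, assuming a simple histogram-based Shannon entropy over square neighborhoods of a grayscale image; the paper's exact image spatial entropy definition and its losses are not reproduced here.

```python
import numpy as np

def local_entropy(gray, patch=5, bins=16):
    """Per-pixel Shannon entropy of the intensity histogram in a
    (patch x patch) neighbourhood of a grayscale image with values in [0, 1]."""
    h, w = gray.shape
    pad = patch // 2
    padded = np.pad(gray, pad, mode="reflect")
    ent = np.zeros_like(gray)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + patch, j:j + patch]
            hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

# toy usage: high entropy where local intensities are diverse, low where uniform
img = np.random.rand(32, 32)
e = local_entropy(img)
```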
https://proceedings.mlr.press/v202/young23a.html
https://proceedings.mlr.press/v202/young23a/young23a.pdf
https://openreview.net/forum?id=Vue1ulwlPD
The Benefits of Model-Based Generalization in Reinforcement Learning
https://proceedings.mlr.press/v202/young23a.html
Kenny John Young, Aditya Ramesh, Louis Kirsch, Jürgen Schmidhuber
https://proceedings.mlr.press/v202/young23a.html
ICML 2023
Model-Based Reinforcement Learning (RL) is widely believed to have the potential to improve sample efficiency by allowing an agent to synthesize large amounts of imagined experience. Experience Replay (ER) can be considered a simple kind of model, which has proved effective at improving the stability and efficiency of deep RL. In principle, a learned parametric model could improve on ER by generalizing from real experience to augment the dataset with additional plausible experience. However, given that learned value functions can also generalize, it is not immediately obvious why model generalization should be better. Here, we provide theoretical and empirical insight into when, and how, we can expect data generated by a learned model to be useful. First, we provide a simple theorem motivating how learning a model as an intermediate step can narrow down the set of possible value functions more than learning a value function directly from data using the Bellman equation. Second, we provide an illustrative example showing empirically how a similar effect occurs in a more concrete setting with neural network function approximation. Finally, we provide extensive experiments showing the benefit of model-based learning for online RL in environments with combinatorial complexity, but factored structure that allows a learned model to generalize. In these experiments, we take care to control for other factors in order to isolate, insofar as possible, the benefit of using experience generated by a learned model relative to ER alone.
https://proceedings.mlr.press/v202/yu23a.html
https://proceedings.mlr.press/v202/yu23a/yu23a.pdf
https://openreview.net/forum?id=RRaaxzxoAa
COLA: Orchestrating Error Coding and Learning for Robust Neural Network Inference Against Hardware Defects
https://proceedings.mlr.press/v202/yu23a.html
Anlan Yu, Ning Lyu, Jieming Yin, Zhiyuan Yan, Wujie Wen
https://proceedings.mlr.press/v202/yu23a.html
ICML 2023
Error correcting output codes (ECOCs) have been proposed to improve the robustness of deep neural networks (DNNs) against hardware defects of DNN hardware accelerators. Unfortunately, existing efforts suffer from drawbacks that would greatly impact their practicality: 1) robust accuracy (with defects) improvement at the cost of degraded clean accuracy (without defects); 2) no guarantee on better robust or clean accuracy using stronger ECOCs. In this paper, we first shed light on the connection between these drawbacks and error correlation, and then propose a novel comprehensive error decorrelation framework, namely COLA. Specifically, we propose to reduce inner layer feature error correlation by 1) adopting a separated architecture, where the last portions of the paths to all output nodes are separated, and 2) orthogonalizing weights in common DNN layers so that the intermediate features are orthogonal with each other. We also propose a regularization technique based on total correlation to mitigate overall error correlation at the outputs. The effectiveness of COLA is first analyzed theoretically, and then evaluated experimentally, e.g. up to 6.7% clean accuracy improvement compared with the original DNNs and up to 40% robust accuracy improvement compared to the state-of-the-art ECOC-enhanced DNNs.
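One ingredient mentioned above, weight orthogonalization, can be illustrated with a generic soft orthogonality penalty. This is a hedged sketch rather than COLA's actual regularizer, and the penalty coefficient and the restriction to linear layers are arbitrary assumptions.

```python
import torch
import torch.nn as nn

def orthogonality_penalty(model):
    """Soft orthogonality regulariser sum ||W W^T - I||_F^2 over 2-D weight
    matrices, nudging the rows of each linear layer towards orthonormality."""
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            W = module.weight                      # (out_features, in_features)
            gram = W @ W.t()
            eye = torch.eye(gram.size(0), device=W.device)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# usage: add the penalty to the task loss with a small coefficient
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
logits = model(torch.randn(8, 32))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss = loss + 1e-4 * orthogonality_penalty(model)
loss.backward()
```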
https://proceedings.mlr.press/v202/yu23b.html
https://proceedings.mlr.press/v202/yu23b/yu23b.pdf
https://openreview.net/forum?id=qAW0AD6qYA
Delving into Noisy Label Detection with Clean Data
https://proceedings.mlr.press/v202/yu23b.html
Chenglin Yu, Xinsong Ma, Weiwei Liu
https://proceedings.mlr.press/v202/yu23b.html
ICML 2023
A critical element of learning with noisy labels is noisy label detection. Notably, numerous previous works assume that no source of labels can be clean in a noisy label detection context. In this work, we relax this assumption and assume that a small subset of the training data is clean, which enables substantial noisy label detection performance gains. Specifically, we propose a novel framework that leverages clean data by framing the problem of noisy label detection with clean data as a multiple hypothesis testing problem. Moreover, we propose BHN, a simple yet effective approach for noisy label detection that integrates the Benjamini-Hochberg (BH) procedure into deep neural networks. BHN achieves $\textit{state-of-the-art}$ performance and outperforms baselines by $\textbf{28.48}$% in terms of false discovery rate (FDR) and by $\textbf{18.99}$% in terms of F1 on CIFAR-10. Extensive ablation studies further demonstrate the superiority of BHN. Our code is available at https://github.com/ChenglinYu/BHN.
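For readers unfamiliar with the Benjamini-Hochberg procedure that BHN builds on, the snippet below implements the classical step-up rule on a vector of p-values; how BHN derives those p-values from a deep network is not shown here and is left as an assumption.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classical Benjamini-Hochberg step-up procedure.
    Returns a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    thresh = alpha * np.arange(1, m + 1) / m       # alpha * k / m for k = 1..m
    below = sorted_p <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])           # largest index meeting its threshold
        reject[order[: k + 1]] = True              # reject the k+1 smallest p-values
    return reject

# toy usage: each p-value plays the role of a per-example "label is clean" test
p_vals = np.array([0.001, 0.2, 0.03, 0.8, 0.004])
print(benjamini_hochberg(p_vals, alpha=0.05))      # [ True False  True False  True]
```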
https://proceedings.mlr.press/v202/yu23c.html
https://proceedings.mlr.press/v202/yu23c/yu23c.pdf
https://openreview.net/forum?id=22hmxMop1O
Bag of Tricks for Training Data Extraction from Language Models
https://proceedings.mlr.press/v202/yu23c.html
Weichen Yu, Tianyu Pang, Qian Liu, Chao Du, Bingyi Kang, Yan Huang, Min Lin, Shuicheng Yan
https://proceedings.mlr.press/v202/yu23c.html
ICML 2023
With the advance of language models, privacy protection is receiving more attention. Training data extraction is therefore of great importance, as it can serve as a potential tool to assess privacy leakage. However, due to the difficulty of this task, most of the existing methods are proof-of-concept and still not effective enough. In this paper, we investigate and benchmark tricks for improving training data extraction using a publicly available dataset. Because most existing extraction methods use a pipeline of generating-then-ranking, i.e., generating text candidates as potential training data and then ranking them based on specific criteria, our research focuses on the tricks for both text generation (e.g., sampling strategy) and text ranking (e.g., token-level criteria). The experimental results show that several previously overlooked tricks can be crucial to the success of training data extraction. Based on the GPT-Neo 1.3B evaluation results, our proposed tricks outperform the baseline by a large margin in most cases, providing a much stronger baseline for future research. The code is available at https://github.com/weichen-yu/LM-Extraction.
https://proceedings.mlr.press/v202/yu23d.html
https://proceedings.mlr.press/v202/yu23d/yu23d.pdf
https://openreview.net/forum?id=jsPzZ4Q6ne
Discover-Then-Rank Unlabeled Support Vectors in the Dual Space for Multi-Class Active Learning
https://proceedings.mlr.press/v202/yu23d.html
Dayou Yu, Weishi Shi, Qi Yu
https://proceedings.mlr.press/v202/yu23d.html
ICML 2023
We propose to approach active learning (AL) from a novel perspective of discovering and then ranking potential support vectors by leveraging the key properties of the dual space of a sparse kernel max-margin predictor. We theoretically analyze the change of a hinge loss in the dual form and provide both the upper and lower bounds that are deeply connected to the key geometric properties induced by the dual space, which then help us identify various types of important data samples for AL. These bounds inform the design of a novel sampling strategy that leverages class-wise evidence as a key vehicle, formed through an affine combination of dual variables and kernel evaluation. We construct two distinct types of sampling functions, including discovery and ranking. The former focuses on samples with low total evidence from all classes, which signifies their potential to support exploration; the latter exploits the current decision boundary to identify the most conflicting regions for sampling, aiming to further refine the decision boundary. These two functions, which are complementary to each other, are automatically arranged into a two-phase active sampling process that starts with the discovery and then transitions to the ranking of data points to most effectively balance exploration and exploitation. Experiments on various real-world data demonstrate the state-of-the-art AL performance achieved by our model.
https://proceedings.mlr.press/v202/yu23e.html
https://proceedings.mlr.press/v202/yu23e/yu23e.pdf
https://openreview.net/forum?id=es2ykIhttu
Long-Term Rhythmic Video Soundtracker
https://proceedings.mlr.press/v202/yu23e.html
Jiashuo Yu, Yaohui Wang, Xinyuan Chen, Xiao Sun, Yu Qiao
https://proceedings.mlr.press/v202/yu23e.html
ICML 2023
We consider the problem of generating musical soundtracks in sync with rhythmic visual cues. Most existing works rely on pre-defined music representations, which limits generative flexibility and complexity. Other methods that directly generate video-conditioned waveforms suffer from limited scenarios, short lengths, and unstable generation quality. To this end, we present Long-Term Rhythmic Video Soundtracker (LORIS), a novel framework to synthesize long-term conditional waveforms. Specifically, our framework consists of a latent conditional diffusion probabilistic model to perform waveform synthesis. Furthermore, a series of context-aware conditioning encoders are proposed to take temporal information into consideration for long-term generation. Notably, we extend our model’s applicability from dances to multiple sports scenarios such as floor exercise and figure skating. To perform comprehensive evaluations, we establish a benchmark for rhythmic video soundtracks including the pre-processed dataset, improved evaluation metrics, and robust generative baselines. Extensive experiments show that our model generates long-term soundtracks with state-of-the-art musical quality and rhythmic correspondence. Code is available at https://github.com/OpenGVLab/LORIS.
https://proceedings.mlr.press/v202/yu23f.html
https://proceedings.mlr.press/v202/yu23f/yu23f.pdf
https://openreview.net/forum?id=fdbDDRhPGi
Adversarial Parameter Attack on Deep Neural Networks
https://proceedings.mlr.press/v202/yu23f.html
Lijia Yu, Yihan Wang, Xiao-Shan Gao
https://proceedings.mlr.press/v202/yu23f.html
ICML 2023
The parameter perturbation attack is a safety threat to deep learning, in which small perturbations are made to a network's parameters such that the attacked network gives wrong labels, or labels desired by the adversary, on specified inputs. However, such attacks can be detected by the user, because the accuracy of the attacked network drops and the network cannot work normally. To make the attack more stealthy, in this paper the adversarial parameter attack is proposed, in which small perturbations to the parameters of the network are made such that the accuracy of the attacked network does not decrease much, but its robustness against adversarial example attacks becomes much lower. As a consequence, the attacked network performs normally on standard samples but is much more vulnerable to adversarial attacks. The existence of nearly perfect adversarial parameters under the $L_\infty$ norm and the $L_0$ norm is proved under reasonable conditions. Algorithms are given that produce high-quality adversarial parameters for commonly used networks trained with various robust training methods, such that the robustness of the attacked networks decreases significantly when they are evaluated using various adversarial attack methods.
https://proceedings.mlr.press/v202/yu23g.html
https://proceedings.mlr.press/v202/yu23g/yu23g.pdf
https://openreview.net/forum?id=zdmbZl0ia6
CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models
https://proceedings.mlr.press/v202/yu23g.html
Zhiyuan Yu, Yuhao Wu, Ning Zhang, Chenguang Wang, Yevgeniy Vorobeychik, Chaowei Xiao
https://proceedings.mlr.press/v202/yu23g.html
ICML 2023
Recent advances in large language models (LMs) have facilitated their ability to synthesize programming code. However, they have also raised concerns about intellectual property (IP) rights violations. Despite the significance of this issue, it has been relatively less explored. In this paper, we aim to bridge the gap by presenting CodeIPPrompt, a platform for automatic evaluation of the extent to which code language models may reproduce licensed programs. It comprises two key components: prompts constructed from a licensed code database to elicit LMs to generate IP-violating code, and a measurement tool to evaluate the extent of IP violation of code LMs. We conducted an extensive evaluation of existing open-source code LMs and commercial products and revealed the prevalence of IP violations in all these models. We further identified that the root cause is the substantial proportion of training corpus subject to restrictive licenses, resulting from both intentional inclusion and inconsistent license practice in the real world. To address this issue, we also explored potential mitigation strategies, including fine-tuning and dynamic token filtering. Our study provides a testbed for evaluating the IP violation issues of the existing code generation platforms and stresses the need for a better mitigation strategy.
https://proceedings.mlr.press/v202/yu23h.html
https://proceedings.mlr.press/v202/yu23h/yu23h.pdf
https://openreview.net/forum?id=sag7iLqPvC
SeedGNN: Graph Neural Network for Supervised Seeded Graph Matching
https://proceedings.mlr.press/v202/yu23h.html
Liren Yu, Jiaming Xu, Xiaojun Lin
https://proceedings.mlr.press/v202/yu23h.html
ICML 2023
There is a growing interest in designing Graph Neural Networks (GNNs) for seeded graph matching, which aims to match two unlabeled graphs using only topological information and a small set of seed nodes. However, most previous GNNs for this task use a semi-supervised approach, which requires a large number of seeds and cannot learn knowledge that is transferable to unseen graphs. In contrast, this paper proposes a new supervised approach that can learn from a training set how to match unseen graphs with only a few seeds. Our SeedGNN architecture incorporates several novel designs, inspired by theoretical studies of seeded graph matching: 1) it can learn to compute and use witness-like information from different hops, in a way that can be generalized to graphs of different sizes; 2) it can use easily-matched node-pairs as new seeds to improve the matching in subsequent layers. We evaluate SeedGNN on synthetic and real-world graphs and demonstrate significant performance improvements over both non-learning and learning algorithms in the existing literature. Furthermore, our experiments confirm that the knowledge learned by SeedGNN from training graphs can be generalized to test graphs of different sizes and categories.
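The "witness-like information" mentioned above can be illustrated with the classical 1-hop witness count from the seeded graph matching literature: a candidate pair (i, j) is supported by every seed pair (s, s') whose members are adjacent to i and j in their respective graphs. The sketch below is this baseline computation under those assumptions, not the SeedGNN architecture itself.

```python
import numpy as np

def one_hop_witness_counts(A1, A2, seeds):
    """For every candidate pair (i, j), count the seed pairs (s, s') that witness it,
    i.e. s is a neighbour of i in graph 1 and s' is a neighbour of j in graph 2.
    A1, A2: adjacency matrices; seeds: list of known matched pairs (s, s')."""
    counts = np.zeros((A1.shape[0], A2.shape[0]))
    for s, s_prime in seeds:
        counts += np.outer(A1[:, s], A2[:, s_prime])
    return counts

# toy usage on two identical 3-node stars with one known seed pair (0, 0):
# the leaf nodes of both graphs accumulate witness votes from the shared hub
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
A2 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
print(one_hop_witness_counts(A1, A2, seeds=[(0, 0)]))
```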
https://proceedings.mlr.press/v202/yu23i.html
https://proceedings.mlr.press/v202/yu23i/yu23i.pdf
https://openreview.net/forum?id=pKNQRJZwnV
Efficient and Equivariant Graph Networks for Predicting Quantum Hamiltonian
https://proceedings.mlr.press/v202/yu23i.html
Haiyang Yu, Zhao Xu, Xiaofeng Qian, Xiaoning Qian, Shuiwang Ji
https://proceedings.mlr.press/v202/yu23i.html
ICML 2023
We consider the prediction of the Hamiltonian matrix, which finds use in quantum chemistry and condensed matter physics. Efficiency and equivariance are two important but conflicting factors. In this work, we propose an SE(3)-equivariant network, named QHNet, that achieves efficiency and equivariance. Our key advance lies in the innovative design of the QHNet architecture, which not only obeys the underlying symmetries, but also reduces the number of tensor products by 92%. In addition, QHNet prevents the exponential growth of the channel dimension when more atom types are involved. We perform experiments on MD17 datasets, including four molecular systems. Experimental results show that our QHNet can achieve performance comparable to state-of-the-art methods at a significantly faster speed. Moreover, QHNet consumes 50% less memory due to its streamlined architecture. Our code is publicly available as part of the AIRS library (https://github.com/divelab/AIRS).
https://proceedings.mlr.press/v202/yu23j.html
https://proceedings.mlr.press/v202/yu23j/yu23j.pdf
https://openreview.net/forum?id=ksE1uptwtM
On the Global Convergence of Risk-Averse Policy Gradient Methods with Expected Conditional Risk Measures
https://proceedings.mlr.press/v202/yu23j.html
Xian Yu, Lei Ying
https://proceedings.mlr.press/v202/yu23j.html
ICML 2023
Risk-sensitive reinforcement learning (RL) has become a popular tool to control the risk of uncertain outcomes and ensure reliable performance in various sequential decision-making problems. While policy gradient methods have been developed for risk-sensitive RL, it remains unclear if these methods enjoy the same global convergence guarantees as in the risk-neutral case. In this paper, we consider a class of dynamic time-consistent risk measures, called Expected Conditional Risk Measures (ECRMs), and derive policy gradient updates for ECRM-based objective functions. Under both constrained direct parameterization and unconstrained softmax parameterization, we provide global convergence and iteration complexities of the corresponding risk-averse policy gradient algorithms. We further test risk-averse variants of REINFORCE and actor-critic algorithms to demonstrate the efficacy of our method and the importance of risk control.
https://proceedings.mlr.press/v202/yu23k.html
https://proceedings.mlr.press/v202/yu23k/yu23k.pdf
https://openreview.net/forum?id=f6I3ZehFmu
Actor-Critic Alignment for Offline-to-Online Reinforcement Learning
https://proceedings.mlr.press/v202/yu23k.html
Zishun Yu, Xinhua Zhang
https://proceedings.mlr.press/v202/yu23k.html
ICML 2023
Deep offline reinforcement learning has recently demonstrated considerable promise in leveraging offline datasets, providing high-quality models that significantly reduce the online interactions required for fine-tuning. However, such a benefit is often diminished due to the marked state-action distribution shift, which causes significant bootstrap error and wipes out the good initial policy. Existing solutions resort to constraining the policy shift or balancing the sample replay based on their online-ness. However, they require online estimation of the distribution divergence or density ratio. To avoid such complications, we propose deviating from existing actor-critic approaches that directly transfer the state-action value functions. Instead, we post-process them by aligning with the offline learned policy, so that the $Q$-values for actions outside the offline policy are also tamed. As a result, online fine-tuning can simply be performed as in standard actor-critic algorithms. We show empirically that the proposed method improves the performance of fine-tuned robotic agents on various simulated tasks.
https://proceedings.mlr.press/v202/yu23l.html
https://proceedings.mlr.press/v202/yu23l/yu23l.pdf
https://openreview.net/forum?id=nrSM4XmF5k
Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning
https://proceedings.mlr.press/v202/yu23l.html
Zhongzhi Yu, Yang Zhang, Kaizhi Qian, Cheng Wan, Yonggan Fu, Yongan Zhang, Yingyan Celine Lin
https://proceedings.mlr.press/v202/yu23l.html
ICML 2023
Despite the impressive performance recently achieved by automatic speech recognition (ASR), we observe two primary challenges that hinder its broader applications: (1) the difficulty of introducing scalability into the model to support more languages with limited training, inference, and storage overhead; and (2) the need for a low-resource adaptation ability that enables effective adaptation while avoiding overfitting and catastrophic forgetting. Inspired by recent findings, we hypothesize that we can address the above challenges with modules widely shared across languages. To this end, we propose an ASR framework, dubbed Master-ASR, that, for the first time, simultaneously achieves strong multilingual scalability and low-resource adaptation ability thanks to its modularize-then-assemble strategy. Specifically, Master-ASR learns a small set of generalizable sub-modules and adaptively assembles them for different languages to reduce the multilingual overhead and enable effective knowledge transfer for low-resource adaptation. Extensive experiments and visualizations demonstrate that Master-ASR can effectively discover language similarity and improve multilingual and low-resource ASR performance over state-of-the-art (SOTA) methods, e.g., achieving a 0.13∼2.41 lower character error rate (CER) with 30% smaller inference overhead than SOTA solutions on multilingual ASR, and a comparable CER with nearly 100 times fewer trainable parameters than SOTA solutions on low-resource tuning.
https://proceedings.mlr.press/v202/yuan23a.html
https://proceedings.mlr.press/v202/yuan23a/yuan23a.pdf
https://openreview.net/forum?id=odCqtXjSgB
Coordinate Descent Methods for Fractional Minimization
https://proceedings.mlr.press/v202/yuan23a.html
Ganzhao Yuan
https://proceedings.mlr.press/v202/yuan23a.html
ICML 2023
We consider a class of structured fractional minimization problems, in which the numerator part of the objective is the sum of a differentiable convex function and a convex non-smooth function, while the denominator part is a convex or concave function. This problem is difficult to solve since it is non-convex. By exploiting the structure of the problem, we propose two Coordinate Descent (CD) methods for solving this problem. The proposed methods iteratively solve a one-dimensional subproblem globally, and they are guaranteed to converge to coordinate-wise stationary points. In the case of a convex denominator, under a weak locally bounded non-convexity condition, we prove that the optimality of coordinate-wise stationary points is stronger than that of standard critical points and directional points. Under additional suitable conditions, CD methods converge Q-linearly to coordinate-wise stationary points. In the case of a concave denominator, we show that any critical point is a global minimum, and CD methods converge to the global minimum with a sublinear convergence rate. We demonstrate the applicability of the proposed methods to some machine learning and signal processing models. Our experiments on real-world data show that our method significantly and consistently outperforms existing methods in terms of accuracy.
https://proceedings.mlr.press/v202/yuan23b.html
https://proceedings.mlr.press/v202/yuan23b/yuan23b.pdf
https://openreview.net/forum?id=PVTjHXANRB
On the Power of Foundation Models
https://proceedings.mlr.press/v202/yuan23b.html
Yang Yuan
https://proceedings.mlr.press/v202/yuan23b.html
ICML 2023
With infinitely many high-quality data points, infinite computational power, an infinitely large foundation model with a perfect training algorithm and guaranteed zero generalization error on the pretext task, can the model be used for everything? This question cannot be answered by the existing theory of representation, optimization or generalization, because the issues they mainly investigate are assumed to be nonexistent here. In this paper, we show that category theory provides powerful machinery to answer this question. We have proved three results. The first one limits the power of prompt-based learning, saying that the model can solve a downstream task with prompts if and only if the task is representable. The second one says fine tuning does not have this limit, as a foundation model with the minimum required power (up to symmetry) can theoretically solve downstream tasks for the category defined by pretext task, with fine tuning and enough resources. Our final result can be seen as a new type of generalization theorem, showing that the foundation model can generate unseen objects from the target category (e.g., images) using the structural information from the source category (e.g., texts). Along the way, we provide a categorical framework for supervised and self-supervised learning, which might be of independent interest.
https://proceedings.mlr.press/v202/yuan23c.html
https://proceedings.mlr.press/v202/yuan23c/yuan23c.pdf
https://openreview.net/forum?id=UyJJ1pnb0y
Automatic Intrinsic Reward Shaping for Exploration in Deep Reinforcement Learning
https://proceedings.mlr.press/v202/yuan23c.html
Mingqi Yuan, Bo Li, Xin Jin, Wenjun Zeng
https://proceedings.mlr.press/v202/yuan23c.html
ICML 2023
We present AIRS: Automatic Intrinsic Reward Shaping, which intelligently and adaptively provides high-quality intrinsic rewards to enhance exploration in reinforcement learning (RL). More specifically, AIRS selects a shaping function from a predefined set based on the estimated task return in real time, providing reliable exploration incentives and alleviating the biased objective problem. Moreover, we develop an intrinsic reward toolkit to provide efficient and reliable implementations of diverse intrinsic reward approaches. We test AIRS on various tasks from MiniGrid, Procgen, and the DeepMind Control Suite. Extensive simulations demonstrate that AIRS outperforms the benchmarked schemes and achieves superior performance with a simple architecture.
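As a loose illustration of "selecting a shaping function from a predefined set based on the estimated task return" (not the AIRS algorithm itself), the sketch below keeps a running return estimate per candidate intrinsic reward and picks greedily with epsilon exploration; the candidate names and the bandit-style selection rule are assumptions.

```python
import numpy as np

class IntrinsicRewardSelector:
    """Sketch: track the task return observed under each candidate intrinsic-reward
    shaping function and pick the current best, with a little random exploration."""
    def __init__(self, names, eps=0.1):
        self.names = names
        self.eps = eps
        self.returns = {n: [] for n in names}

    def select(self, rng):
        # explore until every candidate has at least one observed return
        if rng.random() < self.eps or any(len(v) == 0 for v in self.returns.values()):
            return rng.choice(self.names)
        return max(self.names, key=lambda n: np.mean(self.returns[n]))

    def update(self, name, episode_return):
        self.returns[name].append(episode_return)

# toy usage: candidate names are placeholders for intrinsic-reward schemes
rng = np.random.default_rng(0)
selector = IntrinsicRewardSelector(["rnd", "icm", "count"])
choice = selector.select(rng)
selector.update(choice, episode_return=1.0)
```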
https://proceedings.mlr.press/v202/yun23a.html
https://proceedings.mlr.press/v202/yun23a/yun23a.pdf
https://openreview.net/forum?id=Kw7g8iUNAw
Traversing Between Modes in Function Space for Fast Ensembling
https://proceedings.mlr.press/v202/yun23a.html
Eunggu Yun, Hyungi Lee, Giung Nam, Juho Lee
https://proceedings.mlr.press/v202/yun23a.html
ICML 2023
Deep ensemble is a simple yet powerful way to improve the performance of deep neural networks. Under this motivation, recent works on mode connectivity have shown that parameters of ensembles are connected by low-loss subspaces, and one can efficiently collect ensemble parameters in those subspaces. While this provides a way to efficiently train ensembles, for inference, multiple forward passes should still be executed using all the ensemble parameters, which often becomes a serious bottleneck for real-world deployment. In this work, we propose a novel framework to reduce such costs. Given a low-loss subspace connecting two modes of a neural network, we build an additional neural network that predicts the output of the original neural network evaluated at a certain point in the low-loss subspace. The additional neural network, which we call a “bridge”, is a lightweight network that takes minimal features from the original network and predicts outputs for the low-loss subspace without forward passes through the original network. We empirically demonstrate that we can indeed train such bridge networks and significantly reduce inference costs with the help of bridge networks.
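The costly baseline that a bridge network is meant to replace can be sketched as follows: evaluate the original network at several points of a low-loss path, here simplified to a straight line between two modes. The path parameterization and the tiny models are illustrative assumptions, and the bridge network itself is not implemented in this snippet.

```python
import copy
import torch
import torch.nn as nn

def interpolate_state_dicts(sd_a, sd_b, t):
    """Parameters at point t on the straight line between two modes."""
    return {k: (1.0 - t) * sd_a[k] + t * sd_b[k] for k in sd_a}

def evaluate_on_segment(model, sd_a, sd_b, x, ts=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Forward passes at several points along the segment: the expensive step
    that a learned 'bridge' network would be trained to approximate cheaply."""
    outputs = []
    probe = copy.deepcopy(model)
    for t in ts:
        probe.load_state_dict(interpolate_state_dicts(sd_a, sd_b, t))
        with torch.no_grad():
            outputs.append(probe(x))
    return outputs

# toy usage with two independently initialised copies of a small classifier
net_a, net_b = nn.Linear(16, 3), nn.Linear(16, 3)
outs = evaluate_on_segment(nn.Linear(16, 3), net_a.state_dict(), net_b.state_dict(),
                           torch.randn(4, 16))
```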
https://proceedings.mlr.press/v202/zaffran23a.html
https://proceedings.mlr.press/v202/zaffran23a/zaffran23a.pdf
https://openreview.net/forum?id=BUv0BLrosh
Conformal Prediction with Missing Values
https://proceedings.mlr.press/v202/zaffran23a.html
Margaux Zaffran, Aymeric Dieuleveut, Julie Josse, Yaniv Romano
https://proceedings.mlr.press/v202/zaffran23a.html
ICML 2023
Conformal prediction is a theoretically grounded framework for constructing predictive intervals. We study conformal prediction with missing values in the covariates – a setting that brings new challenges to uncertainty quantification. We first show that the marginal coverage guarantee of conformal prediction holds on imputed data for any missingness distribution and almost all imputation functions. However, we emphasize that the average coverage varies depending on the pattern of missing values: conformal methods tend to construct prediction intervals that under-cover the response conditionally on some missing patterns. This motivates our novel generalized conformalized quantile regression framework, missing data augmentation, which yields prediction intervals that are valid conditionally on the patterns of missing values, despite their exponential number. We then show that a universally consistent quantile regression algorithm trained on the imputed data is Bayes optimal for the pinball risk, thus achieving valid coverage conditionally on any given data point. Moreover, we examine the case of a linear model, which demonstrates the importance of our proposal in overcoming the heteroskedasticity induced by missing values. Using synthetic data and data from critical care, we corroborate our theory and report improved performance of our methods.
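As a simple illustration of the impute-then-conformalize setting studied above (not the paper's quantile-regression-based missing data augmentation method), the sketch below builds plain split-conformal intervals on mean-imputed covariates; the imputer, the base regressor, and the toy data are assumptions.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression

def split_conformal_with_imputation(X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Impute missing covariates, fit a regressor on the training split, and build
    split-conformal intervals with marginal coverage >= 1 - alpha on imputed data."""
    imputer = SimpleImputer(strategy="mean").fit(X_train)
    model = LinearRegression().fit(imputer.transform(X_train), y_train)

    # conformal quantile of absolute residuals on the calibration split
    residuals = np.abs(y_cal - model.predict(imputer.transform(X_cal)))
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = np.sort(residuals)[min(k, n) - 1]

    pred = model.predict(imputer.transform(X_test))
    return pred - q, pred + q

# toy usage with missing values encoded as NaN
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=300)
X[rng.random(X.shape) < 0.1] = np.nan              # inject missingness after generating y
lo, hi = split_conformal_with_imputation(X[:100], y[:100], X[100:200], y[100:200], X[200:])
```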