Fields per record: abs (abstract page URL), Download PDF, OpenReview, title, url, authors, detail_url, tags, abstract
https://proceedings.mlr.press/v202/zandieh23a.html
https://proceedings.mlr.press/v202/zandieh23a/zandieh23a.pdf
https://openreview.net/forum?id=7405Po1JTk
KDEformer: Accelerating Transformers via Kernel Density Estimation
https://proceedings.mlr.press/v202/zandieh23a.html
Amir Zandieh, Insu Han, Majid Daliri, Amin Karbasi
https://proceedings.mlr.press/v202/zandieh23a.html
ICML 2023
The dot-product attention mechanism plays a crucial role in modern deep architectures (e.g., the Transformer) for sequence modeling; however, naïve exact computation of this model incurs quadratic time and memory complexity in the sequence length, hindering the training of long-sequence models. Critical bottlenecks are due to the computation of the partition functions in the denominator of the softmax function as well as the multiplication of the softmax matrix with the matrix of values. Our key observation is that the former can be reduced to a variant of the kernel density estimation (KDE) problem, and an efficient KDE solver can be further utilized to accelerate the latter via subsampling-based fast matrix products. Our proposed KDEformer can approximate the attention in sub-quadratic time with provable spectral norm bounds, while all prior results merely provide entry-wise error bounds. Empirically, we verify that KDEformer outperforms other attention approximations in terms of accuracy, memory, and arithmetic operations on various pre-trained models. For instance, on BigGAN image generation we achieve better generative scores than the exact computation with over 4× speedup. For ImageNet classification with T2T-ViT, KDEformer shows over 18× speedup while the accuracy drop is less than 0.5%.
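A short note on the reduction the abstract alludes to, as a sketch of the stated observation rather than the paper's full algorithm: the softmax denominator for a query is, up to a factor of the sequence length, a kernel density estimate over the keys with an exponential kernel.

```latex
Z_i \;=\; \sum_{j=1}^{n} \exp\!\big(q_i^{\top} k_j\big)
    \;=\; n \cdot \hat f(q_i),
\qquad
\hat f(q) \;=\; \frac{1}{n}\sum_{j=1}^{n} \kappa(q, k_j),
\qquad
\kappa(q,k) \;=\; \exp\!\big(q^{\top} k\big).
% A sub-quadratic KDE solver that approximates \hat f(q_i) for all queries
% therefore approximates all attention partition functions Z_i at once.
```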
https://proceedings.mlr.press/v202/zanella-beguelin23a.html
https://proceedings.mlr.press/v202/zanella-beguelin23a/zanella-beguelin23a.pdf
https://openreview.net/forum?id=PwsvGnamYD
Bayesian Estimation of Differential Privacy
https://proceedings.mlr.press/v202/zanella-beguelin23a.html
Santiago Zanella-Beguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Rühle, Andrew Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones
https://proceedings.mlr.press/v202/zanella-beguelin23a.html
ICML 2023
Algorithms such as Differentially Private SGD enable training machine learning models with formal privacy guarantees. However, because these guarantees hold with respect to unrealistic adversaries, the protection afforded against practical attacks is typically much better. An emerging strand of work empirically estimates the protection afforded by differentially private training as a confidence interval for the privacy budget $\hat{\varepsilon}$ spent with respect to specific threat models. Existing approaches derive confidence intervals for $\hat{\varepsilon}$ from confidence intervals for false positive and false negative rates of membership inference attacks, which requires training an impractically large number of models to get intervals that can be acted upon. We propose a novel, more efficient Bayesian approach that brings privacy estimates within the reach of practitioners. Our approach reduces sample size by computing a posterior for $\hat{\varepsilon}$ (not just a confidence interval) from the joint posterior of the false positive and false negative rates of membership inference attacks. We implement an end-to-end system for privacy estimation that integrates our approach and state-of-the-art membership inference attacks, and evaluate it on text and vision classification tasks. For the same number of samples, we see a reduction in interval width of up to 40% compared to prior work.
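A minimal Monte Carlo sketch of the general idea, not the paper's exact estimator: place Beta posteriors on the attack's false positive and false negative rates and convert each posterior sample into a privacy estimate via the standard (ε, δ) trade-off inequality FPR + e^ε·FNR ≥ 1 − δ. The independent Beta posteriors and the function name below are simplifying assumptions for illustration; the paper works with the joint posterior.

```python
import numpy as np

def epsilon_posterior(fp, tn, fn, tp, delta=1e-5, n_samples=100_000, seed=0):
    """Monte Carlo posterior over an empirical privacy estimate (sketch).

    fp, tn, fn, tp: confusion-matrix counts of a membership inference attack.
    Independent Beta(1 + errors, 1 + successes) posteriors are a simplification.
    """
    rng = np.random.default_rng(seed)
    fpr = rng.beta(1 + fp, 1 + tn, size=n_samples)   # false positive rate
    fnr = rng.beta(1 + fn, 1 + tp, size=n_samples)   # false negative rate
    # (eps, delta)-DP implies FPR + e^eps * FNR >= 1 - delta (and symmetrically),
    # so each (fpr, fnr) sample yields a lower bound on eps.
    lb1 = np.log(np.clip(1 - delta - fpr, 1e-12, None)) - np.log(fnr)
    lb2 = np.log(np.clip(1 - delta - fnr, 1e-12, None)) - np.log(fpr)
    return np.clip(np.maximum(lb1, lb2), 0.0, None)

# Example: posterior median and a 95% credible interval from attack counts.
samples = epsilon_posterior(fp=12, tn=488, fn=35, tp=465)
print(np.percentile(samples, [2.5, 50, 97.5]))
```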
https://proceedings.mlr.press/v202/zanette23a.html
https://proceedings.mlr.press/v202/zanette23a/zanette23a.pdf
https://openreview.net/forum?id=2gkWFSkdnW
When is Realizability Sufficient for Off-Policy Reinforcement Learning?
https://proceedings.mlr.press/v202/zanette23a.html
Andrea Zanette
https://proceedings.mlr.press/v202/zanette23a.html
ICML 2023
Understanding when reinforcement learning algorithms can make successful off-policy predictions, and when they may fail to do so, remains an open problem. Typically, model-free algorithms for reinforcement learning are analyzed under a condition called Bellman completeness when they operate off-policy with function approximation, unless additional conditions are met. However, Bellman completeness is a requirement that is much stronger than realizability and that is deemed too strong to hold in practice. In this work, we relax this structural assumption and analyze the statistical complexity of off-policy reinforcement learning when only realizability holds for the prescribed function class. We establish finite-sample guarantees for off-policy reinforcement learning that are free of the approximation error term known as inherent Bellman error, and that depend on the interplay of three factors. The first two are well known: they are the metric entropy of the function class and the concentrability coefficient that represents the cost of learning off-policy. The third factor is new, and it measures the violation of Bellman completeness, namely the misalignment between the chosen function class and its image through the Bellman operator. Our analysis directly applies to the solution found by temporal difference algorithms when they converge.
https://proceedings.mlr.press/v202/zeighami23a.html
https://proceedings.mlr.press/v202/zeighami23a/zeighami23a.pdf
https://openreview.net/forum?id=4hefw3y2VK
On Distribution Dependent Sub-Logarithmic Query Time of Learned Indexing
https://proceedings.mlr.press/v202/zeighami23a.html
Sepanta Zeighami, Cyrus Shahabi
https://proceedings.mlr.press/v202/zeighami23a.html
ICML 2023
A fundamental problem in data management is to find the elements in an array that match a query. Recently, learned indexes have been extensively used to solve this problem, where they learn a model to predict the location of the items in the array. They are empirically shown to outperform non-learned methods (e.g., B-trees or binary search that answer queries in $O(\log n)$ time) by orders of magnitude. However, the success of learned indexes has not been theoretically justified. The only existing attempt shows the same query time of $O(\log n)$, but with a constant factor improvement in space complexity over non-learned methods, under some assumptions on the data distribution. In this paper, we significantly strengthen this result, showing that under mild assumptions on the data distribution, and with the same space complexity as non-learned methods, learned indexes can answer queries in $O(\log\log n)$ expected query time. We also show that, allowing for a slightly larger but still near-linear space overhead, a learned index can achieve $O(1)$ expected query time. Our results theoretically prove that learned indexes are orders of magnitude faster than non-learned methods, grounding their empirical success.
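A toy illustration of the generic learned-index idea discussed above, not the paper's construction: fit a small model that predicts an element's position from its key, then correct the prediction with a search bounded by the model's maximum training error.

```python
import bisect
import numpy as np

class ToyLearnedIndex:
    """Toy learned index: linear rank predictor plus bounded local search."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        ranks = np.arange(len(self.keys))
        # Fit rank ~ a*key + b by least squares (the "learned model").
        self.a, self.b = np.polyfit(self.keys, ranks, deg=1)
        preds = np.rint(self.a * self.keys + self.b).astype(int)
        self.err = int(np.max(np.abs(preds - ranks)))  # max training error

    def lookup(self, key):
        n = len(self.keys)
        guess = int(np.rint(self.a * key + self.b))
        lo = max(0, guess - self.err)
        hi = min(n, guess + self.err + 1)
        # Binary search only inside the error window around the prediction.
        i = lo + bisect.bisect_left(self.keys[lo:hi], key)
        return i if i < n and self.keys[i] == key else None

idx = ToyLearnedIndex(np.random.default_rng(0).uniform(0, 1, 10_000))
print(idx.lookup(idx.keys[1234]))  # -> 1234
```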
https://proceedings.mlr.press/v202/zenati23a.html
https://proceedings.mlr.press/v202/zenati23a/zenati23a.pdf
https://openreview.net/forum?id=E3Ny4RnbiT
Sequential Counterfactual Risk Minimization
https://proceedings.mlr.press/v202/zenati23a.html
Houssam Zenati, Eustache Diemert, Matthieu Martin, Julien Mairal, Pierre Gaillard
https://proceedings.mlr.press/v202/zenati23a.html
ICML 2023
Counterfactual Risk Minimization (CRM) is a framework for dealing with the logged bandit feedback problem, where the goal is to improve a logging policy using offline data. In this paper, we explore the case where it is possible to deploy learned policies multiple times and acquire new data. We extend the CRM principle and its theory to this scenario, which we call "Sequential Counterfactual Risk Minimization (SCRM)." We introduce a novel counterfactual estimator and identify conditions that can improve the performance of CRM in terms of excess risk and regret rates, by using an analysis similar to restart strategies in accelerated optimization methods. We also provide an empirical evaluation of our method in both discrete and continuous action settings, and demonstrate the benefits of multiple deployments of CRM.
https://proceedings.mlr.press/v202/zeng23a.html
https://proceedings.mlr.press/v202/zeng23a/zeng23a.pdf
https://openreview.net/forum?id=MmYoDC7dH9
LookupFFN: Making Transformers Compute-lite for CPU inference
https://proceedings.mlr.press/v202/zeng23a.html
Zhanpeng Zeng, Michael Davies, Pranav Pulijala, Karthikeyan Sankaralingam, Vikas Singh
https://proceedings.mlr.press/v202/zeng23a.html
ICML 2023
While GPU clusters are the de facto choice for training large deep neural network (DNN) models today, several reasons including ease of workflow, security and cost have led to efforts investigating whether CPUs may be viable for inference in routine use in many sectors of the industry. But the imbalance between the compute capabilities of GPUs and CPUs is huge. Motivated by these considerations, we study a module which is a workhorse within modern DNN architectures, GEMM-based Feed Forward Networks (FFNs), and assess the extent to which it can be made compute- (or FLOP-) lite. Specifically, we propose an alternative formulation (we call it LookupFFN) to GEMM-based FFNs, inspired by recent studies on using Locality Sensitive Hashing (LSH) to approximate FFNs. Our formulation recasts most essential operations as a memory look-up, leveraging the trade-off between the two resources on any platform: compute and memory (since CPUs offer the latter in abundance). For RoBERTa language model pretraining, our formulation achieves similar performance compared to GEMM-based FFNs, while dramatically reducing the required FLOPs. Our development is complemented with a detailed hardware profiling of strategies that will maximize efficiency – not just on contemporary hardware but on products that will be offered in the near/medium term future. Code is available at https://github.com/mlpen/LookupFFN.
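The following is a generic sketch of the underlying idea the abstract describes, trading FLOPs for memory lookups via hashing; it is not the LookupFFN architecture itself, and the module name, hyperparameters, and structure are illustrative assumptions. Sign-random-projection hashing maps an input to bucket indices, and the output is assembled from learned table entries instead of dense matrix products.

```python
import numpy as np

class HashTableFFN:
    """Sketch: replace a dense FFN with hash-bucket table lookups.

    Hypothetical toy module; it does not reproduce the LookupFFN formulation.
    """

    def __init__(self, d_in, d_out, n_tables=8, bits=10, seed=0):
        rng = np.random.default_rng(seed)
        # Random projections define sign-based (LSH-style) bucket codes.
        self.proj = rng.normal(size=(n_tables, bits, d_in))
        # Each table stores a learned output vector per bucket.
        self.tables = rng.normal(size=(n_tables, 2 ** bits, d_out)) * 0.02
        self.powers = 2 ** np.arange(bits)

    def forward(self, x):
        # x: (d_in,). Cheap projections give one bucket index per table;
        # the heavy work is then just memory lookups plus a sum.
        signs = (np.einsum('tbd,d->tb', self.proj, x) > 0).astype(int)
        bucket = signs @ self.powers                      # (n_tables,)
        return self.tables[np.arange(len(bucket)), bucket].sum(axis=0)

ffn = HashTableFFN(d_in=64, d_out=64)
print(ffn.forward(np.random.default_rng(1).normal(size=64)).shape)  # (64,)
```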
https://proceedings.mlr.press/v202/zeng23b.html
https://proceedings.mlr.press/v202/zeng23b/zeng23b.pdf
https://openreview.net/forum?id=6pFhUYPlvF
Attribute-Efficient PAC Learning of Low-Degree Polynomial Threshold Functions with Nasty Noise
https://proceedings.mlr.press/v202/zeng23b.html
Shiwei Zeng, Jie Shen
https://proceedings.mlr.press/v202/zeng23b.html
ICML 2023
The concept class of low-degree polynomial threshold functions (PTFs) plays a fundamental role in machine learning. In this paper, we study PAC learning of $K$-sparse degree-$d$ PTFs on $\mathbb{R}^n$, where any such concept depends only on $K$ out of $n$ attributes of the input. Our main contribution is a new algorithm that runs in time $({nd}/{\epsilon})^{O(d)}$ and, under the Gaussian marginal distribution, PAC learns the class up to error rate $\epsilon$ with $O(\frac{K^{4d}}{\epsilon^{2d}} \cdot \log^{5d} n)$ samples even when an $\eta \leq O(\epsilon^d)$ fraction of them are corrupted by the nasty noise of Bshouty et al. (2002), possibly the strongest corruption model. Prior to this work, attribute-efficient robust algorithms were established only for the special case of sparse homogeneous halfspaces. Our key ingredients are: 1) a structural result that translates the attribute sparsity to a sparsity pattern of the Chow vector under the basis of Hermite polynomials, and 2) a novel attribute-efficient robust Chow vector estimation algorithm which uses exclusively a restricted Frobenius norm to either certify a good approximation or to validate a sparsity-induced degree-$2d$ polynomial as a filter to detect corrupted samples.
https://proceedings.mlr.press/v202/zeng23c.html
https://proceedings.mlr.press/v202/zeng23c/zeng23c.pdf
https://openreview.net/forum?id=vTSyiXwoPK
Generative Graph Dictionary Learning
https://proceedings.mlr.press/v202/zeng23c.html
Zhichen Zeng, Ruike Zhu, Yinglong Xia, Hanqing Zeng, Hanghang Tong
https://proceedings.mlr.press/v202/zeng23c.html
ICML 2023
Dictionary learning, which approximates data samples by a set of shared atoms, is a fundamental task in representation learning. However, dictionary learning over graphs, namely graph dictionary learning (GDL), is much more challenging than for vectorial data, as graphs lie in disparate metric spaces. The sparse literature on GDL formulates the problem from the reconstructive view and often learns linear graph embeddings with a high computational cost. In this paper, we propose a Fused Gromov-Wasserstein (FGW) Mixture Model named FraMe to address the GDL problem from the generative view. Equipped with the graph generation function based on the radial basis function kernel and FGW distance, FraMe generates nonlinear embedding spaces, which, as we theoretically prove, provide a good approximation of the original graph spaces. A fast solution is further proposed on top of the expectation-maximization algorithm with guaranteed convergence. Extensive experiments demonstrate the effectiveness of the obtained node and graph embeddings, and our algorithm achieves significant improvements over the state-of-the-art methods.
https://proceedings.mlr.press/v202/zhai23a.html
https://proceedings.mlr.press/v202/zhai23a/zhai23a.pdf
https://openreview.net/forum?id=LL8gz8FHxH
Stabilizing Transformer Training by Preventing Attention Entropy Collapse
https://proceedings.mlr.press/v202/zhai23a.html
Shuangfei Zhai, Tatiana Likhomanenko, Etai Littwin, Dan Busbridge, Jason Ramapuram, Yizhe Zhang, Jiatao Gu, Joshua M. Susskind
https://proceedings.mlr.press/v202/zhai23a.html
ICML 2023
Training stability is of great importance to Transformers. In this work, we investigate the training dynamics of Transformers by examining the evolution of the attention layers. In particular, we track the attention entropy for each attention head during the course of training, which is a proxy for model sharpness. We identify a common pattern across different architectures and tasks, where low attention entropy is accompanied by high training instability, which can take the form of oscillating loss or divergence. We denote the pathologically low attention entropy, corresponding to highly concentrated attention scores, as $\textit{entropy collapse}$. As a remedy, we propose $\sigma$Reparam, a simple and efficient solution where we reparametrize all linear layers with spectral normalization and an additional learned scalar. We demonstrate that $\sigma$Reparam successfully prevents entropy collapse in the attention layers, promoting more stable training. Additionally, we prove a tight lower bound of the attention entropy, which decreases exponentially fast with the spectral norm of the attention logits, providing additional motivation for our approach. We conduct experiments with $\sigma$Reparam on image classification, image self-supervised learning, machine translation, speech recognition, and language modeling tasks. We show that $\sigma$Reparam provides stability and robustness with respect to the choice of hyperparameters, going so far as enabling training (a) a Vision Transformer to competitive performance without warmup, weight decay, layer normalization or adaptive optimizers; (b) deep architectures in machine translation and (c) speech recognition to competitive performance without warmup and adaptive optimizers. Code is available at https://github.com/apple/ml-sigma-reparam.
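A minimal numpy sketch of the reparametrization the abstract describes: each linear weight is divided by its spectral norm and rescaled by a learned scalar. Optimizer wiring and the exact parameterization details are omitted, and the power-iteration machinery of a real implementation is replaced with a direct SVD-based norm for clarity.

```python
import numpy as np

def sigma_reparam_linear(x, W, b, gamma):
    """Forward pass of a sigma-reparametrized linear layer (sketch).

    W is divided by its spectral norm (largest singular value) and rescaled
    by a learned scalar gamma, so the layer's effective spectral norm is
    |gamma| regardless of how W drifts during training.
    """
    spectral_norm = np.linalg.norm(W, ord=2)   # largest singular value
    W_hat = (gamma / spectral_norm) * W
    return x @ W_hat.T + b

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 64))
x = rng.normal(size=(4, 64))
y = sigma_reparam_linear(x, W, b=np.zeros(32), gamma=1.0)
print(y.shape)  # (4, 32)
```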
https://proceedings.mlr.press/v202/zhang23a.html
https://proceedings.mlr.press/v202/zhang23a/zhang23a.pdf
https://openreview.net/forum?id=LtSMEVi6eB
Offline Learning in Markov Games with General Function Approximation
https://proceedings.mlr.press/v202/zhang23a.html
Yuheng Zhang, Yu Bai, Nan Jiang
https://proceedings.mlr.press/v202/zhang23a.html
ICML 2023
We study offline multi-agent reinforcement learning (RL) in Markov games, where the goal is to learn an approximate equilibrium, such as a Nash equilibrium or a (coarse) correlated equilibrium, from an offline dataset pre-collected from the game. Existing works consider relatively restricted tabular or linear models and handle each equilibrium separately. In this work, we provide the first framework for sample-efficient offline learning in Markov games under general function approximation, handling all three equilibria in a unified manner. By using Bellman-consistent pessimism, we obtain interval estimation for policies’ returns, and use both the upper and the lower bounds to obtain a relaxation on the gap of a candidate policy, which becomes our optimization objective. Our results generalize prior works and provide several additional insights. Importantly, we require a data coverage condition that improves over the recently proposed “unilateral concentrability”. Our condition allows selective coverage of deviation policies that optimally trade off between their greediness (as approximate best responses) and coverage, and we show scenarios where this leads to significantly better guarantees. As a new connection, we also show how our algorithmic framework can subsume seemingly different solution concepts designed for the special case of two-player zero-sum games.
https://proceedings.mlr.press/v202/zhang23b.html
https://proceedings.mlr.press/v202/zhang23b/zhang23b.pdf
https://openreview.net/forum?id=85G1qtikHO
Learning useful representations for shifting tasks and distributions
https://proceedings.mlr.press/v202/zhang23b.html
Jianyu Zhang, Leon Bottou
https://proceedings.mlr.press/v202/zhang23b.html
ICML 2023
Does the dominant approach to learning representations (as a side effect of optimizing an expected cost for a single training distribution) remain a good approach when we are dealing with multiple distributions? Our thesis is that such scenarios are better served by representations that are richer than those obtained with a single optimization episode. We support this thesis with simple theoretical arguments and with experiments utilizing an apparently naïve ensembling technique: concatenating the representations obtained from multiple training episodes using the same data, model, algorithm, and hyper-parameters, but different random seeds. These independently trained networks perform similarly. Yet, in a number of scenarios involving new distributions, the concatenated representation performs substantially better than an equivalently sized network trained with a single training run. This proves that the representations constructed by multiple training episodes are in fact different. Although their concatenation carries little additional information about the training task under the training distribution, it becomes substantially more informative when tasks or distributions change. Meanwhile, a single training episode is unlikely to yield such a redundant representation because the optimization process has no reason to accumulate features that do not incrementally improve the training performance.
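A toy sketch of the ensembling recipe described above: obtain several encoders from different seeds, concatenate their features, and fit a linear probe on the concatenated representation. The encoders here are stand-in random feature maps purely for illustration, not trained networks.

```python
import numpy as np

def make_encoder(d_in, d_feat, seed):
    """Stand-in for 'one training episode': a random feature map."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)
    return lambda X: np.tanh(X @ W)

def linear_probe(Z, y, reg=1e-2):
    """Ridge-regression probe on top of frozen features Z."""
    A = Z.T @ Z + reg * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ y)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 20)), rng.normal(size=1000)

# Concatenate representations produced by independently "trained" encoders.
encoders = [make_encoder(20, 64, seed=s) for s in range(5)]
Z_concat = np.concatenate([enc(X) for enc in encoders], axis=1)  # (1000, 320)
w = linear_probe(Z_concat, y)
print(Z_concat.shape, w.shape)
```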
https://proceedings.mlr.press/v202/zhang23c.html
https://proceedings.mlr.press/v202/zhang23c/zhang23c.pdf
https://openreview.net/forum?id=QqdRISwihn
Nonparametric Iterative Machine Teaching
https://proceedings.mlr.press/v202/zhang23c.html
Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
https://proceedings.mlr.press/v202/zhang23c.html
ICML 2023
In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the teacher provides examples to the learner iteratively such that the learner can achieve fast convergence to a target model. However, existing IMT algorithms are solely based on parameterized families of target models. They mainly focus on convergence in the parameter space, resulting in difficulty when the target models are defined to be functions without dependency on parameters. To address such a limitation, we study a more general task – Nonparametric Iterative Machine Teaching (NIMT), which aims to teach nonparametric target models to learners in an iterative fashion. Unlike parametric IMT that merely operates in the parameter space, we cast NIMT as a functional optimization problem in the function space. To solve it, we propose both random and greedy functional teaching algorithms. We obtain the iterative teaching dimension (ITD) of the random teaching algorithm under proper assumptions, which serves as a uniform upper bound of ITD in NIMT. Further, the greedy teaching algorithm has a significantly lower ITD, which reaches a tighter upper bound of ITD in NIMT. Finally, we verify the correctness of our theoretical findings with extensive experiments in nonparametric scenarios.
https://proceedings.mlr.press/v202/zhang23d.html
https://proceedings.mlr.press/v202/zhang23d/zhang23d.pdf
https://openreview.net/forum?id=hvEwJ3xYxx
Matrix Estimation for Individual Fairness
https://proceedings.mlr.press/v202/zhang23d.html
Cindy Zhang, Sarah Huiyi Cen, Devavrat Shah
https://proceedings.mlr.press/v202/zhang23d.html
ICML 2023
In recent years, multiple notions of algorithmic fairness have arisen. One such notion is individual fairness (IF), which requires that individuals who are similar receive similar treatment. In parallel, matrix estimation (ME) has emerged as a natural paradigm for handling noisy data with missing values. In this work, we connect the two concepts. We show that pre-processing data using ME can improve an algorithm’s IF without sacrificing performance. Specifically, we show that using a popular ME method known as singular value thresholding (SVT) to pre-process the data provides a strong IF guarantee under appropriate conditions. We then show that, under analogous conditions, SVT pre-processing also yields estimates that are consistent and approximately minimax optimal. As such, the ME pre-processing step does not, under the stated conditions, increase the prediction error of the base algorithm, i.e., does not impose a fairness-performance trade-off. We verify these results on synthetic and real data.
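A minimal numpy sketch of the singular value thresholding (SVT) pre-processing step mentioned in the abstract: fill missing entries, truncate small singular values, and use the low-rank reconstruction as the de-noised input. The column-mean fill and the hard-threshold rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def svt_preprocess(X, threshold):
    """Singular value thresholding on a (possibly incomplete) data matrix.

    NaNs are filled with column means before the SVD; singular values below
    `threshold` are zeroed, and the low-rank reconstruction is returned.
    """
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)
    filled = np.where(np.isnan(X), col_means, X)
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    s_thr = np.where(s >= threshold, s, 0.0)
    return U @ np.diag(s_thr) @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10)) @ rng.normal(size=(10, 8))   # low-rank-ish data
X[rng.random(X.shape) < 0.1] = np.nan                      # missing entries
X_hat = svt_preprocess(X, threshold=5.0)
print(X_hat.shape)
```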
https://proceedings.mlr.press/v202/zhang23e.html
https://proceedings.mlr.press/v202/zhang23e/zhang23e.pdf
https://openreview.net/forum?id=BfVkbfJGW4
Graph Contrastive Backdoor Attacks
https://proceedings.mlr.press/v202/zhang23e.html
Hangfan Zhang, Jinghui Chen, Lu Lin, Jinyuan Jia, Dinghao Wu
https://proceedings.mlr.press/v202/zhang23e.html
ICML 2023
Graph Contrastive Learning (GCL) has attracted considerable interest due to its impressive node representation learning capability. Despite the wide application of GCL techniques, little attention has been paid to the security of GCL. In this paper, we systematically study the vulnerability of GCL in the presence of malicious backdoor adversaries. In particular, we propose GCBA, the first backdoor attack for graph contrastive learning. GCBA incorporates three attacks: poisoning, crafting, and natural backdoor, each targeting one stage of the GCL pipeline. We formulate our attacks as optimization problems and solve them with a novel discrete optimization technique to overcome the discrete nature of graph-structured data. By extensively evaluating GCBA on multiple datasets and GCL methods, we show that our attack can achieve high attack success rates while preserving stealthiness. We further consider potential countermeasures to our attack and conclude that existing defenses are insufficient to mitigate GCBA. We show that as a complex paradigm involving data and model republishing, GCL is vulnerable to backdoor attacks, and specifically designed defenses are needed to mitigate the backdoor attacks on GCL.
https://proceedings.mlr.press/v202/zhang23f.html
https://proceedings.mlr.press/v202/zhang23f/zhang23f.pdf
https://openreview.net/forum?id=CeZGwmsIIz
Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories
https://proceedings.mlr.press/v202/zhang23f.html
Zixuan Zhang, Minshuo Chen, Mengdi Wang, Wenjing Liao, Tuo Zhao
https://proceedings.mlr.press/v202/zhang23f.html
ICML 2023
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to the intrinsic data structures. In real-world applications, such an assumption of data lying exactly on a low-dimensional manifold is stringent. This paper introduces a relaxed assumption that the input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion, the effective Minkowski dimension. We prove that the sample complexity of deep nonparametric regression only depends on the effective Minkowski dimension of $\mathcal{S}$, denoted by $p$. We further illustrate our theoretical findings by considering nonparametric regression with an anisotropic Gaussian random design $N(0,\Sigma)$, where $\Sigma$ is full rank. When the eigenvalues of $\Sigma$ have an exponential or polynomial decay, the effective Minkowski dimension of such a Gaussian random design is $p=\mathcal{O}(\sqrt{\log n})$ or $p=\mathcal{O}(n^\gamma)$, respectively, where $n$ is the sample size and $\gamma\in(0,1)$ is a small constant depending on the polynomial decay rate. Our theory shows that, when the manifold assumption does not hold, deep neural networks can still adapt to the effective Minkowski dimension of the data, and circumvent the curse of the ambient dimensionality for moderate sample sizes.
https://proceedings.mlr.press/v202/zhang23g.html
https://proceedings.mlr.press/v202/zhang23g/zhang23g.pdf
https://openreview.net/forum?id=ET6qkbzeOx
Tractable Control for Autoregressive Language Generation
https://proceedings.mlr.press/v202/zhang23g.html
Honghua Zhang, Meihua Dang, Nanyun Peng, Guy Van Den Broeck
https://proceedings.mlr.press/v202/zhang23g.html
ICML 2023
Despite the success of autoregressive large language models in text generation, it remains a major challenge to generate text that satisfies complex constraints: sampling from the conditional distribution ${\Pr}(\text{text} | \alpha)$ is intractable for even the simplest lexical constraints $\alpha$. To overcome this challenge, we propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models, which we refer to as GeLaTo (Generating Language with Tractable Constraints). To demonstrate the effectiveness of this framework, we use distilled hidden Markov models, where we can efficiently compute ${\Pr}(\text{text} | \alpha)$, to guide autoregressive generation from GPT2. GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation (e.g., CommonGen), beating various strong baselines by a large margin. Our work not only opens up new avenues for controlling large language models but also motivates the development of more expressive TPMs.
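A schematic sketch of constraint-guided decoding in the spirit described above. The functions lm_next_token_probs and tpm_constraint_prob are hypothetical placeholders for the autoregressive LM and the tractable probabilistic model, and the factorization shown is the general Bayes-rule recipe rather than GeLaTo's exact implementation.

```python
import numpy as np

def guided_step(prefix, vocab, lm_next_token_probs, tpm_constraint_prob):
    """One step of constraint-guided decoding (sketch).

    Combines p_LM(x_t | prefix) with the TPM's estimate that the lexical
    constraint alpha can still be satisfied after emitting x_t:
        p(x_t | prefix, alpha) is proportional to
        p_LM(x_t | prefix) * p_TPM(alpha | prefix + [x_t]).
    The TPM (e.g., a hidden Markov model) makes the second factor tractable.
    """
    p_lm = lm_next_token_probs(prefix)                       # shape (|V|,)
    p_alpha = np.array([tpm_constraint_prob(prefix + [t]) for t in vocab])
    scores = p_lm * p_alpha
    return scores / scores.sum()

# Hypothetical usage, greedy decoding under the combined distribution:
# next_token = vocab[np.argmax(guided_step(prefix, vocab, lm_fn, tpm_fn))]
```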
https://proceedings.mlr.press/v202/zhang23h.html
https://proceedings.mlr.press/v202/zhang23h/zhang23h.pdf
https://openreview.net/forum?id=JcNKs5pE8J
CataBEEM: Integrating Latent Interaction Categories in Node-wise Community Detection Models for Network Data
https://proceedings.mlr.press/v202/zhang23h.html
Yuhua Zhang, Walter H. Dempsey
https://proceedings.mlr.press/v202/zhang23h.html
ICML 2023
Community detection is a fundamental task in network analysis. Learning underlying network structures has brought deep insights into the understanding of complex systems. While many methods have focused on clustering nodes into blocks, few account for the fact that interactions may exhibit edge-level clustering, which we call categories. Real network data often arise via a series of interactions. Interactions in complex systems can often be clustered into different categories and node-level community structures that depend on the category. In this paper, we introduce a category-and-block edge exchangeable model (CataBEEM) to study interaction networks with joint latent interaction-level category and node-level community structures. In particular, the proposed method models the network from the interaction process perspective and allows the incorporation of prior knowledge from auxiliary interaction-wise information. We derive an efficient variational inference algorithm that can be applied to networks consisting of millions of interactions and provide a theoretical bound on the misspecification rate. We demonstrate the effectiveness of our method in various simulation settings and apply the method to TalkLife data, a large-scale online peer-to-peer support network. We show that CataBEEM detects more temporally consistent community structures and has better predictions than other methods.
https://proceedings.mlr.press/v202/zhang23i.html
https://proceedings.mlr.press/v202/zhang23i/zhang23i.pdf
https://openreview.net/forum?id=2IkLprQjby
Rethink DARTS Search Space and Renovate a New Benchmark
https://proceedings.mlr.press/v202/zhang23i.html
Jiuling Zhang, Zhiming Ding
https://proceedings.mlr.press/v202/zhang23i.html
ICML 2023
The DARTS search space (DSS) has become a canonical benchmark for NAS, whereas some emerging works have pointed out its narrow accuracy range and claimed that this hurts method ranking. We observe that some recent studies already suffer from this issue, which overshadows the meaning of their scores. In this work, we first propose and orchestrate a suite of improvements to frame a larger and harder DSS, termed LHD, while retaining high efficiency in search. We then renovate a new LHD-based benchmark, taking care of both discernibility and accessibility. Specifically, we re-implement twelve baselines and evaluate them across twelve conditions by combining two underexplored influential factors, transductive robustness and discretization policy, to reasonably construct a benchmark upon multi-condition evaluation. Considering that tabular benchmarks are always insufficient to adequately evaluate methods of neural architecture search (NAS), our work can serve as a crucial basis for the future progress of NAS.
https://proceedings.mlr.press/v202/zhang23j.html
https://proceedings.mlr.press/v202/zhang23j/zhang23j.pdf
https://openreview.net/forum?id=VZK4LLr78f
Team Belief DAG: Generalizing the Sequence Form to Team Games for Fast Computation of Correlated Team Max-Min Equilibria via Regret Minimization
https://proceedings.mlr.press/v202/zhang23j.html
Brian Hu Zhang, Gabriele Farina, Tuomas Sandholm
https://proceedings.mlr.press/v202/zhang23j.html
ICML 2023
A classic result in the theory of extensive-form games asserts that the set of strategies available to any perfect-recall player is strategically equivalent to a low-dimensional convex polytope, called the sequence-form polytope. Online convex optimization tools operating on this polytope are the current state-of-the-art for computing several notions of equilibria in games, and have been crucial in landmark applications of computational game theory. However, when optimizing over the joint strategy space of a team of players, one cannot use the sequence form to obtain a strategically-equivalent convex description of the strategy set of the team. In this paper, we provide new complexity results on the computation of optimal strategies for teams, and propose a new representation, coined team belief DAG (TB-DAG), that describes team strategies as a convex set. The TB-DAG enjoys state-of-the-art parameterized complexity bounds, while at the same time enjoying the advantages of efficient regret minimization techniques. We show that TB-DAG can be exponentially smaller and can be computed exponentially faster than all other known representations, and that the converse is never true. Experimentally, we show that the TB-DAG, when paired with learning techniques, yields state-of-the-art performance on a wide variety of benchmark team games.
https://proceedings.mlr.press/v202/zhang23k.html
https://proceedings.mlr.press/v202/zhang23k/zhang23k.pdf
https://openreview.net/forum?id=2Hp7U3k5Ph
A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests
https://proceedings.mlr.press/v202/zhang23k.html
Bohang Zhang, Guhao Feng, Yiheng Du, Di He, Liwei Wang
https://proceedings.mlr.press/v202/zhang23k.html
ICML 2023
Recently, subgraph GNNs have emerged as an important direction for developing expressive graph neural networks (GNNs). While numerous architectures have been proposed, so far there is still a limited understanding of how various design paradigms differ in terms of expressive power, nor is it clear what design principle achieves maximal expressiveness with minimal architectural complexity. To address these fundamental questions, this paper conducts a systematic study of general node-based subgraph GNNs through the lens of Subgraph Weisfeiler-Lehman Tests (SWL). Our central result is to build a complete hierarchy of SWL with strictly growing expressivity. Concretely, we prove that any node-based subgraph GNN falls into one of the six SWL equivalence classes, among which $\mathsf{SSWL}$ achieves the maximal expressive power. We also study how these equivalence classes differ in terms of their practical expressiveness such as encoding graph distance and biconnectivity. In addition, we give a tight expressivity upper bound of all SWL algorithms by establishing a close relation with localized versions of WL and Folklore WL (FWL) tests. Overall, our results provide insights into the power of existing subgraph GNNs, guide the design of new architectures, and point out their limitations by revealing an inherent gap with the 2-FWL test. Finally, experiments demonstrate that $\mathsf{SSWL}$-inspired subgraph GNNs can significantly outperform prior architectures on multiple benchmarks despite great simplicity.
https://proceedings.mlr.press/v202/zhang23l.html
https://proceedings.mlr.press/v202/zhang23l/zhang23l.pdf
https://openreview.net/forum?id=TcMIK8Wx6e
Crafting Training Degradation Distribution for the Accuracy-Generalization Trade-off in Real-World Super-Resolution
https://proceedings.mlr.press/v202/zhang23l.html
Ruofan Zhang, Jinjin Gu, Haoyu Chen, Chao Dong, Yulun Zhang, Wenming Yang
https://proceedings.mlr.press/v202/zhang23l.html
ICML 2023
Super-resolution (SR) techniques designed for real-world applications commonly encounter two primary challenges: generalization performance and restoration accuracy. We demonstrate that when methods are trained using complex, large-range degradations to enhance generalization, a decline in accuracy is inevitable. However, since the degradation in a given real-world application typically exhibits a limited variation range, it becomes feasible to strike a trade-off between generalization performance and testing accuracy within this scope. In this work, we introduce a novel approach to craft training degradation distributions using a small set of reference images. Our strategy is founded upon the binned representation of the degradation space and the Fréchet distance between degradation distributions. Our results indicate that the proposed technique significantly improves performance on test images while preserving generalization capabilities in real-world applications.
https://proceedings.mlr.press/v202/zhang23m.html
https://proceedings.mlr.press/v202/zhang23m/zhang23m.pdf
https://openreview.net/forum?id=yWl0agiI0y
Prompting Large Language Model for Machine Translation: A Case Study
https://proceedings.mlr.press/v202/zhang23m.html
Biao Zhang, Barry Haddow, Alexandra Birch
https://proceedings.mlr.press/v202/zhang23m.html
ICML 2023
Research on prompting has shown excellent performance with little or even no supervised training across many tasks. However, prompting for machine translation is still under-explored in the literature. We fill this gap by offering a systematic study on prompting strategies for translation, examining various factors for prompt template and demonstration example selection. We further explore the use of monolingual data and the feasibility of cross-lingual, cross-domain, and sentence-to-document transfer learning in prompting. Extensive experiments with GLM-130B (Zeng et al., 2022) as the testbed show that 1) the number and the quality of prompt examples matter, and using suboptimal examples degrades translation; 2) several features of prompt examples, such as semantic similarity, show significant Spearman correlation with their prompting performance; yet, none of the correlations are strong enough; 3) using pseudo parallel prompt examples constructed from monolingual data via zero-shot prompting could improve translation; and 4) improved performance is achievable by transferring knowledge from prompt examples selected in other settings. We finally provide an analysis on the model outputs and discuss several problems that prompting still suffers from.
https://proceedings.mlr.press/v202/zhang23n.html
https://proceedings.mlr.press/v202/zhang23n/zhang23n.pdf
https://openreview.net/forum?id=tNRyU4Plfl
On the Interplay Between Misspecification and Sub-optimality Gap in Linear Contextual Bandits
https://proceedings.mlr.press/v202/zhang23n.html
Weitong Zhang, Jiafan He, Zhiyuan Fan, Quanquan Gu
https://proceedings.mlr.press/v202/zhang23n.html
ICML 2023
We study linear contextual bandits in the misspecified setting, where the expected reward function can be approximated by a linear function class up to a bounded misspecification level $\zeta>0$. We propose an algorithm based on a novel data selection scheme, which only selects the contextual vectors with large uncertainty for online regression. We show that, when the misspecification level $\zeta$ is dominated by $\tilde O(\Delta / \sqrt{d})$ with $\Delta$ being the minimal sub-optimality gap and $d$ being the dimension of the contextual vectors, our algorithm enjoys the same gap-dependent regret bound $\tilde O ({d^2} /{\Delta})$ as in the well-specified setting up to logarithmic factors. Given this result, we show that the existing SupLinUCB algorithm (Chu et al., 2011) can also achieve a gap-dependent constant regret bound without the knowledge of sub-optimality gap $\Delta$. Together with a lower bound adapted from Lattimore et al. (2020), our result suggests an interplay between the misspecification level and the sub-optimality gap: (1) the linear contextual bandit model is efficiently learnable when $\zeta \leq \tilde O({\Delta} / \sqrt{d})$; and (2) it is not efficiently learnable when $\zeta \geq \tilde \Omega({\Delta} / {\sqrt{d}})$. Experiments on both synthetic and real-world datasets corroborate our theoretical results.
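A numpy sketch of the kind of uncertainty-based data selection the abstract describes: maintain the regression covariance and add a context to the online regression only when its elliptical-norm uncertainty exceeds a threshold. The threshold and update rule here are generic illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

class UncertaintySelectiveRegressor:
    """Online ridge regression that only ingests high-uncertainty contexts."""

    def __init__(self, d, reg=1.0, threshold=0.5):
        self.A = reg * np.eye(d)      # regularized design covariance
        self.b = np.zeros(d)
        self.threshold = threshold

    def uncertainty(self, x):
        # Elliptical norm ||x||_{A^{-1}}, the usual exploration bonus.
        return float(np.sqrt(x @ np.linalg.solve(self.A, x)))

    def maybe_update(self, x, reward):
        """Add (x, reward) to the regression only if x is uncertain enough."""
        if self.uncertainty(x) > self.threshold:
            self.A += np.outer(x, x)
            self.b += reward * x
            return True
        return False

    def theta_hat(self):
        return np.linalg.solve(self.A, self.b)
```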
https://proceedings.mlr.press/v202/zhang23o.html
https://proceedings.mlr.press/v202/zhang23o/zhang23o.pdf
https://openreview.net/forum?id=3jV525Hmqr
When Sparsity Meets Contrastive Models: Less Graph Data Can Bring Better Class-Balanced Representations
https://proceedings.mlr.press/v202/zhang23o.html
Chunhui Zhang, Chao Huang, Yijun Tian, Qianlong Wen, Zhongyu Ouyang, Youhuan Li, Yanfang Ye, Chuxu Zhang
https://proceedings.mlr.press/v202/zhang23o.html
ICML 2023
Graph Neural Networks (GNNs) are powerful models for non-Euclidean data, but their training is often burdened by massive unnecessary computation: on the one hand, training on non-Euclidean data has relatively high computational cost due to its irregular density properties; on the other hand, the class-imbalance property often associated with non-Euclidean data cannot be alleviated by the massiveness of the data, thus hindering the generalisation of the models. To address the above issues, theoretically, we start with a hypothesis about the effectiveness of using a subset of the training data for GNNs, which is guaranteed by the gradient distance between the subset and the full set. Empirically, we also observe that a subset of the data can provide informative gradients for model optimization, and that this subset changes dynamically over time. We name this phenomenon dynamic data sparsity. Additionally, we find that pruned sparse contrastive models may miss valuable information, leading to a large loss value on the informative subset. Motivated by the above findings, we develop a unified data-model dynamic sparsity framework called Data Decantation (DataDec) to address the above challenges. The key idea of DataDec is to identify the informative subset dynamically during the training process by applying sparse graph contrastive learning. The effectiveness of DataDec is comprehensively evaluated on graph benchmark datasets, and we also verify its generalizability on image data.
https://proceedings.mlr.press/v202/zhang23p.html
https://proceedings.mlr.press/v202/zhang23p/zhang23p.pdf
https://openreview.net/forum?id=thTcrwTATe
Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation
https://proceedings.mlr.press/v202/zhang23p.html
Qianru Zhang, Chao Huang, Lianghao Xia, Zheng Wang, Siu Ming Yiu, Ruihua Han
https://proceedings.mlr.press/v202/zhang23p.html
ICML 2023
Spatial-temporal graph learning has emerged as the state-of-the-art solution for modeling structured spatial-temporal data in learning region representations for various urban sensing tasks (e.g., crime forecasting, traffic flow prediction). However, most existing models are vulnerable to the quality of the generated region graph due to the inartistic graph-structured information aggregation schema. The ubiquitous spatial-temporal data noise and incompleteness in real-life scenarios bring difficulties to generate high-quality region representations. In this paper, we propose a Spatial-Temporal Adversarial Graph contrastive learning model (STAG) to tackle this challenge for adaptive self-supervised graph augmentation. Specifically, we propose a learnable contrastive learning function that enables the automated distillation of important multi-view self-supervised signals for adaptive spatial-temporal graph augmentation. To enhance the representation discrimination ability and robustness, the designed adversarial contrastive learning mechanism empowers STAG to adaptively identify hard samples for better self-supervision. Finally, a cross-view contrastive learning paradigm is introduced to model the inter-dependencies across view-specific region representations and preserve the underlying relation heterogeneity. We verify the superiority of our STAG method in various spatial-temporal prediction tasks on several benchmark datasets.
https://proceedings.mlr.press/v202/zhang23q.html
https://proceedings.mlr.press/v202/zhang23q/zhang23q.pdf
https://openreview.net/forum?id=17YbAlc1tW
Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models
https://proceedings.mlr.press/v202/zhang23q.html
Guanhua Zhang, Jiabao Ji, Yang Zhang, Mo Yu, Tommi Jaakkola, Shiyu Chang
https://proceedings.mlr.press/v202/zhang23q.html
ICML 2023
Image inpainting refers to the task of generating a complete, natural image based on a partially revealed reference image. Recently, much research interest has focused on addressing this problem using fixed diffusion models. These approaches typically directly replace the revealed region of the intermediate or final generated images with that of the reference image or its variants. However, since the unrevealed regions are not directly modified to match the context, this results in incoherence between the revealed and unrevealed regions. To address the incoherence problem, a small number of methods introduce a rigorous Bayesian framework, but they tend to introduce mismatches between the generated and the reference images due to the approximation errors in computing the posterior distributions. In this paper, we propose CoPaint, which can coherently inpaint the whole image without introducing mismatches. CoPaint also uses the Bayesian framework to jointly modify both revealed and unrevealed regions but approximates the posterior distribution in a way that allows the errors to gradually drop to zero throughout the denoising steps, thus strongly penalizing any mismatches with the reference image. Our experiments verify that CoPaint can outperform the existing diffusion-based methods under both objective and subjective metrics.
https://proceedings.mlr.press/v202/zhang23r.html
https://proceedings.mlr.press/v202/zhang23r/zhang23r.pdf
https://openreview.net/forum?id=AXt40tAbif
CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling
https://proceedings.mlr.press/v202/zhang23r.html
Jun Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong
https://proceedings.mlr.press/v202/zhang23r.html
ICML 2023
Transformer has achieved remarkable success in language, image, and speech processing. Recently, various efficient attention architectures have been proposed to improve transformer’s efficiency while largely preserving its efficacy, especially in modeling long sequences. A widely-used benchmark to test these efficient methods’ capability on long-range modeling is Long Range Arena (LRA). However, LRA only focuses on the standard bidirectional (or noncausal) self attention, and completely ignores cross attentions and unidirectional (or causal) attentions, which are equally important to downstream applications. In this paper, we propose Comprehensive Attention Benchmark (CAB) under a fine-grained attention taxonomy with four distinguishable attention patterns, namely, noncausal self, causal self, noncausal cross, and causal cross attentions. CAB collects seven real-world tasks from different research areas to evaluate efficient attentions under the four attention patterns. Among these tasks, CAB validates efficient attentions in eight backbone networks to show their generalization across neural architectures. We conduct exhaustive experiments to benchmark the performances of nine widely-used efficient attention architectures designed with different philosophies on CAB. Extensive experimental results also shed light on the fundamental problems of efficient attentions, such as efficiency length against vanilla attention, performance consistency across attention patterns, the benefit of attention mechanisms, and interpolation/extrapolation on long-context language modeling.
https://proceedings.mlr.press/v202/zhang23s.html
https://proceedings.mlr.press/v202/zhang23s/zhang23s.pdf
https://openreview.net/forum?id=68lnr83obY
Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
https://proceedings.mlr.press/v202/zhang23s.html
Shenao Zhang, Wanxin Jin, Zhaoran Wang
https://proceedings.mlr.press/v202/zhang23s.html
ICML 2023
Differentiable physics-based simulators have witnessed remarkable success in robot learning involving contact dynamics, benefiting from their improved accuracy and efficiency in solving the underlying complementarity problem. However, when utilizing the First-Order Policy Gradient (FOPG) method, our theory indicates that the complementarity-based systems suffer from stiffness, leading to an explosion in the gradient variance of FOPG. As a result, optimization becomes challenging due to chaotic and non-smooth loss landscapes. To tackle this issue, we propose a novel approach called Adaptive Barrier Smoothing (ABS), which introduces a class of softened complementarity systems that correspond to barrier-smoothed objectives. With a contact-aware adaptive central-path parameter, ABS reduces the FOPG gradient variance while controlling the gradient bias. We justify the adaptive design by analyzing the roots of the system’s stiffness. Additionally, we establish the convergence of FOPG and show that ABS achieves a reasonable trade-off between the gradient variance and bias by providing their upper bounds. Moreover, we present a variant of FOPG based on complementarity modeling that efficiently fits the contact dynamics by learning the physical parameters. Experimental results on various robotic tasks are provided to support our theory and method.
https://proceedings.mlr.press/v202/zhang23t.html
https://proceedings.mlr.press/v202/zhang23t/zhang23t.pdf
https://openreview.net/forum?id=7JuHd1ZZN4
One-Step Estimator for Permuted Sparse Recovery
https://proceedings.mlr.press/v202/zhang23t.html
Hang Zhang, Ping Li
https://proceedings.mlr.press/v202/zhang23t.html
ICML 2023
This paper considers the unlabeled sparse recovery under multiple measurements, i.e., ${\mathbf{Y}} = {\mathbf{\Pi}}^{\natural} {\mathbf{X}} {\mathbf{B}}^{\natural} + {\mathbf{W}}$, where ${\mathbf{Y}} \in \mathbb{R}^{n\times m}, {\mathbf{\Pi}}^{\natural}\in \mathbb{R}^{n\times n}, {\mathbf{X}} \in \mathbb{R}^{n\times p}, {\mathbf{B}} ^{\natural}\in \mathbb{R}^{p\times m}, {\mathbf{W}}\in \mathbb{R}^{n\times m}$ represents the observations, missing (or incomplete) correspondence information, sensing matrix, sparse signals, and additive sensing noise, respectively. Different from the previous works on multiple measurements ($m > 1$) which all focus on the sufficient samples regime, namely, $n > p$, we consider a sparse matrix $\mathbf{B}^{\natural}$ and investigate the insufficient samples regime (i.e., $n \ll p$) for the first time. To begin with, we establish the lower bound on the sample number and signal-to-noise ratio ($ {\mathsf{SNR}}$) for the correct permutation recovery. Moreover, we present a simple yet effective estimator. Under mild conditions, we show that our estimator can restore the correct correspondence information with high probability. Numerical experiments are presented to corroborate our theoretical claims.
https://proceedings.mlr.press/v202/zhang23u.html
https://proceedings.mlr.press/v202/zhang23u/zhang23u.pdf
https://openreview.net/forum?id=lV7YIPL95i
Quantum Lower Bounds for Finding Stationary Points of Nonconvex Functions
https://proceedings.mlr.press/v202/zhang23u.html
Chenyi Zhang, Tongyang Li
https://proceedings.mlr.press/v202/zhang23u.html
ICML 2023
Quantum computing is an emerging technology that has been rapidly advancing in the past decades. In this paper, we conduct a systematic study of quantum lower bounds on finding $\epsilon$-approximate stationary points of nonconvex functions, and we consider the following two important settings: 1) having access to $p$-th order derivatives; or 2) having access to stochastic gradients. The classical query lower bounds are $\Omega\big(\epsilon^{-\frac{1+p}{p}}\big)$ regarding the first setting and $\Omega(\epsilon^{-4})$ regarding the second setting (or $\Omega(\epsilon^{-3})$ if the stochastic gradient function is mean-squared smooth). In this paper, we extend all these classical lower bounds to the quantum setting. They match the classical algorithmic results respectively, demonstrating that there is no quantum speedup for finding $\epsilon$-stationary points of nonconvex functions with $p$-th order derivative inputs or stochastic gradient inputs, whether with or without the mean-squared smoothness assumption. Technically, we prove our quantum lower bounds by showing that the sequential nature of classical hard instances in all these settings also applies to quantum queries, preventing any quantum speedup other than revealing information of the stationary points sequentially.
https://proceedings.mlr.press/v202/zhang23v.html
https://proceedings.mlr.press/v202/zhang23v/zhang23v.pdf
https://openreview.net/forum?id=zi1iKanf9k
Improving Medical Predictions by Irregular Multimodal Electronic Health Records Modeling
https://proceedings.mlr.press/v202/zhang23v.html
Xinlu Zhang, Shiyang Li, Zhiyu Chen, Xifeng Yan, Linda Ruth Petzold
https://proceedings.mlr.press/v202/zhang23v.html
ICML 2023
Health conditions among patients in intensive care units (ICUs) are monitored via electronic health records (EHRs), composed of numerical time series and lengthy clinical note sequences, both taken at $\textit{irregular}$ time intervals. Dealing with such irregularity in every modality, and integrating irregularity into multimodal representations to improve medical predictions, is a challenging problem. Our method first addresses irregularity in each single modality by (1) modeling irregular time series by dynamically incorporating hand-crafted imputation embeddings into learned interpolation embeddings via a gating mechanism, and (2) casting a series of clinical note representations as multivariate irregular time series and tackling irregularity via a time attention mechanism. We further integrate irregularity in multimodal fusion with an interleaved attention mechanism across temporal steps. To the best of our knowledge, this is the first work to thoroughly model irregularity in multimodalities for improving medical predictions. Our proposed methods for two medical prediction tasks consistently outperform state-of-the-art (SOTA) baselines in each single-modality scenario and in multimodal fusion. Specifically, we observe relative improvements of 6.5%, 3.6%, and 4.3% in F1 for time series, clinical notes, and multimodal fusion, respectively. These results demonstrate the effectiveness of our methods and the importance of considering irregularity in multimodal EHRs.
https://proceedings.mlr.press/v202/zhang23w.html
https://proceedings.mlr.press/v202/zhang23w/zhang23w.pdf
https://openreview.net/forum?id=YDC5jTS3LR
FedCR: Personalized Federated Learning Based on Across-Client Common Representation with Conditional Mutual Information Regularization
https://proceedings.mlr.press/v202/zhang23w.html
Hao Zhang, Chenglin Li, Wenrui Dai, Junni Zou, Hongkai Xiong
https://proceedings.mlr.press/v202/zhang23w.html
ICML 2023
In personalized federated learning (PFL), multiple clients train customized models to fulfill their personal objectives, which, however, are prone to overfitting to local data due to the heterogeneity and scarcity of local data. To address this, we propose from the information-theoretic perspective a personalized federated learning framework based on the common representation learned across clients, named FedCR. Specifically, we introduce to the local client update a regularizer that aims at minimizing the discrepancy between local and global conditional mutual information (CMI), such that clients are encouraged to learn and exploit the common representation. On top of this, each client individually learns a customized predictor (head), while the extractor (body) remains to be aggregated by the server. Our CMI regularizer leads to a theoretically sound alignment between the local and global stochastic feature distributions in terms of their Kullback-Leibler (KL) divergence. More importantly, by modeling the global joint feature distribution as a product of multiple local feature distributions, clients can efficiently extract diverse information from the global data but without needing the raw data from other clients. We further show that noise injection via feature alignment and ensemble of local predictors in FedCR would help enhance its generalization capability. Experiments on benchmark datasets demonstrate a consistent performance gain and better generalization behavior of FedCR.
https://proceedings.mlr.press/v202/zhang23x.html
https://proceedings.mlr.press/v202/zhang23x/zhang23x.pdf
https://openreview.net/forum?id=Kg2al3GXBR
On the Optimality of Misspecified Kernel Ridge Regression
https://proceedings.mlr.press/v202/zhang23x.html
Haobo Zhang, Yicheng Li, Weihao Lu, Qian Lin
https://proceedings.mlr.press/v202/zhang23x.html
ICML 2023
In the misspecified kernel ridge regression problem, researchers usually assume that the underlying true function $f_{\rho}^{\star} \in [\mathcal{H}]^{s}$, a less-smooth interpolation space of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ for some $s\in (0,1)$. The existing minimax optimal results require $\left\Vert f_{\rho}^{\star} \right \Vert_{L^{\infty}} < \infty$, which implicitly requires $s > \alpha_{0}$, where $\alpha_{0} \in (0,1)$ is the embedding index, a constant depending on $\mathcal{H}$. Whether KRR is optimal for all $s\in (0,1)$ has been an outstanding problem for years. In this paper, we show that KRR is minimax optimal for any $s\in (0,1)$ when $\mathcal{H}$ is a Sobolev RKHS.
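For readers less familiar with the estimator being analyzed, here is a plain numpy kernel ridge regression with a Matérn-3/2 kernel, whose RKHS is a Sobolev space; the rough target and the hyperparameters below are illustrative, and this is only the estimator the paper studies, not its analysis.

```python
import numpy as np

def matern32_kernel(X, Y, lengthscale=0.3):
    """Matern-3/2 kernel; its RKHS is a Sobolev space of smoothness 2 (in 1-D)."""
    d = np.abs(X[:, None] - Y[None, :]) / lengthscale
    return (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)

def krr_fit_predict(x_train, y_train, x_test, lam=1e-2):
    """Kernel ridge regression: alpha = (K + n*lam*I)^{-1} y."""
    n = len(x_train)
    K = matern32_kernel(x_train, x_train)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y_train)
    return matern32_kernel(x_test, x_train) @ alpha

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sign(np.sin(8 * np.pi * x)) + 0.1 * rng.normal(size=200)  # rough target
print(krr_fit_predict(x, y, np.linspace(0, 1, 5)))
```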
https://proceedings.mlr.press/v202/zhang23y.html
https://proceedings.mlr.press/v202/zhang23y/zhang23y.pdf
https://openreview.net/forum?id=NcbY2UOfko
Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction
https://proceedings.mlr.press/v202/zhang23y.html
Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li
https://proceedings.mlr.press/v202/zhang23y.html
ICML 2023
Due to the often limited communication bandwidth of edge devices, most existing federated learning (FL) methods randomly select only a subset of devices to participate in training at each communication round. Compared with engaging all the available clients, such a random-selection mechanism could lead to significant performance degradation on non-IID data (i.e., data that are not independent and identically distributed). In this paper, we present our key observation that the essential reason for such performance degradation is the class-imbalance of the grouped data from randomly selected clients. Based on this observation, we design an efficient heterogeneity-aware client sampling mechanism, namely, Federated Class-balanced Sampling (Fed-CBS), which can effectively reduce class-imbalance of the grouped dataset from the intentionally selected clients. We first propose a measure of class-imbalance which can be derived in a privacy-preserving way. Based on this measure, we design a computation-efficient client sampling strategy such that the actively selected clients will generate a more class-balanced grouped dataset with theoretical guarantees. Experimental results show that Fed-CBS outperforms the status quo approaches in terms of test accuracy and the rate of convergence while achieving comparable or even better performance than the ideal setting where all the available clients participate in the FL training.
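A toy sketch of heterogeneity-aware client sampling in the spirit of the abstract: score candidate client groups by the class-imbalance of their pooled label distribution and greedily pick clients that keep the pooled distribution balanced. The quadratic imbalance measure and greedy rule below are generic illustrative assumptions, and the sketch ignores the privacy-preserving derivation of the measure.

```python
import numpy as np

def class_imbalance(counts):
    """Quadratic distance of the pooled label distribution from uniform."""
    p = counts / max(counts.sum(), 1e-12)
    return float(np.sum((p - 1.0 / len(p)) ** 2))

def greedy_balanced_selection(client_label_counts, k):
    """Greedily pick k clients minimizing pooled class imbalance (sketch)."""
    n_clients, n_classes = client_label_counts.shape
    chosen, pooled = [], np.zeros(n_classes)
    for _ in range(k):
        best = min((c for c in range(n_clients) if c not in chosen),
                   key=lambda c: class_imbalance(pooled + client_label_counts[c]))
        chosen.append(best)
        pooled += client_label_counts[best]
    return chosen

rng = np.random.default_rng(0)
counts = rng.integers(0, 50, size=(20, 10)).astype(float)  # 20 clients, 10 classes
print(greedy_balanced_selection(counts, k=5))
```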
https://proceedings.mlr.press/v202/zhang23z.html
https://proceedings.mlr.press/v202/zhang23z/zhang23z.pdf
https://openreview.net/forum?id=gfdK6nK8AI
Learning Subpocket Prototypes for Generalizable Structure-based Drug Design
https://proceedings.mlr.press/v202/zhang23z.html
Zaixi Zhang, Qi Liu
https://proceedings.mlr.press/v202/zhang23z.html
ICML 2023
Generating molecules with high binding affinities to target proteins (a.k.a. structure-based drug design) is a fundamental and challenging task in drug discovery. Recently, deep generative models have achieved remarkable success in generating 3D molecules conditioned on the protein pocket. However, most existing methods consider molecular generation for protein pockets independently, neglecting underlying connections such as subpocket-level similarities. Subpockets are the local protein environments of ligand fragments, and pockets with similar subpockets may bind the same molecular fragment (motif) even though their overall structures are different. Because these connections are neglected, the trained models can hardly generalize to unseen protein pockets in real-world applications. In this paper, we propose a novel method, DrugGPS, for generalizable structure-based drug design. Guided by biochemical priors, we propose to learn subpocket prototypes and construct a global interaction graph to model the interactions between subpocket prototypes and molecular motifs. Moreover, a hierarchical graph transformer encoder and a motif-based 3D molecule generation scheme are used to improve the model’s performance. The experimental results show that our model consistently outperforms baselines in generating realistic drug candidates with high affinities in challenging out-of-distribution settings.
https://proceedings.mlr.press/v202/zhang23aa.html
https://proceedings.mlr.press/v202/zhang23aa/zhang23aa.pdf
https://openreview.net/forum?id=AMuNQEUmGr
No One Idles: Efficient Heterogeneous Federated Learning with Parallel Edge and Server Computation
https://proceedings.mlr.press/v202/zhang23aa.html
Feilong Zhang, Xianming Liu, Shiyi Lin, Gang Wu, Xiong Zhou, Junjun Jiang, Xiangyang Ji
https://proceedings.mlr.press/v202/zhang23aa.html
ICML 2023
Federated learning suffers from a latency bottleneck induced by network stragglers, which hampers the training efficiency significantly. In addition, due to the heterogeneous data distribution and security requirements, simple and fast averaging aggregation is not feasible anymore. Instead, complicated aggregation operations, such as knowledge distillation, are required. The time cost for complicated aggregation becomes a new bottleneck that limits the computational efficiency of FL. In this work, we claim that the root cause of training latency actually lies in the aggregation-then-broadcasting workflow of the server. By swapping the computational order of aggregation and broadcasting, we propose a novel and efficient parallel federated learning (PFL) framework that unlocks the edge nodes during global computation and the central server during local computation. This fully asynchronous and parallel pipeline enables handling complex aggregation and network stragglers, allowing flexible device participation as well as achieving scalability in computation. We theoretically prove that synchronous and asynchronous PFL can achieve a similar convergence rate as vanilla FL. Extensive experiments empirically show that our framework brings up to $5.56\times$ speedup compared with traditional FL. Code is available at: https://github.com/Hypervoyager/PFL.
https://proceedings.mlr.press/v202/zhang23ab.html
https://proceedings.mlr.press/v202/zhang23ab/zhang23ab.pdf
https://openreview.net/forum?id=xjGITa8bxl
The Wisdom of Hindsight Makes Language Models Better Instruction Followers
https://proceedings.mlr.press/v202/zhang23ab.html
Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez
https://proceedings.mlr.press/v202/zhang23ab.html
ICML 2023
Reinforcement learning has seen wide success in finetuning large language models to better align with instructions via human feedback. The algorithm, Reinforcement Learning with Human Feedback (RLHF), demonstrates impressive performance on the GPT series models. However, the underlying reinforcement learning algorithm is complex and requires additional training for reward and value networks. In this paper, we consider an alternative approach: converting feedback into instructions by relabeling the original ones and training the model for better alignment in a supervised manner. Such an algorithm doesn’t require any additional parameters beyond the original language model and maximally reuses the pretraining pipeline. To achieve this, we formulate the instruction alignment problem for language models as a goal-reaching problem in decision making. We propose Hindsight Instruction Relabeling (HIR), a novel algorithm for aligning language models with instructions. The resulting two-stage algorithm sheds light on a family of reward-free approaches that utilize instructions relabeled in hindsight based on feedback. We evaluate the performance of HIR extensively on 12 challenging BigBench reasoning tasks and show that HIR outperforms the baseline algorithms and is comparable to or even surpasses supervised fine-tuning. The implementation of HIR is available at https://github.com/tianjunz/HIR.
https://proceedings.mlr.press/v202/zhang23ac.html
https://proceedings.mlr.press/v202/zhang23ac/zhang23ac.pdf
https://openreview.net/forum?id=QveIdCjDUi
Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score
https://proceedings.mlr.press/v202/zhang23ac.html
Shuhai Zhang, Feng Liu, Jiahao Yang, Yifan Yang, Changsheng Li, Bo Han, Mingkui Tan
https://proceedings.mlr.press/v202/zhang23ac.html
ICML 2023
Adversarial detection aims to determine whether a given sample is an adversarial one based on the discrepancy between natural and adversarial distributions. Unfortunately, estimating or comparing two data distributions is extremely difficult, especially in high-dimensional spaces. Recently, the gradient of the log probability density (a.k.a. the score) w.r.t. the sample has been used as an alternative statistic to compute. However, we find that the score computed from a single sample is unreliable for identifying adversarial samples, since one sample alone provides insufficient information. In this paper, we propose a new statistic called the expected perturbation score (EPS), which is essentially the expected score of a sample after various perturbations. Specifically, to obtain adequate information regarding one sample, we perturb it by adding various noises to capture its multi-view observations. We theoretically prove that EPS is a proper statistic for computing the discrepancy between two samples under mild conditions. In practice, we can use a pre-trained diffusion model to estimate EPS for each sample. Last, we propose an EPS-based adversarial detection (EPS-AD) method, in which we develop an EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples. We also prove that the EPS-based MMD between natural and adversarial samples is larger than that among natural samples. Extensive experiments show the superior adversarial detection performance of our EPS-AD.
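In the paper, EPS is estimated with a pre-trained diffusion model. The sketch below conveys only the averaging-over-perturbations idea, substituting a closed-form Gaussian score for the diffusion model's score network; the score function, noise levels, and "shifted" sample are assumptions for illustration.

```python
# Illustrative expected-perturbation-score (EPS) computation. The paper
# estimates the score with a pre-trained diffusion model; here a closed-form
# standard-Gaussian score stands in, purely to show the averaging idea.
import numpy as np

def score_fn(x, sigma):
    # Stand-in: if x ~ N(0, I), then x + N(0, sigma^2 I) ~ N(0, (1 + sigma^2) I),
    # whose score at a point y is -y / (1 + sigma^2).
    return -x / (1.0 + sigma ** 2)

def expected_perturbation_score(x, sigmas, n_draws=64, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for sigma in sigmas:
        noise = rng.standard_normal((n_draws,) + x.shape) * sigma
        scores.append(score_fn(x + noise, sigma).mean(axis=0))
    return np.stack(scores)  # one averaged score per noise level

x_natural = np.random.default_rng(1).standard_normal(10)
x_shifted = x_natural + 2.0  # crude proxy for an out-of-distribution sample
eps_nat = expected_perturbation_score(x_natural, sigmas=[0.1, 0.5, 1.0])
eps_shift = expected_perturbation_score(x_shifted, sigmas=[0.1, 0.5, 1.0])
print(np.linalg.norm(eps_nat), np.linalg.norm(eps_shift))
```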
https://proceedings.mlr.press/v202/zhang23ad.html
https://proceedings.mlr.press/v202/zhang23ad/zhang23ad.pdf
https://openreview.net/forum?id=uIOw2ZE1U8
On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network
https://proceedings.mlr.press/v202/zhang23ad.html
Shijun Zhang, Jianfeng Lu, Hongkai Zhao
https://proceedings.mlr.press/v202/zhang23ad.html
ICML 2023
This paper explores the expressive power of deep neural networks through the framework of function compositions. We demonstrate that the repeated compositions of a single fixed-size ReLU network exhibit surprising expressive power, despite the limited expressive capabilities of the individual network itself. Specifically, we prove by construction that $\mathcal{L}_2\circ \boldsymbol{g}^{\circ r}\circ \boldsymbol{\mathcal{L}}_1$ can approximate $1$-Lipschitz continuous functions on $[0,1]^d$ with an error $\mathcal{O}(r^{-1/d})$, where $\boldsymbol{g}$ is realized by a fixed-size ReLU network, $\boldsymbol{\mathcal{L}}_1$ and $\mathcal{L}_2$ are two affine linear maps matching the dimensions, and $\boldsymbol{g}^{\circ r}$ denotes the $r$-times composition of $\boldsymbol{g}$. Furthermore, we extend such a result to generic continuous functions on $[0,1]^d$ with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has immense approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
https://proceedings.mlr.press/v202/zhang23ae.html
https://proceedings.mlr.press/v202/zhang23ae/zhang23ae.pdf
https://openreview.net/forum?id=1SvVuUNN5I
Bi-directional Masks for Efficient N:M Sparse Training
https://proceedings.mlr.press/v202/zhang23ae.html
Yuxin Zhang, Yiting Luo, Mingbao Lin, Yunshan Zhong, Jingjing Xie, Fei Chao, Rongrong Ji
https://proceedings.mlr.press/v202/zhang23ae.html
ICML 2023
We focus on addressing the dense backward propagation issue to improve the training efficiency of N:M fine-grained sparsity, which preserves at most N out of M consecutive weights and achieves practical speedups supported by the N:M sparse tensor core. To this end, we present a novel method of Bi-directional Masks (Bi-Mask) with two central innovations: 1) separate sparse masks in the forward and backward directions to obtain training acceleration; this disentangles forward and backward weight sparsity and avoids the otherwise dense gradient computation; 2) an efficient weight row permutation method to maintain performance, which picks the permutation candidate with the most eligible N:M weight blocks in the backward pass to minimize the gradient gap between traditional uni-directional masks and our bi-directional masks. Compared with the existing uni-directional scenario that applies a transposable mask and enables backward acceleration, our Bi-Mask is experimentally demonstrated to be superior in performance. Moreover, our Bi-Mask performs on par with or even better than methods that fail to achieve backward acceleration. The project for this paper is available at https://github.com/zyxxmu/Bi-Mask.
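Bi-Mask's contributions are the separate forward/backward masks and the row permutation, which are not reproduced here. The sketch below shows only the basic N:M magnitude mask (keep the N largest-magnitude weights in every group of M consecutive weights) that such methods build on; the array shapes are illustrative.

```python
# Basic N:M sparsity mask: keep the N largest-magnitude weights in every group
# of M consecutive weights. Bi-Mask builds on this with separate forward and
# backward masks plus row permutations, which are not shown here.
import numpy as np

def nm_mask(weights, n=2, m=4):
    flat = weights.reshape(-1, m)                   # groups of M consecutive weights
    idx = np.argsort(np.abs(flat), axis=1)[:, -n:]  # indices of the N largest magnitudes
    mask = np.zeros_like(flat)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask.reshape(weights.shape)

w = np.random.default_rng(0).standard_normal((8, 16))  # last dim divisible by M=4
mask = nm_mask(w, n=2, m=4)
print(mask.reshape(-1, 4).sum(axis=1))                 # every group keeps exactly 2 weights
```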
https://proceedings.mlr.press/v202/zhang23af.html
https://proceedings.mlr.press/v202/zhang23af/zhang23af.pdf
https://openreview.net/forum?id=gHfybro5Sj
Towards Unbiased Training in Federated Open-world Semi-supervised Learning
https://proceedings.mlr.press/v202/zhang23af.html
Jie Zhang, Xiaosong Ma, Song Guo, Wenchao Xu
https://proceedings.mlr.press/v202/zhang23af.html
ICML 2023
Federated Semi-supervised Learning (FedSSL) has emerged as a new paradigm for allowing distributed clients to collaboratively train a machine learning model over scarce labeled data and abundant unlabeled data. However, existing works on FedSSL rely on a closed-world assumption that all local training data and global testing data are from seen classes observed in the labeled dataset. It is crucial to go one step further: adapting FL models to an open-world setting, where unseen classes exist in the unlabeled data. In this paper, we propose a novel Federated open-world Semi-Supervised Learning (FedoSSL) framework, which can solve the key challenge in distributed and open-world settings, i.e., the biased training process for heterogeneously distributed unseen classes. Specifically, since whether a particular unseen class appears depends on the individual client, locally unseen classes (which exist in multiple clients) are likely to receive stronger aggregation effects than globally unseen classes (which exist in only one client). We adopt an uncertainty-aware suppressed loss to alleviate the biased training between locally unseen and globally unseen classes. In addition, we add a calibration module, supplementary to the global aggregation, to avoid potentially conflicting knowledge transfer caused by inconsistent data distributions among different clients. The proposed FedoSSL can be easily adapted to state-of-the-art FL methods, which is also validated via extensive experiments on benchmarks and real-world datasets (CIFAR-10, CIFAR-100 and CINIC-10).
https://proceedings.mlr.press/v202/zhang23ag.html
https://proceedings.mlr.press/v202/zhang23ag/zhang23ag.pdf
https://openreview.net/forum?id=hwHBaL7wur
Interactive Object Placement with Reinforcement Learning
https://proceedings.mlr.press/v202/zhang23ag.html
Shengping Zhang, Quanling Meng, Qinglin Liu, Liqiang Nie, Bineng Zhong, Xiaopeng Fan, Rongrong Ji
https://proceedings.mlr.press/v202/zhang23ag.html
ICML 2023
Object placement aims to insert a foreground object into a background image with a suitable location and size to create a natural composition. To predict a diverse distribution of placements, existing methods usually establish a one-to-one mapping from random vectors to the placements. However, these random vectors are not interpretable, which prevents users from interacting with the object placement process. To address this problem, we propose an Interactive Object Placement method with Reinforcement Learning, dubbed IOPRE, to make sequential decisions for producing a reasonable placement given an initial location and size of the foreground. We first design a novel action space to flexibly and stably adjust the location and size of the foreground while preserving its aspect ratio. Then, we propose a multi-factor state representation learning method, which integrates composition image features and sinusoidal positional embeddings of the foreground to make decisions for selecting actions. Finally, we design a hybrid reward function that combines placement assessment and the number of steps to ensure that the agent learns to place objects in the most visually pleasing and semantically appropriate location. Experimental results on the OPA dataset demonstrate that the proposed method achieves state-of-the-art performance in terms of plausibility and diversity.
https://proceedings.mlr.press/v202/zhang23ah.html
https://proceedings.mlr.press/v202/zhang23ah/zhang23ah.pdf
https://openreview.net/forum?id=bbKEGbS7aN
Optimal Shrinkage for Distributed Second-Order Optimization
https://proceedings.mlr.press/v202/zhang23ah.html
Fangzhao Zhang, Mert Pilanci
https://proceedings.mlr.press/v202/zhang23ah.html
ICML 2023
In this work, we address the problem of Hessian inversion bias in distributed second-order optimization algorithms. We introduce a novel shrinkage-based estimator for the resolvent of Gram matrices which is asymptotically unbiased, and characterize its non-asymptotic convergence rate in the isotropic case. We apply this estimator to bias correction of Newton steps in distributed second-order optimization algorithms, as well as randomized-sketching-based methods. We examine the bias present in the naive averaging-based distributed Newton’s method using analytical expressions and contrast it with our proposed bias-free approach. Our approach leads to significant improvements in convergence rate compared to standard baselines and recent proposals, as shown through experiments on both real and synthetic datasets.
https://proceedings.mlr.press/v202/zhang23ai.html
https://proceedings.mlr.press/v202/zhang23ai/zhang23ai.pdf
https://openreview.net/forum?id=LwSKljRST0
"Why did the Model Fail?": Attributing Model Performance Changes to Distribution Shifts
https://proceedings.mlr.press/v202/zhang23ai.html
Haoran Zhang, Harvineet Singh, Marzyeh Ghassemi, Shalmali Joshi
https://proceedings.mlr.press/v202/zhang23ai.html
ICML 2023
Machine learning models frequently experience performance drops under distribution shifts. The underlying cause of such shifts may be multiple simultaneous factors such as changes in data quality, differences in specific covariate distributions, or changes in the relationship between label and features. When a model does fail during deployment, attributing performance change to these factors is critical for the model developer to identify the root cause and take mitigating actions. In this work, we introduce the problem of attributing performance differences between environments to distribution shifts in the underlying data generating mechanisms. We formulate the problem as a cooperative game where the players are distributions. We define the value of a set of distributions to be the change in model performance when only this set of distributions has changed between environments, and derive an importance weighting method for computing the value of an arbitrary set of distributions. The contribution of each distribution to the total performance change is then quantified as its Shapley value. We demonstrate the correctness and utility of our method on synthetic, semi-synthetic, and real-world case studies, showing its effectiveness in attributing performance changes to a wide range of distribution shifts.
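A minimal sketch of the Shapley attribution step on a toy example: given a value function mapping each subset of shifted distributions to the induced performance change, compute exact Shapley values by enumerating subsets. The value function below is hypothetical; the paper estimates subset values with importance weighting.

```python
# Exact Shapley values for a small set of "players" (candidate distribution
# shifts). value(S) should return the performance change observed when only
# the distributions in S shift; here a hypothetical value function stands in
# for the paper's importance-weighting estimator.
from itertools import combinations
from math import factorial

players = ["covariates", "label_given_x", "data_quality"]

def value(subset):
    # Hypothetical: covariate shift explains most of the drop, with a small
    # interaction between covariate shift and label shift.
    v = 0.0
    if "covariates" in subset:
        v += 0.06
    if "label_given_x" in subset:
        v += 0.02
    if {"covariates", "label_given_x"} <= set(subset):
        v += 0.01
    return v

def shapley(players, value):
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (value(set(S) | {p}) - value(set(S)))
    return phi

print(shapley(players, value))  # contributions sum to value(all players)
```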
https://proceedings.mlr.press/v202/zhang23aj.html
https://proceedings.mlr.press/v202/zhang23aj/zhang23aj.pdf
https://openreview.net/forum?id=6aB43K50T0
Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation
https://proceedings.mlr.press/v202/zhang23aj.html
Fengxue Zhang, Jialin Song, James C Bowden, Alexander Ladd, Yisong Yue, Thomas Desautels, Yuxin Chen
https://proceedings.mlr.press/v202/zhang23aj.html
ICML 2023
We study Bayesian optimization (BO) in high-dimensional and non-stationary scenarios. Existing algorithms for such scenarios typically require extensive hyperparameter tuning, which limits their practical effectiveness. We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest (ROI) as a superlevel-set of a nonparametric probabilistic model such as a Gaussian process (GP). Our approach is easy to tune and is able to focus on a local region of the optimization space that can be tackled by existing BO methods. The key idea is to use two probabilistic models: a coarse GP to identify the ROI, and a localized GP for optimization within the ROI. We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO without ROI filtering. We demonstrate empirically the effectiveness of BALLET on both synthetic and real-world optimization tasks.
https://proceedings.mlr.press/v202/zhang23ak.html
https://proceedings.mlr.press/v202/zhang23ak/zhang23ak.pdf
https://openreview.net/forum?id=i0CVEg8kAN
A Category-theoretical Meta-analysis of Definitions of Disentanglement
https://proceedings.mlr.press/v202/zhang23ak.html
Yivan Zhang, Masashi Sugiyama
https://proceedings.mlr.press/v202/zhang23ak.html
ICML 2023
Disentangling the factors of variation in data is a fundamental concept in machine learning and has been studied in various ways by different researchers, leading to a multitude of definitions. Despite the numerous empirical studies, more theoretical research is needed to fully understand the defining properties of disentanglement and how different definitions relate to each other. This paper presents a meta-analysis of existing definitions of disentanglement, using category theory as a unifying and rigorous framework. We propose that the concepts of the cartesian and monoidal products should serve as the core of disentanglement. With these core concepts, we show the similarities and crucial differences in dealing with (i) functions, (ii) equivariant maps, (iii) relations, and (iv) stochastic maps. Overall, our meta-analysis deepens our understanding of disentanglement and its various formulations and can help researchers navigate different definitions and choose the most appropriate one for their specific context.
https://proceedings.mlr.press/v202/zhang23al.html
https://proceedings.mlr.press/v202/zhang23al/zhang23al.pdf
https://openreview.net/forum?id=SuBZ98IyJO
On the Convergence of SARSA with Linear Function Approximation
https://proceedings.mlr.press/v202/zhang23al.html
Shangtong Zhang, Remi Tachet Des Combes, Romain Laroche
https://proceedings.mlr.press/v202/zhang23al.html
ICML 2023
SARSA, a classical on-policy control algorithm for reinforcement learning, is known to chatter when combined with linear function approximation: SARSA does not diverge but oscillates in a bounded region. However, little is known about how fast SARSA converges to that region and how large the region is. In this paper, we make progress towards this open problem by showing the convergence rate of projected SARSA to a bounded region. Importantly, the region is much smaller than the region that we project into, provided that the magnitude of the reward is not too large. Existing works regarding the convergence of linear SARSA to a fixed point all require the Lipschitz constant of SARSA’s policy improvement operator to be sufficiently small; our analysis instead applies to arbitrary Lipschitz constants and thus characterizes the behavior of linear SARSA for a new regime.
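For reference, a generic sketch of the algorithm being analyzed: projected SARSA with linear function approximation and a softmax policy-improvement operator, on a toy chain MDP. The environment, features, and hyperparameters are illustrative; the paper's contribution is the convergence analysis, not this code.

```python
# Generic projected SARSA with linear function approximation on a toy 5-state
# chain MDP, using a softmax policy (a Lipschitz policy-improvement operator).
import numpy as np

n_states, n_actions, gamma, alpha, radius = 5, 2, 0.9, 0.1, 10.0
rng = np.random.default_rng(0)

def features(s, a):
    phi = np.zeros(n_states * n_actions)
    phi[s * n_actions + a] = 1.0  # one-hot features for simplicity
    return phi

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s_next, (1.0 if s_next == n_states - 1 else 0.0)

def softmax_policy(w, s, temperature=0.5):
    q = np.array([w @ features(s, a) for a in range(n_actions)])
    p = np.exp((q - q.max()) / temperature)
    p /= p.sum()
    return int(rng.choice(n_actions, p=p))

w, s = np.zeros(n_states * n_actions), 0
a = softmax_policy(w, s)
for _ in range(5000):
    s_next, r = step(s, a)
    a_next = softmax_policy(w, s_next)
    td_error = r + gamma * w @ features(s_next, a_next) - w @ features(s, a)
    w += alpha * td_error * features(s, a)
    if np.linalg.norm(w) > radius:     # projection onto an l2 ball
        w *= radius / np.linalg.norm(w)
    s, a = s_next, a_next
print(w.reshape(n_states, n_actions))  # learned action-values per state
```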
https://proceedings.mlr.press/v202/zhang23am.html
https://proceedings.mlr.press/v202/zhang23am/zhang23am.pdf
https://openreview.net/forum?id=2DiZF15Kgc
AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation
https://proceedings.mlr.press/v202/zhang23am.html
Yifan Zhang, Xue Wang, Kexin Jin, Kun Yuan, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
https://proceedings.mlr.press/v202/zhang23am.html
ICML 2023
Many recent machine learning tasks focus on developing models that can generalize to unseen distributions. Domain generalization (DG) has become one of the key topics in various fields. Several works show that DG can be arbitrarily hard without exploiting target domain information. To address this issue, test-time adaptation (TTA) methods have been proposed. Existing TTA methods require offline target data or extra sophisticated optimization procedures during the inference stage. In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC). In particular, we construct a memory that contains the feature and label pairs from training domains. During inference, given a test instance, AdaNPC first recalls the $k$ closest samples from the memory to vote for the prediction, and then the test feature and predicted label are added to the memory. In this way, the sample distribution in the memory can gradually shift from the training distribution towards the test distribution with very little extra computation cost. We theoretically justify the rationale behind the proposed method. In addition, we evaluate our model in extensive numerical experiments. AdaNPC significantly outperforms competitive baselines on various DG benchmarks. In particular, when the adaptation target is a series of domains, the adaptation accuracy of AdaNPC is $50$% higher than that of advanced TTA methods.
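A minimal sketch of the memory-based, non-parametric predict-then-update loop described above, with raw inputs standing in for features from a trained extractor; the class and parameter names are illustrative, not the paper's implementation.

```python
# Minimal sketch of a memory-based k-NN test-time classifier in the spirit of
# AdaNPC: vote with the k nearest stored (feature, label) pairs, then append
# the test feature with its predicted label to the memory.
import numpy as np

class KNNMemoryClassifier:
    def __init__(self, features, labels, k=5):
        self.features = list(features)
        self.labels = list(labels)
        self.k = k

    def predict_and_adapt(self, x):
        feats = np.asarray(self.features)
        dists = np.linalg.norm(feats - x, axis=1)
        nearest = np.argsort(dists)[: self.k]
        votes = np.bincount([self.labels[i] for i in nearest])
        pred = int(np.argmax(votes))
        self.features.append(x)   # memory gradually drifts toward the test distribution
        self.labels.append(pred)
        return pred

rng = np.random.default_rng(0)
train_x = np.concatenate([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
train_y = np.array([0] * 50 + [1] * 50)
clf = KNNMemoryClassifier(train_x, train_y, k=5)
test_x = np.concatenate([rng.normal(-1.5, 1, (20, 2)), rng.normal(2.5, 1, (20, 2))])  # mild shift
preds = [clf.predict_and_adapt(x) for x in test_x]
print(preds)
```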
https://proceedings.mlr.press/v202/zhang23an.html
https://proceedings.mlr.press/v202/zhang23an/zhang23an.pdf
https://openreview.net/forum?id=lGpttiCpFZ
On the Generalization of Multi-modal Contrastive Learning
https://proceedings.mlr.press/v202/zhang23an.html
Qi Zhang, Yifei Wang, Yisen Wang
https://proceedings.mlr.press/v202/zhang23an.html
ICML 2023
Multi-modal contrastive learning (MMCL) has recently garnered considerable interest due to its superior performance in visual tasks, achieved by embedding multi-modal data, such as visual-language pairs. However, there is still a lack of theoretical understanding of how MMCL extracts useful visual representations from multi-modal pairs and, in particular, how MMCL outperforms previous approaches such as self-supervised contrastive learning (SSCL). In this paper, by drawing an intrinsic connection between MMCL and asymmetric matrix factorization, we establish the first generalization guarantees of MMCL for visual downstream tasks. Based on this framework, we further unify MMCL and SSCL by showing that MMCL implicitly performs SSCL with (pseudo) positive pairs induced by text pairs. Through this unified perspective, we characterize the advantage of MMCL by showing that text pairs induce more semantically consistent and diverse positive pairs, which, according to our analysis, provably benefit downstream generalization. Inspired by this finding, we propose several methods to significantly improve the downstream performance of SSCL on ImageNet by leveraging multi-modal information. Code is available at https://github.com/PKU-ML/CLIP-Help-SimCLR.
https://proceedings.mlr.press/v202/zhang23ao.html
https://proceedings.mlr.press/v202/zhang23ao/zhang23ao.pdf
https://openreview.net/forum?id=6LJvlAiD9z
ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction
https://proceedings.mlr.press/v202/zhang23ao.html
Wang Zhang, Tsui-Wei Weng, Subhro Das, Alexandre Megretski, Luca Daniel, Lam M. Nguyen
https://proceedings.mlr.press/v202/zhang23ao.html
ICML 2023
Deep neural networks (DNNs) have shown great capacity for modeling dynamical systems; nevertheless, they usually do not obey physics constraints such as conservation laws. This paper proposes a new learning framework named $\textbf{ConCerNet}$ to improve the trustworthiness of DNN-based dynamics modeling by endowing it with invariant properties. $\textbf{ConCerNet}$ consists of two steps: (i) a contrastive learning method to automatically capture the system invariants (i.e. conservation properties) along the trajectory observations; (ii) a neural projection layer to guarantee that the learned dynamics models preserve the learned invariants. We theoretically prove the functional relationship between the learned latent representation and the unknown system invariant function. Experiments show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics by a large margin. With neural-network-based parameterization and no dependence on prior knowledge, our method can be extended to complex and large-scale dynamics by leveraging an autoencoder.
https://proceedings.mlr.press/v202/zhang23ap.html
https://proceedings.mlr.press/v202/zhang23ap/zhang23ap.pdf
https://openreview.net/forum?id=fvTgh4MNUV
Towards Trustworthy Explanation: On Causal Rationalization
https://proceedings.mlr.press/v202/zhang23ap.html
Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, Hengrui Cai
https://proceedings.mlr.press/v202/zhang23ap.html
ICML 2023
With recent advances in natural language processing, rationalization has become an essential self-explaining paradigm for disentangling the black box by selecting a subset of input texts to account for the major variation in prediction. Yet, existing association-based approaches to rationalization cannot identify true rationales when two or more snippets are highly inter-correlated and thus provide a similar contribution to prediction accuracy, a phenomenon known as spuriousness. To address this limitation, we leverage two causal desiderata, non-spuriousness and efficiency, in rationalization from the causal inference perspective. We formally define a series of probabilities of causation based on a newly proposed structural causal model of rationalization, with its theoretical identification established as the main component of learning necessary and sufficient rationales. The superior performance of the proposed causal rationalization is demonstrated on real-world review and medical datasets with extensive experiments compared to state-of-the-art methods.
https://proceedings.mlr.press/v202/zhang23aq.html
https://proceedings.mlr.press/v202/zhang23aq/zhang23aq.pdf
https://openreview.net/forum?id=FQzJ3zBYLO
Demystifying Uneven Vulnerability of Link Stealing Attacks against Graph Neural Networks
https://proceedings.mlr.press/v202/zhang23aq.html
He Zhang, Bang Wu, Shuo Wang, Xiangwen Yang, Minhui Xue, Shirui Pan, Xingliang Yuan
https://proceedings.mlr.press/v202/zhang23aq.html
ICML 2023
While graph neural networks (GNNs) dominate the state-of-the-art for exploring graphs in real-world applications, they have been shown to be vulnerable to a growing number of privacy attacks. For instance, link stealing is a well-known membership inference attack (MIA) on edges that infers the presence of an edge in a GNN’s training graph. Recent studies on independent and identically distributed data (e.g., images) have empirically demonstrated that individuals from different groups suffer from different levels of privacy risks to MIAs, i.e., uneven vulnerability. However, theoretical evidence of such uneven vulnerability is missing. In this paper, we first present theoretical evidence of the uneven vulnerability of GNNs to link stealing attacks, which lays the foundation for demystifying such uneven risks among different groups of edges. We further demonstrate a group-based attack paradigm to expose the practical privacy harm to GNN users derived from the uneven vulnerability of edges. Finally, we empirically validate the existence of obvious uneven vulnerability on nine real-world datasets (e.g., about 25% AUC difference between different groups in the Credit graph). Compared with existing methods, the outperformance of our group-based attack paradigm confirms that customising different strategies for different groups results in more effective privacy attacks.
https://proceedings.mlr.press/v202/zhang23ar.html
https://proceedings.mlr.press/v202/zhang23ar/zhang23ar.pdf
https://openreview.net/forum?id=2MF4aDRfiE
Provable Dynamic Fusion for Low-Quality Multimodal Data
https://proceedings.mlr.press/v202/zhang23ar.html
Qingyang Zhang, Haitao Wu, Changqing Zhang, Qinghua Hu, Huazhu Fu, Joey Tianyi Zhou, Xi Peng
https://proceedings.mlr.press/v202/zhang23ar.html
ICML 2023
The inherent challenge of multimodal fusion is to precisely capture the cross-modal correlation and flexibly conduct cross-modal interaction. To fully release the value of each modality and mitigate the influence of low-quality multimodal data, dynamic multimodal fusion emerges as a promising learning paradigm. Despite its widespread use, theoretical justifications in this field are still notably lacking. Can we design a provably robust multimodal fusion method? This paper provides a theoretical understanding to answer this question, under one of the most popular multimodal fusion frameworks, from the generalization perspective. We proceed to reveal that several uncertainty estimation solutions are naturally available to achieve robust multimodal fusion. Then a novel multimodal fusion framework termed Quality-aware Multimodal Fusion (QMF) is proposed, which can improve the performance in terms of classification accuracy and model robustness. Extensive experimental results on multiple benchmarks support our findings.
https://proceedings.mlr.press/v202/zhang23as.html
https://proceedings.mlr.press/v202/zhang23as/zhang23as.pdf
https://openreview.net/forum?id=SP01yVIC2o
ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
https://proceedings.mlr.press/v202/zhang23as.html
Kexun Zhang, Xianjun Yang, William Yang Wang, Lei Li
https://proceedings.mlr.press/v202/zhang23as.html
ICML 2023
Diffusion models show promising generation capability for a variety of data. Despite their high generation quality, inference for diffusion models is still time-consuming due to the numerous sampling iterations required. To accelerate inference, we propose ReDi, a simple yet learning-free Retrieval-based Diffusion sampling framework. From a precomputed knowledge base, ReDi retrieves a trajectory similar to the partially generated trajectory at an early stage of generation, skips a large portion of intermediate steps, and continues sampling from a later step in the retrieved trajectory. We theoretically prove that the generation performance of ReDi is guaranteed. Our experiments demonstrate that ReDi improves model inference efficiency with a 2$\times$ speedup. Furthermore, ReDi is able to generalize well in zero-shot cross-domain image generation such as image stylization. The code and demo for ReDi are available at https://github.com/zkx06111/ReDiffusion.
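A sketch of the retrieval step only, with plain vectors standing in for diffusion latents and the sampler itself omitted; the key/value terminology and the nearest-neighbor rule are a simplification of the paper's trajectory matching, used here purely for illustration.

```python
# Sketch of ReDi-style retrieval: each knowledge-base entry stores a
# trajectory's state at an early step k (the key) and its state at a later
# step m (the value). Given a partial generation at step k, jump to the
# nearest neighbor's step-m state and continue sampling from there.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entries = 16, 100
keys = rng.standard_normal((n_entries, dim))    # stored states at early step k
values = rng.standard_normal((n_entries, dim))  # same trajectories at later step m

def redi_skip(partial_state_at_k):
    dists = np.linalg.norm(keys - partial_state_at_k, axis=1)
    j = int(np.argmin(dists))
    return values[j]  # the (omitted) sampler would continue from this state

x_k = rng.standard_normal(dim)  # partially generated trajectory at step k
x_m = redi_skip(x_k)            # skipped ahead to step m
print(x_m[:4])
```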
https://proceedings.mlr.press/v202/zhang23at.html
https://proceedings.mlr.press/v202/zhang23at/zhang23at.pdf
https://openreview.net/forum?id=pWVeL1NuK7
Nearly Optimal Competitive Ratio for Online Allocation Problems with Two-sided Resource Constraints and Finite Requests
https://proceedings.mlr.press/v202/zhang23at.html
Qixin Zhang, Wenbing Ye, Zaiyi Chen, Haoyuan Hu, Enhong Chen, Yu Yang
https://proceedings.mlr.press/v202/zhang23at.html
ICML 2023
In this paper, we investigate the online allocation problem of maximizing the overall revenue subject to both lower and upper bound constraints. Compared to the extensively studied online problems with only resource upper bounds, the two-sided constraints affect the prospects of resource consumption more severely. As a result, only limited violations of constraints or pessimistic competitive bounds could be guaranteed. To tackle the challenge, we define a measure of feasibility $\xi^*$ to evaluate the hardness of this problem, and estimate this measurement by an optimization routine with theoretical guarantees. We propose an online algorithm adopting a constructive framework, where we initialize a threshold price vector using the estimation, then dynamically update the price vector and use it for decision-making at each step. It can be shown that the proposed algorithm is $\big(1-O(\frac{\varepsilon}{\xi^*-\varepsilon})\big)$ or $\big(1-O(\frac{\varepsilon}{\xi^*-\sqrt{\varepsilon}})\big)$ competitive with high probability for $\xi^*$ known or unknown respectively. To the best of our knowledge, this is the first result establishing a nearly optimal competitive algorithm for solving two-sided constrained online allocation problems with a high probability of feasibility.
https://proceedings.mlr.press/v202/zhang23au.html
https://proceedings.mlr.press/v202/zhang23au/zhang23au.pdf
https://openreview.net/forum?id=GSqoaNo7Qg
Do You Remember? Overcoming Catastrophic Forgetting for Fake Audio Detection
https://proceedings.mlr.press/v202/zhang23au.html
Xiaohui Zhang, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Chu Yuan Zhang
https://proceedings.mlr.press/v202/zhang23au.html
ICML 2023
Current fake audio detection algorithms have achieved promising performance on most datasets. However, their performance may be significantly degraded when dealing with audio from a different dataset. The orthogonal weight modification used to overcome catastrophic forgetting does not consider the similarity of genuine audio across different datasets. To overcome this limitation, we propose a continual learning algorithm for fake audio detection to overcome catastrophic forgetting, called Regularized Adaptive Weight Modification (RAWM). When fine-tuning a detection network, our approach adaptively computes the direction of weight modification according to the ratio of genuine utterances to fake utterances. The adaptive modification direction ensures the network can effectively detect fake audio on the new dataset while preserving its knowledge of the old model, thus mitigating catastrophic forgetting. In addition, genuine audio collected under quite different acoustic conditions may have a skewed feature distribution, so we introduce a regularization constraint to force the network to remember the old distribution in this regard. Our method can easily be generalized to related fields, such as speech emotion recognition. We also evaluate our approach across multiple datasets and obtain a significant performance improvement in cross-dataset experiments.
https://proceedings.mlr.press/v202/zhang23av.html
https://proceedings.mlr.press/v202/zhang23av/zhang23av.pdf
https://openreview.net/forum?id=tgXxVlWkmb
Coder Reviewer Reranking for Code Generation
https://proceedings.mlr.press/v202/zhang23av.html
Tianyi Zhang, Tao Yu, Tatsunori Hashimoto, Mike Lewis, Wen-Tau Yih, Daniel Fried, Sida Wang
https://proceedings.mlr.press/v202/zhang23av.html
ICML 2023
Sampling diverse programs from a code language model and reranking with model likelihood is a popular method for code generation but it is prone to preferring degenerate solutions. Inspired by collaborative programming, we propose Coder-Reviewer reranking. We augment Coder language models from past work, which generate programs given language instructions, with Reviewer models, which evaluate the likelihood of the instruction given the generated programs. We perform an extensive study across six datasets with eight models from three model families. Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement (up to 17% absolute accuracy gain) over reranking with the Coder model only. When combined with executability filtering, Coder-Reviewer reranking can often outperform the minimum Bayes risk method. Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
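The reranking rule itself is a one-liner: pick the candidate maximizing the sum of the Coder and Reviewer log-likelihoods. In the sketch below, the two scoring functions are placeholders (hypothetical) standing in for log-likelihood queries to an actual code language model prompted in the forward and reverse directions.

```python
# Coder-Reviewer reranking in its simplest form: among sampled candidate
# programs, pick the one maximizing
#   log p(program | instruction) + log p(instruction | program).
def coder_logp(program, instruction):
    # Placeholder: would return the LM log-likelihood of `program` given `instruction`.
    return -0.01 * len(program)

def reviewer_logp(instruction, program):
    # Placeholder: would return the LM log-likelihood of `instruction` given `program`.
    return -0.02 * len(instruction)

def coder_reviewer_rerank(instruction, candidate_programs):
    return max(
        candidate_programs,
        key=lambda prog: coder_logp(prog, instruction) + reviewer_logp(instruction, prog),
    )

candidates = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return sum([a, b])",
]
print(coder_reviewer_rerank("write a function that adds two numbers", candidates))
```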
https://proceedings.mlr.press/v202/zhang23aw.html
https://proceedings.mlr.press/v202/zhang23aw/zhang23aw.pdf
https://openreview.net/forum?id=ksMYhj4XGf
DP-Fast MH: Private, Fast, and Accurate Metropolis-Hastings for Large-Scale Bayesian Inference
https://proceedings.mlr.press/v202/zhang23aw.html
Wanrong Zhang, Ruqi Zhang
https://proceedings.mlr.press/v202/zhang23aw.html
ICML 2023
Bayesian inference provides a principled framework for learning from complex data and reasoning under uncertainty. It has been widely applied in machine learning tasks such as medical diagnosis, drug design, and policymaking. In these common applications, data can be highly sensitive. Differential privacy (DP) offers data analysis tools with powerful worst-case privacy guarantees and has been developed as the leading approach in privacy-preserving data analysis. In this paper, we study Metropolis-Hastings (MH), one of the most fundamental MCMC methods, for large-scale Bayesian inference under differential privacy. While most existing private MCMC algorithms sacrifice accuracy and efficiency to obtain privacy, we provide the first exact and fast DP MH algorithm, using only a minibatch of data in most iterations. We further reveal, for the first time, a three-way trade-off among privacy, scalability (i.e. the batch size), and efficiency (i.e. the convergence rate), theoretically characterizing how privacy affects the utility and computational cost in Bayesian inference. We empirically demonstrate the effectiveness and efficiency of our algorithm in various experiments.
https://proceedings.mlr.press/v202/zhang23ax.html
https://proceedings.mlr.press/v202/zhang23ax/zhang23ax.pdf
https://openreview.net/forum?id=F5bcSnILOZ
Nearly-tight Bounds for Deep Kernel Learning
https://proceedings.mlr.press/v202/zhang23ax.html
Yifan Zhang, Min-Ling Zhang
https://proceedings.mlr.press/v202/zhang23ax.html
ICML 2023
The generalization analysis of deep kernel learning (DKL) is a crucial and open problem of kernel methods for deep learning. The implicit nonlinear mapping in DKL makes existing methods of capacity-based generalization analysis for deep learning invalid. In an attempt to overcome this challenge and make up for the gap in the generalization theory of DKL, we develop an analysis method based on the composite relationship of function classes and derive capacity-based bounds with mild dependence on the depth, which generalizes learning theory bounds to deep kernels and serves as theoretical guarantees for the generalization of DKL. In this paper, we prove novel and nearly-tight generalization bounds based on the uniform covering number and the Rademacher chaos complexity for deep (multiple) kernel machines. In addition, for some common classes, we estimate their uniform covering numbers and Rademacher chaos complexities by bounding their pseudo-dimensions and kernel pseudo-dimensions, respectively. The mild bounds without strong assumptions partially explain the good generalization ability of deep learning combined with kernel methods.
https://proceedings.mlr.press/v202/zhang23ay.html
https://proceedings.mlr.press/v202/zhang23ay/zhang23ay.pdf
https://openreview.net/forum?id=1H1irbEaGV
OpenFE: Automated Feature Generation with Expert-level Performance
https://proceedings.mlr.press/v202/zhang23ay.html
Tianping Zhang, Zheyu Aqa Zhang, Zhiyuan Fan, Haoyan Luo, Fengyuan Liu, Qian Liu, Wei Cao, Li Jian
https://proceedings.mlr.press/v202/zhang23ay.html
ICML 2023
The goal of automated feature generation is to liberate machine learning experts from the laborious task of manual feature generation, which is crucial for improving the learning performance of tabular data. The major challenge in automated feature generation is to efficiently and accurately identify effective features from a vast pool of candidate features. In this paper, we present OpenFE, an automated feature generation tool that provides competitive results against machine learning experts. OpenFE achieves high efficiency and accuracy with two components: 1) a novel feature boosting method for accurately evaluating the incremental performance of candidate features and 2) a two-stage pruning algorithm that performs feature pruning in a coarse-to-fine manner. Extensive experiments on ten benchmark datasets show that OpenFE outperforms existing baseline methods by a large margin. We further evaluate OpenFE in two Kaggle competitions with thousands of data science teams participating. In the two competitions, features generated by OpenFE with a simple baseline model can beat 99.3% and 99.6% of data science teams, respectively. In addition to the empirical results, we provide a theoretical perspective to show that feature generation can be beneficial in a simple yet representative setting.
https://proceedings.mlr.press/v202/zhang23az.html
https://proceedings.mlr.press/v202/zhang23az/zhang23az.pdf
https://openreview.net/forum?id=VCe6WWB5Wg
Optimal Horizon-Free Reward-Free Exploration for Linear Mixture MDPs
https://proceedings.mlr.press/v202/zhang23az.html
Junkai Zhang, Weitong Zhang, Quanquan Gu
https://proceedings.mlr.press/v202/zhang23az.html
ICML 2023
We study reward-free reinforcement learning (RL) with linear function approximation, where the agent works in two phases: (1) in the exploration phase, the agent interacts with the environment but cannot access the reward; and (2) in the planning phase, the agent is given a reward function and is expected to find a near-optimal policy based on samples collected in the exploration phase. The sample complexities of existing reward-free algorithms have a polynomial dependence on the planning horizon, which makes them intractable for long planning horizon RL problems. In this paper, we propose a new reward-free algorithm for learning linear mixture Markov decision processes (MDPs), where the transition probability can be parameterized as a linear combination of known feature mappings. At the core of our algorithm is uncertainty-weighted value-targeted regression with exploration-driven pseudo-reward and a high-order moment estimator for the aleatoric and epistemic uncertainties. When the total reward is bounded by $1$, we show that our algorithm only needs to explore $\tilde O\left( d^2\varepsilon^{-2}\right)$ episodes to find an $\varepsilon$-optimal policy, where $d$ is the dimension of the feature mapping. The sample complexity of our algorithm only has a polylogarithmic dependence on the planning horizon and therefore is "horizon-free”. In addition, we provide an $\Omega\left(d^2\varepsilon^{-2}\right)$ sample complexity lower bound, which matches the sample complexity of our algorithm up to logarithmic factors, suggesting that our algorithm is optimal.
https://proceedings.mlr.press/v202/zhang23ba.html
https://proceedings.mlr.press/v202/zhang23ba/zhang23ba.pdf
https://openreview.net/forum?id=FMomWFNh5d
Unlocking Slot Attention by Changing Optimal Transport Costs
https://proceedings.mlr.press/v202/zhang23ba.html
Yan Zhang, David W. Zhang, Simon Lacoste-Julien, Gertjan J. Burghouts, Cees G. M. Snoek
https://proceedings.mlr.press/v202/zhang23ba.html
ICML 2023
Slot attention is a powerful method for object-centric modeling in images and videos. However, its set-equivariance limits its ability to handle videos with a dynamic number of objects because it cannot break ties. To overcome this limitation, we first establish a connection between slot attention and optimal transport. Based on this new perspective we propose MESH (Minimize Entropy of Sinkhorn): a cross-attention module that combines the tiebreaking properties of unregularized optimal transport with the speed of regularized optimal transport. We evaluate slot attention using MESH on multiple object-centric learning benchmarks and find significant improvements over slot attention in every setting.
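MESH's specific construction (recovering tie-breaking by minimizing the entropy of the Sinkhorn plan) is not reproduced here; the sketch below shows only the standard entropy-regularized Sinkhorn iteration, the building block that optimal-transport-based attention modules use in place of the usual row-wise softmax. Sizes and marginals are illustrative.

```python
# Standard entropy-regularized Sinkhorn iterations on a cost matrix, producing
# a transport plan whose row/column sums match prescribed marginals. OT-based
# attention uses such a plan in place of softmax-normalized attention weights.
import numpy as np

def sinkhorn(cost, a, b, epsilon=0.05, n_iters=200):
    K = np.exp(-cost / epsilon)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan with marginals ~ (a, b)

rng = np.random.default_rng(0)
slots, inputs = 4, 6
cost = rng.random((slots, inputs))      # e.g. negated slot-input attention logits
a = np.full(slots, 1.0 / slots)         # slot marginal
b = np.full(inputs, 1.0 / inputs)       # input marginal
plan = sinkhorn(cost, a, b)
print(plan.sum(axis=1), plan.sum(axis=0))  # close to a and b
```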
https://proceedings.mlr.press/v202/zhang23bb.html
https://proceedings.mlr.press/v202/zhang23bb/zhang23bb.pdf
https://openreview.net/forum?id=r2aulTsJ8V
Towards a Persistence Diagram that is Robust to Noise and Varied Densities
https://proceedings.mlr.press/v202/zhang23bb.html
Hang Zhang, Kaifeng Zhang, Kai Ming Ting, Ye Zhu
https://proceedings.mlr.press/v202/zhang23bb.html
ICML 2023
Recent works have identified that existing methods, which construct persistence diagrams in Topological Data Analysis (TDA), are not robust to noise and varied densities in a point cloud. We analyze the necessary properties of an approach that can address these two issues, and propose a new filter function for TDA based on a new data-dependent kernel which possesses these properties. Our empirical evaluation reveals that the proposed filter function provides a better means for t-SNE visualization and SVM classification than three existing methods of TDA.
https://proceedings.mlr.press/v202/zhang23bc.html
https://proceedings.mlr.press/v202/zhang23bc/zhang23bc.pdf
https://openreview.net/forum?id=hGJLN2Ys4c
Robust Situational Reinforcement Learning in Face of Context Disturbances
https://proceedings.mlr.press/v202/zhang23bc.html
Jinpeng Zhang, Yufeng Zheng, Chuheng Zhang, Li Zhao, Lei Song, Yuan Zhou, Jiang Bian
https://proceedings.mlr.press/v202/zhang23bc.html
ICML 2023
In many real-world tasks, some parts of the state features, called contexts, are independent of action signals, e.g., customer demand in inventory control, the speed of the lead car in autonomous driving, etc. One of the challenges of reinforcement learning in these applications is that the true context transitions can easily be exposed to some unknown source of contamination, leading to a shift of context transitions between source domains and target domains, which could cause performance degradation for RL algorithms. However, existing methods on robust RL aim at learning robust policies against deviations of the entire system dynamics. To tackle this problem, this paper proposes the framework of the robust situational Markov decision process (RS-MDP), which captures the possible deviations of context transitions explicitly. To scale to large context spaces, we introduce the softmin smoothed robust Bellman operator to learn the robust Q-value approximately, and apply our RS-MDP framework to the existing RL algorithm SAC to learn the desired robust policies. We conduct experiments on several robot control tasks with dynamic contexts and on inventory control tasks to demonstrate that our algorithm generalizes better, is more robust to deviations of context transitions, and outperforms existing robust RL algorithms.
https://proceedings.mlr.press/v202/zhang23bd.html
https://proceedings.mlr.press/v202/zhang23bd/zhang23bd.pdf
https://openreview.net/forum?id=Si9pBgOGeD
Patch-level Contrastive Learning via Positional Query for Visual Pre-training
https://proceedings.mlr.press/v202/zhang23bd.html
Shaofeng Zhang, Qiang Zhou, Zhibin Wang, Fan Wang, Junchi Yan
https://proceedings.mlr.press/v202/zhang23bd.html
ICML 2023
Dense contrastive learning (DCL) has recently been explored for learning localized information for dense prediction tasks (e.g., detection and segmentation). It still suffers from the difficulty of mining pixel/patch correspondences between two views. A simple way is to input the same view twice and align the pixel/patch representations. However, this reduces the variance of the inputs and hurts performance. We propose a plug-in method, PQCL (Positional Query for patch-level Contrastive Learning), which allows performing patch-level contrasts between two views with exact patch correspondence. Besides, by using positional queries, PQCL increases the variance of the inputs to enhance training. We apply PQCL to popular transformer-based CL frameworks (DINO and iBOT), and evaluate them on classification, detection and segmentation tasks, where our method obtains stable improvements, especially for dense tasks. It achieves a new state-of-the-art in most settings. Code is available at https://github.com/Sherrylone/Query_Contrastive.
https://proceedings.mlr.press/v202/zhao23a.html
https://proceedings.mlr.press/v202/zhao23a/zhao23a.pdf
https://openreview.net/forum?id=N6MQv7U9vD
Men Also Do Laundry: Multi-Attribute Bias Amplification
https://proceedings.mlr.press/v202/zhao23a.html
Dora Zhao, Jerone Andrews, Alice Xiang
https://proceedings.mlr.press/v202/zhao23a.html
ICML 2023
The phenomenon of $\textit{bias amplification}$ occurs when models amplify training set biases at test time. Existing metrics measure bias amplification with respect to single annotated attributes (e.g., $\texttt{computer}$). However, large-scale datasets typically consist of instances with multiple attribute annotations (e.g., $\{\texttt{computer}, \texttt{keyboard}\}$). We demonstrate models can learn to exploit correlations with respect to multiple attributes, which are not accounted for by current metrics. Moreover, we show that current metrics can give the erroneous impression that little to no bias amplification has occurred as they aggregate positive and negative bias scores. Further, these metrics lack an ideal value, making them difficult to interpret. To address these shortcomings, we propose a new metric: $\textit{Multi-Attribute Bias Amplification}$. We validate our metric’s utility through a bias amplification analysis on the COCO, imSitu, and CelebA datasets. Finally, we benchmark bias mitigation methods using our proposed metric, suggesting possible avenues for future bias mitigation efforts.
https://proceedings.mlr.press/v202/zhao23b.html
https://proceedings.mlr.press/v202/zhao23b/zhao23b.pdf
https://openreview.net/forum?id=wLAMOoL0KD
Rockmate: an Efficient, Fast, Automatic and Generic Tool for Re-materialization in PyTorch
https://proceedings.mlr.press/v202/zhao23b.html
Xunyi Zhao, Théotime Le Hellard, Lionel Eyraud-Dubois, Julia Gusak, Olivier Beaumont
https://proceedings.mlr.press/v202/zhao23b.html
ICML 2023
We propose Rockmate to control the memory requirements when training PyTorch DNN models. Rockmate is an automatic tool that starts from the model code and generates an equivalent model, using a predefined amount of memory for activations, at the cost of a few re-computations. Rockmate automatically detects the structure of computational and data dependencies and rewrites the initial model as a sequence of complex blocks. We show that such a structure is widespread and can be found in many models in the literature (Transformer based models, ResNet, RegNets,...). This structure allows us to solve the problem in a fast and efficient way, using an adaptation of Checkmate (too slow on the whole model but general) at the level of individual blocks and an adaptation of Rotor (fast but limited to sequential models) at the level of the sequence itself. We show through experiments on many models that Rockmate is as fast as Rotor and as efficient as Checkmate, and that it allows in many cases to obtain a significantly lower memory consumption for activations (by a factor of 2 to 5) for a rather negligible overhead (of the order of 10% to 20%). Rockmate is open source and available at https://github.com/topal-team/rockmate.
https://proceedings.mlr.press/v202/zhao23c.html
https://proceedings.mlr.press/v202/zhao23c/zhao23c.pdf
https://openreview.net/forum?id=0kRayuGKOP
Revisiting Structured Variational Autoencoders
https://proceedings.mlr.press/v202/zhao23c.html
Yixiu Zhao, Scott Linderman
https://proceedings.mlr.press/v202/zhao23c.html
ICML 2023
Structured variational autoencoders (SVAEs) combine probabilistic graphical model priors on latent variables, deep neural networks to link latent variables to observed data, and structure-exploiting algorithms for approximate posterior inference. These models are particularly appealing for sequential data, where the prior can capture temporal dependencies. However, despite their conceptual elegance, SVAEs have proven difficult to implement, and more general approaches have been favored in practice. Here, we revisit SVAEs using modern machine learning tools and demonstrate their advantages over more general alternatives in terms of both accuracy and efficiency. First, we develop a modern implementation for hardware acceleration, parallelization, and automatic differentiation of the message passing algorithms at the core of the SVAE. Second, we show that by exploiting structure in the prior, the SVAE learns more accurate models and posterior distributions, which translate into improved performance on prediction tasks. Third, we show how the SVAE can naturally handle missing data, and we leverage this ability to develop a novel, self-supervised training approach. Altogether, these results show that the time is ripe to revisit structured variational autoencoders.
https://proceedings.mlr.press/v202/zhao23d.html
https://proceedings.mlr.press/v202/zhao23d/zhao23d.pdf
https://openreview.net/forum?id=q0K36FPtOd
On Pitfalls of Test-Time Adaptation
https://proceedings.mlr.press/v202/zhao23d.html
Hao Zhao, Yuejiang Liu, Alexandre Alahi, Tao Lin
https://proceedings.mlr.press/v202/zhao23d.html
ICML 2023
Test-Time Adaptation (TTA) has recently gained significant attention as a new paradigm for tackling distribution shifts. Despite the sheer number of existing methods, the inconsistent experimental conditions and lack of standardization in prior literature make it difficult to measure their actual efficacy and progress. To address this issue, we present a large-scale open-sourced Test-Time Adaptation Benchmark, dubbed TTAB, which includes nine state-of-the-art algorithms, a diverse array of distribution shifts, and two comprehensive evaluation protocols. Through extensive experiments, we identify three common pitfalls in prior efforts: (i) choosing appropriate hyper-parameters, especially for model selection, is exceedingly difficult due to online batch dependency; (ii) the effectiveness of TTA varies greatly depending on the quality of the model being adapted; (iii) even under optimal algorithmic conditions, existing methods still systematically struggle with certain types of distribution shifts. Our findings suggest that future research in the field should be more transparent about its experimental conditions, ensure rigorous evaluations on a broader set of models and shifts, and re-examine the assumptions underlying the potential success of TTA for practical applications.
https://proceedings.mlr.press/v202/zhao23e.html
https://proceedings.mlr.press/v202/zhao23e/zhao23e.pdf
https://openreview.net/forum?id=iAgQfF3atY
Addressing Budget Allocation and Revenue Allocation in Data Market Environments Using an Adaptive Sampling Algorithm
https://proceedings.mlr.press/v202/zhao23e.html
Boxin Zhao, Boxiang Lyu, Raul Castro Fernandez, Mladen Kolar
https://proceedings.mlr.press/v202/zhao23e.html
ICML 2023
High-quality machine learning models are dependent on access to high-quality training data. When the data are not already available, it is tedious and costly to obtain them. Data markets help with identifying valuable training data: model consumers pay to train a model, the market uses that budget to identify data and train the model (the budget allocation problem), and finally the market compensates data providers according to their data contribution (the revenue allocation problem). For example, a bank could pay the data market to access data from other financial institutions to train a fraud detection model. Compensating data contributors requires understanding the data’s contribution to the model; recent efforts to solve this revenue allocation problem based on the Shapley value are too inefficient to lead to practical data markets. In this paper, we introduce a new algorithm to solve the budget allocation and revenue allocation problems simultaneously in linear time. The new algorithm employs an adaptive sampling process that selects data from those providers who are contributing the most to the model. Better data means that the algorithm accesses those providers more often, and more frequent accesses correspond to higher compensation. Furthermore, the algorithm can be deployed in both centralized and federated scenarios, boosting its applicability. We provide theoretical guarantees for the algorithm showing that the budget is used efficiently and that the properties of the revenue allocation are similar to Shapley’s. Finally, we conduct an empirical evaluation to show the performance of the algorithm in practical scenarios and when compared to other baselines. Overall, we believe that the new algorithm paves the way for the implementation of practical data markets.
https://proceedings.mlr.press/v202/zhao23f.html
https://proceedings.mlr.press/v202/zhao23f/zhao23f.pdf
https://openreview.net/forum?id=Xzc4CKcmnj
X-Paste: Revisiting Scalable Copy-Paste for Instance Segmentation using CLIP and StableDiffusion
https://proceedings.mlr.press/v202/zhao23f.html
Hanqing Zhao, Dianmo Sheng, Jianmin Bao, Dongdong Chen, Dong Chen, Fang Wen, Lu Yuan, Ce Liu, Wenbo Zhou, Qi Chu, Weiming Zhang, Nenghai Yu
https://proceedings.mlr.press/v202/zhao23f.html
ICML 2023
Copy-Paste is a simple and effective data augmentation strategy for instance segmentation. By randomly pasting object instances onto new background images, it creates new training data for free and significantly boosts segmentation performance, especially for rare object categories. Although using more diverse, high-quality object instances in Copy-Paste yields larger performance gains, previous works obtain object instances either from human-annotated instance segmentation datasets or by rendering from 3D object models, and both approaches are too expensive to scale up to good diversity. In this paper, we revisit Copy-Paste at scale with the power of newly emerged zero-shot recognition models (e.g., CLIP) and text2image models (e.g., StableDiffusion). We demonstrate for the first time that using a text2image model to generate images, or a zero-shot recognition model to filter noisily crawled images, for different object categories is a feasible way to make Copy-Paste truly scalable. To make this possible, we design a data acquisition and processing framework, dubbed "X-Paste", upon which a systematic study is conducted. On the LVIS dataset, X-Paste provides impressive improvements over the strong baseline CenterNet2 with a Swin-L backbone. Specifically, it achieves +2.6 box AP and +2.1 mask AP gains on all classes, and even larger gains of +6.8 box AP and +6.5 mask AP on long-tail classes.
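As a concrete illustration of the basic Copy-Paste operation the abstract builds on (not of the X-Paste pipeline itself), the sketch below pastes a masked object crop at a random location of a background image; the generated or filtered instances that the text2image and CLIP models would supply are assumed to already exist as `instance` and `mask`.

```python
# Minimal sketch of the Copy-Paste operation itself, under the assumption that
# an object crop and its binary mask have already been obtained upstream.
import numpy as np

rng = np.random.default_rng(0)

def copy_paste(background, instance, mask):
    """Paste `instance` (H,W,3) where `mask` (H,W) is True, at a random offset."""
    out = background.copy()
    bh, bw, _ = background.shape
    ih, iw, _ = instance.shape
    y = rng.integers(0, bh - ih + 1)
    x = rng.integers(0, bw - iw + 1)
    region = out[y:y + ih, x:x + iw]
    region[mask] = instance[mask]   # in-place write through the view pastes the object
    return out

background = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
instance = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True          # toy square "object"

augmented = copy_paste(background, instance, mask)
print(augmented.shape)
```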
https://proceedings.mlr.press/v202/zhao23g.html
https://proceedings.mlr.press/v202/zhao23g/zhao23g.pdf
https://openreview.net/forum?id=qFk9EUzycd
Revisiting Simple Regret: Fast Rates for Returning a Good Arm
https://proceedings.mlr.press/v202/zhao23g.html
Yao Zhao, Connor Stephens, Csaba Szepesvari, Kwang-Sung Jun
https://proceedings.mlr.press/v202/zhao23g.html
ICML 2023
Simple regret is a natural and parameter-free performance criterion for pure exploration in multi-armed bandits, yet it is less popular than the probability of missing the best arm or an $\epsilon$-good arm, perhaps due to the lack of easy ways to characterize it. In this paper, we make significant progress on minimizing simple regret in both the data-rich ($T\ge n$) and data-poor ($T \le n$) regimes, where $n$ is the number of arms and $T$ is the number of samples. At its heart is our improved instance-dependent analysis of the well-known Sequential Halving (SH) algorithm, in which we bound the probability of returning an arm whose mean reward is not within $\epsilon$ of the best (i.e., not $\epsilon$-good) for any choice of $\epsilon>0$, even though $\epsilon$ is not an input to SH. Our bound not only leads to an optimal worst-case simple regret bound of $\sqrt{n/T}$ up to logarithmic factors but also essentially matches the instance-dependent lower bound for returning an $\epsilon$-good arm reported by Katz-Samuels and Jamieson (2020). For the more challenging data-poor regime, we propose Bracketing SH (BSH) that enjoys the same improvement even without sampling each arm at least once. Our empirical study shows that BSH outperforms existing methods on real-world tasks.
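For readers unfamiliar with the analyzed algorithm, here is a minimal sketch of the textbook Sequential Halving routine: the budget is split across roughly $\log_2 n$ rounds, every surviving arm is pulled equally within a round, and the better half survives. The paper's contribution is the sharper analysis and the BSH variant, neither of which is reproduced here.

```python
# Sketch of standard Sequential Halving, assuming a stochastic pull() oracle.
import numpy as np

def sequential_halving(pull, n, T, rng):
    arms = np.arange(n)
    rounds = int(np.ceil(np.log2(n)))
    means = np.zeros(n)
    for _ in range(rounds):
        pulls_per_arm = max(1, T // (len(arms) * rounds))
        for a in arms:
            means[a] = np.mean([pull(a, rng) for _ in range(pulls_per_arm)])
        # Keep the better half of the surviving arms (at least one arm).
        arms = arms[np.argsort(means[arms])[::-1][: max(1, len(arms) // 2)]]
    return arms[0]

rng = np.random.default_rng(0)
true_means = np.linspace(0.1, 0.9, 16)
pull = lambda a, rng: rng.normal(true_means[a], 1.0)
print("returned arm:", sequential_halving(pull, n=16, T=4096, rng=rng))
```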
https://proceedings.mlr.press/v202/zhao23h.html
https://proceedings.mlr.press/v202/zhao23h/zhao23h.pdf
https://openreview.net/forum?id=WBWb1FU8iz
Transformed Distribution Matching for Missing Value Imputation
https://proceedings.mlr.press/v202/zhao23h.html
He Zhao, Ke Sun, Amir Dezfouli, Edwin V. Bonilla
https://proceedings.mlr.press/v202/zhao23h.html
ICML 2023
We study the problem of imputing missing values in a dataset, which has important applications in many domains. The key to missing value imputation is to capture the data distribution with incomplete samples and impute the missing values accordingly. In this paper, by leveraging the fact that any two batches of data with missing values come from the same data distribution, we propose to impute the missing values of two batches of samples by transforming them into a latent space through deep invertible functions and matching them distributionally. To learn the transformations and impute the missing values simultaneously, a simple and well-motivated algorithm is proposed. Our algorithm has fewer hyperparameters to fine-tune and generates high-quality imputations regardless of how missing values are generated. Extensive experiments over a large number of datasets and competing benchmark algorithms show that our method achieves state-of-the-art performance.
https://proceedings.mlr.press/v202/zhao23i.html
https://proceedings.mlr.press/v202/zhao23i/zhao23i.pdf
https://openreview.net/forum?id=dfLRMF5Hss
Protecting Language Generation Models via Invisible Watermarking
https://proceedings.mlr.press/v202/zhao23i.html
Xuandong Zhao, Yu-Xiang Wang, Lei Li
https://proceedings.mlr.press/v202/zhao23i.html
ICML 2023
Language generation models have been an increasingly powerful enabler to many applications. Many such models offer free or affordable API access which makes them potentially vulnerable to model extraction attacks through distillation. To protect intellectual property (IP) and make fair use of these models, various techniques such as lexical watermarking and synonym replacement have been proposed. However, these methods can be nullified by obvious countermeasures such as “synonym randomization”. To address this issue, we propose GINSW, a novel method to protect text generation models from being stolen through distillation. The key idea of our method is to inject secret signals into the probability vector of the decoding steps for each target token. We can then detect the secret message by probing a suspect model to tell if it is distilled from the protected one. Experimental results show that GINSW can effectively identify instances of IP infringement with minimal impact on the generation quality of protected APIs. Our method demonstrates an absolute improvement of 19 to 29 points on mean average precision (mAP) in detecting suspects compared to previous methods against watermark removal attacks.
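The exact GINSW construction is not given in the abstract; the sketch below only illustrates the general idea it describes: a secret, key-dependent signal perturbs the decoding logits, and detection checks whether a suspect model's output statistics correlate with that signal. The vocabulary size, the signal form, and the test statistic are all assumptions made for illustration.

```python
# Illustrative watermark sketch (not the paper's exact GINSW construction).
import numpy as np

vocab, key = 1000, 42
secret = np.random.default_rng(key).choice([-1.0, 1.0], size=vocab)  # secret signal

def watermarked_logits(logits, strength=0.5):
    """Perturb the decoding logits with the key-dependent secret signal."""
    return logits + strength * secret

def detect(token_counts):
    """Correlate a suspect model's empirical token frequencies with the signal."""
    freq = token_counts / token_counts.sum()
    return float(np.dot(freq - freq.mean(), secret))  # large positive value is suspicious

# Toy check: sampling from watermarked logits shifts mass toward the +1 tokens.
rng = np.random.default_rng(0)
logits = rng.standard_normal(vocab)
probs = np.exp(watermarked_logits(logits))
probs /= probs.sum()
counts = rng.multinomial(50_000, probs)
print("detection score:", detect(counts))
```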
https://proceedings.mlr.press/v202/zhao23j.html
https://proceedings.mlr.press/v202/zhao23j/zhao23j.pdf
https://openreview.net/forum?id=V4jD1KmnQz
Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning
https://proceedings.mlr.press/v202/zhao23j.html
Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee
https://proceedings.mlr.press/v202/zhao23j.html
ICML 2023
Policy optimization methods with function approximation are widely used in multi-agent reinforcement learning. However, it remains elusive how to design such algorithms with statistical guarantees. Leveraging a multi-agent performance difference lemma that characterizes the landscape of multi-agent policy optimization, we find that the localized action value function serves as an ideal descent direction for each local policy. Motivated by the observation, we present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO. We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate. We extend our algorithm to the off-policy setting and introduce pessimism to policy evaluation, which aligns with experiments. To our knowledge, this is the first provably convergent multi-agent PPO algorithm in cooperative Markov games.
https://proceedings.mlr.press/v202/zhao23k.html
https://proceedings.mlr.press/v202/zhao23k/zhao23k.pdf
https://openreview.net/forum?id=IkhTCX9x5i
Simplified Temporal Consistency Reinforcement Learning
https://proceedings.mlr.press/v202/zhao23k.html
Yi Zhao, Wenshuai Zhao, Rinu Boney, Juho Kannala, Joni Pajarinen
https://proceedings.mlr.press/v202/zhao23k.html
ICML 2023
Reinforcement learning (RL) is able to solve complex sequential decision-making tasks but is currently limited by sample efficiency and required computation. To improve sample efficiency, recent work focuses on model-based RL, which interleaves model learning with planning. Recent methods further utilize policy learning, value estimation, and self-supervised learning as auxiliary objectives. In this paper we show that, surprisingly, a simple representation learning approach relying only on a latent dynamics model trained by latent temporal consistency is sufficient for high-performance RL. This applies when using pure planning with a dynamics model conditioned on the representation, but also when utilizing the representation as policy and value function features in model-free RL. In experiments, our approach learns an accurate dynamics model to solve challenging high-dimensional locomotion tasks with online planners while being 4.1$\times$ faster to train compared to ensemble-based methods. With model-free RL without planning, especially on high-dimensional tasks such as the DeepMind Control Suite Humanoid and Dog tasks, our approach outperforms model-free methods by a large margin and matches model-based methods’ sample efficiency while training 2.4$\times$ faster.
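A minimal sketch of a latent temporal-consistency objective of the kind described: an encoder maps observations to latents, a latent dynamics model predicts the next latent from the current latent and action, and the loss matches the prediction to the encoding of the next observation. The network sizes and the stop-gradient target are assumptions, not the paper's exact design.

```python
# Minimal latent temporal-consistency sketch; sizes and the stop-gradient
# target are illustrative assumptions.
import torch
import torch.nn as nn

obs_dim, act_dim, latent_dim = 24, 6, 32
encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(),
                         nn.Linear(64, latent_dim))

def consistency_loss(obs, act, next_obs):
    z = encoder(obs)
    z_pred = dynamics(torch.cat([z, act], dim=-1))
    with torch.no_grad():                 # target latent, no gradient through it
        z_target = encoder(next_obs)
    return ((z_pred - z_target) ** 2).mean()

obs = torch.randn(128, obs_dim)
act = torch.randn(128, act_dim)
next_obs = torch.randn(128, obs_dim)
loss = consistency_loss(obs, act, next_obs)
loss.backward()
print(float(loss))
```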
https://proceedings.mlr.press/v202/zhao23l.html
https://proceedings.mlr.press/v202/zhao23l/zhao23l.pdf
https://openreview.net/forum?id=zBShO1Vmf0
RLEG: Vision-Language Representation Learning with Diffusion-based Embedding Generation
https://proceedings.mlr.press/v202/zhao23l.html
Liming Zhao, Kecheng Zheng, Yun Zheng, Deli Zhao, Jingren Zhou
https://proceedings.mlr.press/v202/zhao23l.html
ICML 2023
Vision-language representation learning models (e.g., CLIP) have achieved state-of-the-art performance on various downstream tasks, which usually need large-scale training data to learn discriminative representation. Recent progress on generative diffusion models (e.g., DALL-E 2) has demonstrated that diverse high-quality samples can be synthesized by randomly sampling from generative distribution. By virtue of generative capability in this paper, we propose a novel vision-language Representation Learning method with diffusion-based Embedding Generation (RLEG), which exploits diffusion models to generate feature embedding online for learning effective vision-language representation. Specifically, we first adopt image and text encoders to extract the corresponding embeddings. Secondly, pretrained diffusion-based embedding generators are harnessed to transfer the embedding modality online between vision and language domains. The embeddings generated from the generators are then served as augmented embedding-level samples, which are applied to contrastive learning with the variant of the CLIP framework. Experimental results show that the proposed method could learn effective representation and achieve state-of-the-art performance on various tasks including image classification, image-text retrieval, object detection, semantic segmentation, and text-conditional image generation.
https://proceedings.mlr.press/v202/zhao23m.html
https://proceedings.mlr.press/v202/zhao23m/zhao23m.pdf
https://openreview.net/forum?id=p6h6jZb5jr
Optimal Online Generalized Linear Regression with Stochastic Noise and Its Application to Heteroscedastic Bandits
https://proceedings.mlr.press/v202/zhao23m.html
Heyang Zhao, Dongruo Zhou, Jiafan He, Quanquan Gu
https://proceedings.mlr.press/v202/zhao23m.html
ICML 2023
We study the problem of online generalized linear regression in the stochastic setting, where the label is generated from a generalized linear model with possibly unbounded additive noise. We provide a sharp analysis of the classical follow-the-regularized-leader (FTRL) algorithm to cope with the label noise. More specifically, for $\sigma$-sub-Gaussian label noise, our analysis provides a regret upper bound of $O(\sigma^2 d \log T) + o(\log T)$, where $d$ is the dimension of the input vector, $T$ is the total number of rounds. We also prove an $\Omega(\sigma^2d\log(T/d))$ lower bound for stochastic online linear regression, which indicates that our upper bound is nearly optimal. In addition, we extend our analysis to a more refined Bernstein noise condition. As an application, we study generalized linear bandits with heterogeneous noise and propose an algorithm based on FTRL to achieve the first variance-aware regret bound.
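To make the FTRL predictor concrete in the simplest (squared-loss) special case, the sketch below plays the ridge-regularized least-squares solution on all data seen so far at each round, which is exactly the follow-the-regularized-leader iterate; the general GLM link function and the refined noise conditions from the paper are omitted.

```python
# FTRL for online linear regression with squared loss (a special case of the
# setting in the abstract); the GLM link and Bernstein noise refinements are omitted.
import numpy as np

rng = np.random.default_rng(0)
d, T, lam, sigma = 5, 2000, 1.0, 0.5
theta_star = rng.standard_normal(d)

A = lam * np.eye(d)      # running regularized Gram matrix
b = np.zeros(d)
regret = 0.0

for t in range(T):
    x = rng.standard_normal(d) / np.sqrt(d)
    theta = np.linalg.solve(A, b)            # FTRL iterate on the history so far
    y = x @ theta_star + sigma * rng.standard_normal()
    regret += (x @ theta - y) ** 2 - (x @ theta_star - y) ** 2
    A += np.outer(x, x)
    b += y * x

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```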
https://proceedings.mlr.press/v202/zhao23n.html
https://proceedings.mlr.press/v202/zhao23n/zhao23n.pdf
https://openreview.net/forum?id=73a5boeEYT
Does Continual Learning Equally Forget All Parameters?
https://proceedings.mlr.press/v202/zhao23n.html
Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
https://proceedings.mlr.press/v202/zhao23n.html
ICML 2023
Distribution shift (e.g., task or domain shift) in continual learning (CL) usually results in catastrophic forgetting of previously learned knowledge. Although it can be alleviated by repeatedly replaying buffered data, the every-step replay is time-consuming. In this paper, we study which modules in neural networks are more prone to forgetting by investigating their training dynamics during CL. Our proposed metrics show that only a few modules are more task-specific and sensitive to task change, while others can be shared across tasks as common knowledge. Hence, we attribute forgetting mainly to the former and find that finetuning them only on a small buffer at the end of any CL method can bring non-trivial improvement. Due to the small number of finetuned parameters, such "Forgetting Prioritized Finetuning" (FPF) is computationally efficient. We further propose a more efficient and simpler method that entirely removes the every-step replay and replaces it with only $k$ runs of FPF periodically triggered during CL. Surprisingly, this "$k$-FPF" performs comparably to FPF and outperforms SOTA CL methods while significantly reducing their computational overhead and cost. In experiments on several benchmarks of class- and domain-incremental CL, FPF consistently improves existing CL methods by a large margin, and $k$-FPF further excels in efficiency without degrading accuracy. We also empirically study the impact of buffer size, epochs per task, and finetuning modules on the cost and accuracy of our methods.
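A rough sketch of what "Forgetting Prioritized Finetuning" amounts to operationally: freeze the network and finetune only the modules identified as task-sensitive on a small replay buffer. Which modules count as sensitive is taken as given here (the classifier head in this toy example), which is an assumption for illustration rather than the paper's selection metric.

```python
# FPF-style finetuning sketch: only the assumed task-sensitive module is
# trained on a small buffer; everything else stays frozen.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),               # assume this head is the "sensitive" module
)
sensitive = [model[4]]               # modules selected for finetuning (assumption)

for p in model.parameters():
    p.requires_grad_(False)
params = []
for m in sensitive:
    for p in m.parameters():
        p.requires_grad_(True)
        params.append(p)

buffer_x, buffer_y = torch.randn(256, 32), torch.randint(0, 10, (256,))
opt = torch.optim.SGD(params, lr=1e-2)
for _ in range(20):                  # a few cheap finetuning steps on the buffer
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(buffer_x), buffer_y)
    loss.backward()
    opt.step()
print("buffer loss after FPF-style finetuning:", float(loss))
```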
https://proceedings.mlr.press/v202/zhao23o.html
https://proceedings.mlr.press/v202/zhao23o/zhao23o.pdf
https://openreview.net/forum?id=QNz7DCibUS
Online Learning in Stackelberg Games with an Omniscient Follower
https://proceedings.mlr.press/v202/zhao23o.html
Geng Zhao, Banghua Zhu, Jiantao Jiao, Michael Jordan
https://proceedings.mlr.press/v202/zhao23o.html
ICML 2023
We study the problem of online learning in a two-player decentralized cooperative Stackelberg game. In each round, the leader first takes an action, followed by the follower who takes their action after observing the leader’s move. The goal of the leader is to learn to minimize the cumulative regret based on the history of interactions. Differing from the traditional formulation of repeated Stackelberg games, we assume the follower is omniscient, with full knowledge of the true reward, and that they always best-respond to the leader’s actions. We analyze the sample complexity of regret minimization in this repeated Stackelberg game. We show that depending on the reward structure, the existence of the omniscient follower may change the sample complexity drastically, from constant to exponential, even for linear cooperative Stackelberg games. This poses unique challenges for the learning process of the leader and the subsequent regret analysis.
https://proceedings.mlr.press/v202/zheng23a.html
https://proceedings.mlr.press/v202/zheng23a/zheng23a.pdf
https://openreview.net/forum?id=1F2Opw8CGA
Structure-informed Language Models Are Protein Designers
https://proceedings.mlr.press/v202/zheng23a.html
Zaixiang Zheng, Yifan Deng, Dongyu Xue, Yi Zhou, Fei Ye, Quanquan Gu
https://proceedings.mlr.press/v202/zheng23a.html
ICML 2023
This paper demonstrates that language models are strong structure-based protein designers. We present LM-Design, a generic approach to reprogramming sequence-based protein language models (pLMs), which have learned massive sequential evolutionary knowledge from the universe of natural protein sequences, to acquire an immediate capability to design preferable protein sequences for given folds. We conduct a structural surgery on pLMs, where a lightweight structural adapter is implanted into pLMs and endows them with structural awareness. During inference, iterative refinement is performed to effectively optimize the generated protein sequences. Experiments show that LM-Design improves the state-of-the-art results by a large margin, leading to 4% to 12% accuracy gains in sequence recovery (e.g., 55.65%/56.63% on CATH 4.2/4.3 single-chain benchmarks, and $>$60% when designing protein complexes). We provide extensive and in-depth analyses, which verify that LM-Design can (1) indeed leverage both structural and sequential knowledge to accurately handle structurally non-deterministic regions, (2) benefit from scaling data and model size, and (3) generalize to other proteins (e.g., antibodies and de novo proteins).
https://proceedings.mlr.press/v202/zheng23b.html
https://proceedings.mlr.press/v202/zheng23b/zheng23b.pdf
https://openreview.net/forum?id=8Ln8Ai9kq1
Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories
https://proceedings.mlr.press/v202/zheng23b.html
Qinqing Zheng, Mikael Henaff, Brandon Amos, Aditya Grover
https://proceedings.mlr.press/v202/zheng23b.html
ICML 2023
Natural agents can effectively learn from multiple data sources that differ in size, quality, and types of measurements. We study this heterogeneity in the context of offline reinforcement learning (RL) by introducing a new, practically motivated semi-supervised setting. Here, an agent has access to two sets of trajectories: labelled trajectories containing state, action and reward triplets at every timestep, along with unlabelled trajectories that contain only state and reward information. For this setting, we develop and study a simple meta-algorithmic pipeline that learns an inverse dynamics model on the labelled data to obtain proxy-labels for the unlabelled data, followed by the use of any offline RL algorithm on the true and proxy-labelled trajectories. Empirically, we find this simple pipeline to be highly successful — on several D4RL benchmarks (Fu et al., 2020), certain offline RL algorithms can match the performance of variants trained on a fully labelled dataset even when we label only 10% of trajectories which are highly suboptimal. To strengthen our understanding, we perform a large-scale controlled empirical study investigating the interplay of data-centric properties of the labelled and unlabelled datasets, with algorithmic design choices (e.g., choice of inverse dynamics, offline RL algorithm) to identify general trends and best practices for training RL agents on semi-supervised offline datasets.
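A sketch of the meta-algorithmic pipeline under stated assumptions: an inverse dynamics model is fit on the action-labelled trajectories and then used to proxy-label the action-free ones, after which any offline RL algorithm (not shown) trains on the union. Continuous actions and the network sizes are illustrative choices.

```python
# Proxy-labelling sketch: fit a ≈ f(s, s') on labelled data, then label the
# action-free trajectories before running a standard offline RL algorithm.
import torch
import torch.nn as nn

state_dim, act_dim = 17, 6
inv_dyn = nn.Sequential(nn.Linear(2 * state_dim, 128), nn.ReLU(),
                        nn.Linear(128, act_dim))

# Labelled data: (s, a, s'); unlabelled data: (s, s') only.
s_l, a_l, sp_l = torch.randn(512, state_dim), torch.randn(512, act_dim), torch.randn(512, state_dim)
s_u, sp_u = torch.randn(4096, state_dim), torch.randn(4096, state_dim)

opt = torch.optim.Adam(inv_dyn.parameters(), lr=1e-3)
for _ in range(200):                       # supervised regression on labelled pairs
    opt.zero_grad()
    pred = inv_dyn(torch.cat([s_l, sp_l], dim=-1))
    loss = ((pred - a_l) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():                      # proxy actions for the action-free set
    a_proxy = inv_dyn(torch.cat([s_u, sp_u], dim=-1))
# (s_l, a_l) and (s_u, a_proxy) would now both feed the chosen offline RL algorithm.
print(a_proxy.shape)
```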
https://proceedings.mlr.press/v202/zheng23c.html
https://proceedings.mlr.press/v202/zheng23c/zheng23c.pdf
https://openreview.net/forum?id=jVR2fF8x8x
Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs
https://proceedings.mlr.press/v202/zheng23c.html
Kaiwen Zheng, Cheng Lu, Jianfei Chen, Jun Zhu
https://proceedings.mlr.press/v202/zheng23c.html
ICML 2023
Diffusion models have exhibited excellent performance in various domains. The probability flow ordinary differential equation (ODE) of diffusion models (i.e., diffusion ODEs) is a particular case of continuous normalizing flows (CNFs), which enables deterministic inference and exact likelihood evaluation. However, the likelihood estimation results by diffusion ODEs are still far from those of the state-of-the-art likelihood-based generative models. In this work, we propose several improved techniques for maximum likelihood estimation for diffusion ODEs, including both training and evaluation perspectives. For training, we propose velocity parameterization and explore variance reduction techniques for faster convergence. We also derive an error-bounded high-order flow matching objective for finetuning, which improves the ODE likelihood and smooths its trajectory. For evaluation, we propose a novel training-free truncated-normal dequantization to fill the training-evaluation gap commonly existing in diffusion ODEs. Building upon these techniques, we achieve state-of-the-art likelihood estimation results on image datasets (2.56 on CIFAR-10, 3.43/3.69 on ImageNet-32) without variational dequantization or data augmentation.
https://proceedings.mlr.press/v202/zheng23d.html
https://proceedings.mlr.press/v202/zheng23d/zheng23d.pdf
https://openreview.net/forum?id=gWC3Q3pyHe
Fast Sampling of Diffusion Models via Operator Learning
https://proceedings.mlr.press/v202/zheng23d.html
Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, Anima Anandkumar
https://proceedings.mlr.press/v202/zheng23d.html
ICML 2023
Diffusion models have found widespread adoption in various areas. However, their sampling process is slow because it requires hundreds to thousands of network evaluations to emulate a continuous process defined by differential equations. In this work, we use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models. Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method that generates images with only one model forward pass. We propose diffusion model sampling with neural operator (DSNO) that maps the initial condition, i.e., Gaussian distribution, to the continuous-time solution trajectory of the reverse diffusion process. To model the temporal correlations along the trajectory, we introduce temporal convolution layers that are parameterized in the Fourier space into the given diffusion model backbone. We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
https://proceedings.mlr.press/v202/zheng23e.html
https://proceedings.mlr.press/v202/zheng23e/zheng23e.pdf
https://openreview.net/forum?id=afz7OOt6xK
Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation
https://proceedings.mlr.press/v202/zheng23e.html
Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, Kevin Wang, Yihan Xi, Dejia Xu, Zhangyang Wang
https://proceedings.mlr.press/v202/zheng23e.html
ICML 2023
For a complicated algorithm, its implementation by a human programmer usually starts with outlining a rough control flow followed by iterative enrichments, eventually yielding carefully generated syntactic structures and variables in a hierarchy. However, state-of-the-art large language models generate code in a single pass, without intermediate warm-ups to reflect the structured thought process of "outline-then-detail". Inspired by the recent success of chain-of-thought prompting, we propose ChainCoder, a program synthesis language model that generates Python code progressively, i.e., from coarse to fine in multiple passes. We first decompose source code into layout frame components and accessory components via abstract syntax tree parsing to construct a hierarchical representation. We then reform our prediction target into a multi-pass objective, where each pass generates a subsequence that is concatenated in the hierarchy. Finally, a tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples. Extensive evaluations show that ChainCoder outperforms the state of the art, demonstrating that our progressive generation eases the reasoning procedure and guides the language model to generate higher-quality solutions. Our codes are available at: https://github.com/VITA-Group/ChainCoder.
https://proceedings.mlr.press/v202/zheng23f.html
https://proceedings.mlr.press/v202/zheng23f/zheng23f.pdf
https://openreview.net/forum?id=I6xrc6HXIa
Revisiting Discriminative vs. Generative Classifiers: Theory and Implications
https://proceedings.mlr.press/v202/zheng23f.html
Chenyu Zheng, Guoqiang Wu, Fan Bao, Yue Cao, Chongxuan Li, Jun Zhu
https://proceedings.mlr.press/v202/zheng23f.html
ICML 2023
A large-scale deep model pre-trained on massive labeled or unlabeled data transfers well to downstream tasks. Linear evaluation freezes the parameters in the pre-trained model and trains a linear classifier separately, which is efficient and attractive for transfer. However, little work has investigated the classifier in linear evaluation except for the default logistic regression. Inspired by the statistical efficiency of naive Bayes, the paper revisits the classical topic of discriminative vs. generative classifiers. Theoretically, the paper considers the surrogate loss instead of the zero-one loss in the analyses and generalizes the classical results from binary cases to multiclass ones. We show that, under mild assumptions, multiclass naive Bayes requires $O(\log n)$ samples to approach its asymptotic error while the corresponding multiclass logistic regression requires $O(n)$ samples, where $n$ is the feature dimension. To establish this, we present a multiclass $\mathcal{H}$-consistency bound framework and an explicit bound for the logistic loss, which are of independent interest. Simulation results on a mixture of Gaussians validate our theoretical findings. Experiments on various pre-trained deep vision models show that naive Bayes consistently converges faster as the amount of data increases. Besides, naive Bayes shows promise in few-shot cases, and we observe the "two regimes" phenomenon in pre-trained supervised models. Our code is available at https://github.com/ML-GSAI/Revisiting-Dis-vs-Gen-Classifiers.
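A small, self-contained version of the linear-evaluation comparison the abstract studies: on frozen features (synthetic here; real use would substitute embeddings from a pre-trained model), fit Gaussian naive Bayes and logistic regression and watch how accuracy evolves with the number of labelled samples.

```python
# Naive Bayes vs. logistic regression on frozen features; the synthetic
# features stand in for pre-trained embeddings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, n_features=64, n_informative=32,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (50, 200, 1000, 2500):
    nb = GaussianNB().fit(X_train[:n], y_train[:n])
    lr = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"n={n:5d}  naive Bayes acc={nb.score(X_test, y_test):.3f}  "
          f"logistic acc={lr.score(X_test, y_test):.3f}")
```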
https://proceedings.mlr.press/v202/zheng23g.html
https://proceedings.mlr.press/v202/zheng23g/zheng23g.pdf
https://openreview.net/forum?id=6wfqx3CdKv
Evidential Interactive Learning for Medical Image Captioning
https://proceedings.mlr.press/v202/zheng23g.html
Ervine Zheng, Qi Yu
https://proceedings.mlr.press/v202/zheng23g.html
ICML 2023
Medical image captioning alleviates the burden of physicians and possibly reduces medical errors by automatically generating text descriptions to describe image contents and convey findings. It is more challenging than conventional image captioning due to the complexity of medical images and the difficulty of aligning image regions with medical terms. In this paper, we propose an evidential interactive learning framework that leverages evidence-based uncertainty estimation and interactive machine learning to improve image captioning with limited labeled data. The interactive learning process involves three stages: keyword prediction, caption generation, and model retraining. First, the model predicts a list of keywords with evidence-based uncertainty and selects the most informative keywords to seek user feedback. Second, user-approved keywords are used as model input to guide the model to generate satisfactory captions. Third, the model is updated based on user-approved keywords and captions, where evidence-based uncertainty is used to allocate different weights to different data instances. Experiments on two medical image datasets illustrate that the proposed framework can effectively learn from human feedback and improve the model’s performance in the future.
https://proceedings.mlr.press/v202/zheng23h.html
https://proceedings.mlr.press/v202/zheng23h/zheng23h.pdf
https://openreview.net/forum?id=KiUDs8yWX4
Finding the Missing-half: Graph Complementary Learning for Homophily-prone and Heterophily-prone Graphs
https://proceedings.mlr.press/v202/zheng23h.html
Yizhen Zheng, He Zhang, Vincent Lee, Yu Zheng, Xiao Wang, Shirui Pan
https://proceedings.mlr.press/v202/zheng23h.html
ICML 2023
Real-world graphs generally have only one kind of tendency in their connections. These connections are either homophily-prone or heterophily-prone. While graphs with homophily-prone edges tend to connect nodes with the same class (i.e., intra-class nodes), heterophily-prone edges tend to build relationships between nodes with different classes (i.e., inter-class nodes). Existing GNNs only take the original graph as input during training. The problem with this approach is that it fails to take into account the "missing-half" structural information, that is, heterophily-prone topology for homophily-prone graphs and homophily-prone topology for heterophily-prone graphs. In our paper, we introduce Graph cOmplementAry Learning, namely GOAL, which consists of two components: graph complementation and complemented graph convolution. The first component finds the missing-half structural information for a given graph to complement it. The complemented graph contains two sets of topology, covering both homophily- and heterophily-prone connections. In the latter component, to handle complemented graphs, we design a new graph convolution from the perspective of optimisation. The experiment results show that GOAL consistently outperforms all baselines on eight real-world datasets.
https://proceedings.mlr.press/v202/zhou23a.html
https://proceedings.mlr.press/v202/zhou23a/zhou23a.pdf
https://openreview.net/forum?id=vu1c5FUSF0
Multi-agent Online Scheduling: MMS Allocations for Indivisible Items
https://proceedings.mlr.press/v202/zhou23a.html
Shengwei Zhou, Rufan Bai, Xiaowei Wu
https://proceedings.mlr.press/v202/zhou23a.html
ICML 2023
We consider the problem of fairly allocating a sequence of indivisible items that arrive online in an arbitrary order to a group of $n$ agents with additive normalized valuation functions. We consider the allocation of goods and chores separately and propose algorithms for approximating maximin share (MMS) allocations in both settings. When agents have identical valuation functions, the problem coincides with the semi-online machine covering problem (when items are goods) and the load balancing problem (when items are chores), for both of which optimal competitive ratios have been achieved. In this paper, we consider the case when agents have general additive valuation functions. For the allocation of goods, we show that no competitive algorithm exists even when there are only three agents, and we propose an optimal $0.5$-competitive algorithm for the case of two agents. For the allocation of chores, we propose a $(2-1/n)$-competitive algorithm for $n\geq 3$ agents and a $\sqrt{2}\approx 1.414$-competitive algorithm for two agents. Additionally, we show that no algorithm can do better than $15/11\approx 1.364$-competitive for two agents.
https://proceedings.mlr.press/v202/zhou23b.html
https://proceedings.mlr.press/v202/zhou23b/zhou23b.pdf
https://openreview.net/forum?id=pDcjbSOcBu
Eliminating Adversarial Noise via Information Discard and Robust Representation Restoration
https://proceedings.mlr.press/v202/zhou23b.html
Dawei Zhou, Yukun Chen, Nannan Wang, Decheng Liu, Xinbo Gao, Tongliang Liu
https://proceedings.mlr.press/v202/zhou23b.html
ICML 2023
Deep neural networks (DNNs) are vulnerable to adversarial noise. Denoising model-based defense is a major protection strategy. However, denoising models may fail and induce negative effects in fully white-box scenarios. In this work, we start from the latent inherent properties of adversarial samples to break these limitations. Unlike solely learning a mapping from adversarial samples to natural samples, we aim to achieve denoising by destroying the spatial characteristics of adversarial noise and preserving the robust features of natural information. Motivated by this, we propose a defense based on information discard and robust representation restoration. Our method utilizes complementary masks to disrupt adversarial noise and guided denoising models to restore robust-predictive representations from masked samples. Experimental results show that our method has competitive performance against white-box attacks and effectively reverses the negative effect of denoising models.
https://proceedings.mlr.press/v202/zhou23c.html
https://proceedings.mlr.press/v202/zhou23c/zhou23c.pdf
https://openreview.net/forum?id=eEbk8eEpjU
Brainformers: Trading Simplicity for Efficiency
https://proceedings.mlr.press/v202/zhou23c.html
Yanqi Zhou, Nan Du, Yanping Huang, Daiyi Peng, Chang Lan, Da Huang, Siamak Shakeri, David So, Andrew M. Dai, Yifeng Lu, Zhifeng Chen, Quoc V Le, Claire Cui, James Laudon, Jeff Dean
https://proceedings.mlr.press/v202/zhou23c.html
ICML 2023
Transformers are central to recent successes in natural language processing and computer vision. Transformers have a mostly uniform backbone where layers alternate between feed-forward and self-attention in order to build a deep network. Here we investigate this design choice and find that more complex blocks that have different permutations of layer primitives can be more efficient. Using this insight, we develop a complex block, named Brainformer, that consists of a diverse set of layers such as sparsely gated feed-forward layers, dense feed-forward layers, attention layers, and various forms of layer normalization and activation functions. Brainformer consistently outperforms the state-of-the-art dense and sparse Transformers, in terms of both quality and efficiency. A Brainformer model with 8 billion activated parameters per token demonstrates 2x faster training convergence and 5x faster step time compared to its GLaM counterpart. In downstream task evaluation, Brainformer also demonstrates a 3% higher SuperGLUE score with fine-tuning compared to GLaM with a similar number of activated parameters. Finally, Brainformer largely outperforms a Primer dense model derived with NAS with similar computation per token on few-shot evaluations.
https://proceedings.mlr.press/v202/zhou23d.html
https://proceedings.mlr.press/v202/zhou23d/zhou23d.pdf
https://openreview.net/forum?id=dFhdAEjFAk
Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression
https://proceedings.mlr.press/v202/zhou23d.html
Mo Zhou, Rong Ge
https://proceedings.mlr.press/v202/zhou23d.html
ICML 2023
In deep learning, the training process often finds an interpolator (a solution with 0 training loss), yet the test loss is still low. This phenomenon, known as benign overfitting, is a major mystery that has received a lot of recent attention. One common mechanism for benign overfitting is implicit regularization, where the training process leads to additional properties of the interpolator, often characterized by minimizing certain norms. However, even for a simple sparse linear regression problem $y = \beta^{\ast\top} x +\xi$ with sparse $\beta^{\ast}$, neither the minimum $\ell_1$ nor the minimum $\ell_2$ norm interpolator gives the optimal test loss. In this work, we give a different parametrization of the model which leads to a new implicit regularization effect that combines the benefits of $\ell_1$ and $\ell_2$ interpolators. We show that training our new model via gradient descent leads to an interpolator with near-optimal test loss. Our result is based on a careful analysis of the training dynamics and provides another example of an implicit regularization effect that goes beyond norm minimization.
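The paper's exact parametrization is not stated in the abstract; as a hedged illustration of the general mechanism, the toy example below uses the common reparametrization $\beta = u \odot u - v \odot v$ with small initialization, for which gradient descent is known to bias the interpolator toward approximately sparse solutions.

```python
# Toy illustration of implicit regularization via reparametrization; this is a
# common illustrative choice, not necessarily the parametrization from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 60, 200, 5                         # overparameterized: d >> n
beta_star = np.zeros(d)
beta_star[:k] = 1.0
X = rng.standard_normal((n, d)) / np.sqrt(n)
y = X @ beta_star

alpha = 1e-3                                 # small initialization scale
u = np.full(d, alpha)
v = np.full(d, alpha)
lr = 0.1
for _ in range(5000):
    beta = u * u - v * v
    r = X @ beta - y                         # residual
    g = X.T @ r                              # gradient w.r.t. beta
    u -= lr * 2 * u * g                      # chain rule through beta = u*u - v*v
    v -= lr * (-2) * v * g
beta = u * u - v * v
print("train loss:", float(np.mean((X @ beta - y) ** 2)))
print("largest coordinates:", np.argsort(-np.abs(beta))[:k])
```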
https://proceedings.mlr.press/v202/zhou23e.html
https://proceedings.mlr.press/v202/zhou23e/zhou23e.pdf
https://openreview.net/forum?id=Phjti0QbkZ
ODS: Test-Time Adaptation in the Presence of Open-World Data Shift
https://proceedings.mlr.press/v202/zhou23e.html
Zhi Zhou, Lan-Zhe Guo, Lin-Han Jia, Dingchu Zhang, Yu-Feng Li
https://proceedings.mlr.press/v202/zhou23e.html
ICML 2023
Test-time adaptation (TTA) adapts a source model to the distribution shift in testing data without using any source data. Plenty of algorithms in the last decade have concentrated on covariate shift, i.e., the case where $\mathcal{D}_t(X)$, the distribution of the test inputs, differs from that of the source data. Nonetheless, in real application scenarios, it is necessary to consider the influence of label distribution shift, i.e., the case where both $\mathcal{D}_t(X)$ and $\mathcal{D}_t(Y)$ are shifted, which has not been sufficiently explored yet. To remedy this, we study a new problem setup, namely, TTA with Open-world Data Shift (AODS). The goal of AODS is to simultaneously adapt a model to covariate and label distribution shifts in the test phase. In this paper, we first analyze the relationship between classification error and distribution shifts. Motivated by this, we propose a new framework, namely ODS, which decouples the mixed distribution shift and then addresses the covariate and label distribution shifts accordingly. We conduct experiments on multiple benchmarks with different types of shifts, and the results demonstrate the superior performance of our method against the state of the art. Moreover, ODS is suitable for many TTA algorithms.
https://proceedings.mlr.press/v202/zhou23f.html
https://proceedings.mlr.press/v202/zhou23f/zhou23f.pdf
https://openreview.net/forum?id=XMer44w2u9
Fourmer: An Efficient Global Modeling Paradigm for Image Restoration
https://proceedings.mlr.press/v202/zhou23f.html
Man Zhou, Jie Huang, Chun-Le Guo, Chongyi Li
https://proceedings.mlr.press/v202/zhou23f.html
ICML 2023
Global modeling-based image restoration frameworks have become popular. However, they often require a high memory footprint and do not consider task-specific degradation. Our work presents an alternative approach to global modeling that is more efficient for image restoration. The key insights which motivate our study are two-fold: 1) Fourier transform is capable of disentangling image degradation and content component to a certain extent, serving as the image degradation prior, and 2) Fourier domain innately embraces global properties, where each pixel in the Fourier space is involved with all spatial pixels. While adhering to the “spatial interaction + channel evolution” rule of previous studies, we customize the core designs with Fourier spatial interaction modeling and Fourier channel evolution. Our paradigm, Fourmer, achieves competitive performance on common image restoration tasks such as image de-raining, image enhancement, image dehazing, and guided image super-resolution, while requiring fewer computational resources. The code for Fourmer will be made publicly available.
https://proceedings.mlr.press/v202/zhou23g.html
https://proceedings.mlr.press/v202/zhou23g/zhou23g.pdf
https://openreview.net/forum?id=DBlKltQIO0
Controlled Text Generation with Natural Language Instructions
https://proceedings.mlr.press/v202/zhou23g.html
Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Wilcox, Ryan Cotterell, Mrinmaya Sachan
https://proceedings.mlr.press/v202/zhou23g.html
ICML 2023
Large language models can be prompted to produce fluent output for a wide range of tasks without being specifically trained to do so. Nevertheless, it is notoriously difficult to control their generation in such a way that it satisfies user-specified constraints. In this paper, we present InstructCTG, a simple controlled text generation framework that incorporates different constraints by verbalizing them as natural language instructions. We annotate natural texts through a combination of off-the-shelf NLP tools and simple heuristics with the linguistic and extra-linguistic constraints they satisfy. Then, we verbalize the constraints into natural language instructions to form weakly supervised training data, i.e., we prepend the natural language verbalizations of the constraints in front of their corresponding natural language sentences. Next, we fine-tune a pre-trained language model on the augmented corpus. Compared to existing methods, InstructCTG is more flexible in terms of the types of constraints it allows the practitioner to use. It also does not require any modification of the decoding procedure. Finally, InstructCTG allows the model to adapt to new constraints without re-training through the use of in-context learning.
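A toy sketch of the weak-supervision step: detect constraints that a natural sentence already satisfies using trivial heuristics and prepend their verbalization, yielding (instruction, sentence) training pairs. The constraint types and templates here are illustrative assumptions, not the ones used by InstructCTG.

```python
# Toy weak-supervision sketch: verbalize constraints a sentence already
# satisfies (keyword presence, length bound) and prepend them as an instruction.
sentences = [
    "The committee approved the new budget after a long debate.",
    "Rain is expected across the region later this week.",
]
keywords = {"budget", "rain"}

def verbalize(sentence):
    words = sentence.lower().replace(".", "").split()
    found = sorted(keywords & set(words))
    parts = []
    if found:
        parts.append("include the word '" + "' and '".join(found) + "'")
    parts.append(f"use at most {len(words) + 2} words")
    return "Write a sentence that must " + " and ".join(parts) + ":"

training_pairs = [(verbalize(s), s) for s in sentences]
for instruction, target in training_pairs:
    print(instruction, "->", target)
```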
https://proceedings.mlr.press/v202/zhou23h.html
https://proceedings.mlr.press/v202/zhou23h/zhou23h.pdf
https://openreview.net/forum?id=be9T7nuBNi
NNSplitter: An Active Defense Solution for DNN Model via Automated Weight Obfuscation
https://proceedings.mlr.press/v202/zhou23h.html
Tong Zhou, Yukui Luo, Shaolei Ren, Xiaolin Xu
https://proceedings.mlr.press/v202/zhou23h.html
ICML 2023
As a type of valuable intellectual property (IP), deep neural network (DNN) models have been protected by techniques like watermarking. However, such passive model protection cannot fully prevent model abuse. In this work, we propose an active model IP protection scheme, namely NNSplitter, which actively protects the model by splitting it into two parts: the obfuscated model that performs poorly due to weight obfuscation, and the model secrets consisting of the indexes and original values of the obfuscated weights, which can only be accessed by authorized users with the support of the trusted execution environment. Experimental results demonstrate the effectiveness of NNSplitter, e.g., by only modifying 275 out of over 11 million (i.e., 0.002%) weights, the accuracy of the obfuscated ResNet-18 model on CIFAR-10 can drop to 10%. Moreover, NNSplitter is stealthy and resilient against norm clipping and fine-tuning attacks, making it an appealing solution for DNN model protection. The code is available at: https://github.com/Tongzhou0101/NNSplitter.
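A simplified software-only sketch of the split-and-obfuscate idea: a tiny set of weights is selected, the original values are kept aside as model secrets, and the deployed copy carries obfuscated values at those positions; authorized inference restores the secrets (which the paper assumes happens inside a trusted execution environment). The selection rule and obfuscation noise below are toy assumptions.

```python
# Toy split-and-obfuscate sketch; the weight selection and obfuscation rules
# are illustrative, not the paper's optimized choices.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)
with torch.no_grad():
    w = model.weight.view(-1)
    k = max(1, int(0.002 * w.numel()))             # small fraction of weights (toy)
    idx = torch.topk(w.abs(), k).indices           # pick influential weights (assumption)
    secrets = {"indices": idx.clone(), "values": w[idx].clone()}
    w[idx] = torch.randn(k) * 10.0                 # obfuscate the deployed copy

x = torch.randn(4, 128)
obfuscated_out = model(x)                          # what unauthorized users would get

with torch.no_grad():                              # authorized path: restore secrets
    model.weight.view(-1)[secrets["indices"]] = secrets["values"]
restored_out = model(x)
print(torch.allclose(obfuscated_out, restored_out))  # False: outputs differ
```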
https://proceedings.mlr.press/v202/zhou23i.html
https://proceedings.mlr.press/v202/zhou23i/zhou23i.pdf
https://openreview.net/forum?id=juHlutJcm6
Deep Latent State Space Models for Time-Series Generation
https://proceedings.mlr.press/v202/zhou23i.html
Linqi Zhou, Michael Poli, Winnie Xu, Stefano Massaroli, Stefano Ermon
https://proceedings.mlr.press/v202/zhou23i.html
ICML 2023
Methods based on ordinary differential equations (ODEs) are widely used to build generative models of time-series. In addition to high computational overhead due to explicitly computing hidden states recurrence, existing ODE-based models fall short in learning sequence data with sharp transitions - common in many real-world systems - due to numerical challenges during optimization. In this work, we propose LS4, a generative model for sequences with latent variables evolving according to a state space ODE to increase modeling capacity. Inspired by recent deep state space models (S4), we achieve speedups by leveraging a convolutional representation of LS4 which bypasses the explicit evaluation of hidden states. We show that LS4 significantly outperforms previous continuous-time generative models in terms of marginal distribution, classification, and prediction scores on real-world datasets in the Monash Forecasting Repository, and is capable of modeling highly stochastic data with sharp temporal transitions. LS4 sets the state of the art for continuous-time latent generative models, with significant improvement of mean squared error and tighter variational lower bounds on irregularly-sampled datasets, while also being 100$\times$ faster than other baselines on long sequences.
https://proceedings.mlr.press/v202/zhou23j.html
https://proceedings.mlr.press/v202/zhou23j/zhou23j.pdf
https://openreview.net/forum?id=G6L1kwy9AA
SlotGAT: Slot-based Message Passing for Heterogeneous Graphs
https://proceedings.mlr.press/v202/zhou23j.html
Ziang Zhou, Jieming Shi, Renchi Yang, Yuanhang Zou, Qing Li
https://proceedings.mlr.press/v202/zhou23j.html
ICML 2023
Heterogeneous graphs are ubiquitous for modeling complex data. There is an urgent need for powerful heterogeneous graph neural networks to effectively support important applications. We identify a potential semantic mixing issue in existing message passing processes, where the representations of the neighbors of a node v are forced to be transformed to the feature space of v for aggregation, even though the neighbors are of different types. That is, the semantics of different node types are entangled together in node v’s representation. To address the issue, we propose SlotGAT with separate message passing processes in slots, one for each node type, to maintain the representations in their own node-type feature spaces. Moreover, in a slot-based message passing layer, we design an attention mechanism for effective slot-wise message aggregation. Further, we develop a slot attention technique after the last layer of SlotGAT, to learn the importance of different slots in downstream tasks. Our analysis indicates that the slots in SlotGAT can preserve different semantics in various feature spaces. The superiority of SlotGAT is evaluated against 13 baselines on 6 datasets for node classification and link prediction. Our code is at https://github.com/scottjiao/SlotGAT_ICML23/.
https://proceedings.mlr.press/v202/zhou23k.html
https://proceedings.mlr.press/v202/zhou23k/zhou23k.pdf
https://openreview.net/forum?id=p6T3omuNZK
Fast Online Node Labeling for Very Large Graphs
https://proceedings.mlr.press/v202/zhou23k.html
Baojian Zhou, Yifan Sun, Reza Babanezhad Harikandeh
https://proceedings.mlr.press/v202/zhou23k.html
ICML 2023
This paper studies the online node classification problem under a transductive learning setting. Current methods either invert a graph kernel matrix with $\mathcal{O}(n^3)$ runtime and $\mathcal{O}(n^2)$ space complexity or sample a large volume of random spanning trees, thus are difficult to scale to large graphs. In this work, we propose an improvement based on the online relaxation technique introduced by a series of works (Rakhlin et al., 2012; Rakhlin & Sridharan, 2015; 2017). We first prove an effective regret $\mathcal{O}(\sqrt{n^{1+\gamma}})$ when suitable parameterized graph kernels are chosen, then propose an approximate algorithm FastONL enjoying $\mathcal{O}(k\sqrt{n^{1+\gamma}})$ regret based on this relaxation. The key of FastONL is a generalized local push method that effectively approximates inverse matrix columns and applies to a series of popular kernels. Furthermore, the per-prediction cost is $\mathcal{O}(\operatorname{vol}{\mathcal{S}}\log 1/\epsilon)$ locally dependent on the graph with linear memory cost. Experiments show that our scalable method enjoys a better tradeoff between local and global consistency.
https://proceedings.mlr.press/v202/zhou23l.html
https://proceedings.mlr.press/v202/zhou23l/zhou23l.pdf
https://openreview.net/forum?id=t08AihqKPQ
Horizon-Free and Variance-Dependent Reinforcement Learning for Latent Markov Decision Processes
https://proceedings.mlr.press/v202/zhou23l.html
Runlong Zhou, Ruosong Wang, Simon Shaolei Du
https://proceedings.mlr.press/v202/zhou23l.html
ICML 2023
We study regret minimization for reinforcement learning (RL) in Latent Markov Decision Processes (LMDPs) with context in hindsight. We design a novel model-based algorithmic framework which can be instantiated with both a model-optimistic and a value-optimistic solver. We prove an $\tilde{O}(\sqrt{\mathsf{Var}^\star M \Gamma S A K})$ regret bound where $\tilde{O}$ hides logarithmic factors, $M$ is the number of contexts, $S$ is the number of states, $A$ is the number of actions, $K$ is the number of episodes, $\Gamma \le S$ is the maximum transition degree of any state-action pair, and $\mathsf{Var}^\star$ is a variance quantity describing the determinism of the LMDP. The regret bound only scales logarithmically with the planning horizon, thus yielding the first (nearly) horizon-free regret bound for LMDPs. This is also the first problem-dependent regret bound for LMDPs. Key to our proof is an analysis of the total variance of alpha vectors (a generalization of value functions), which is handled with a truncation method. We complement our positive result with a novel $\Omega(\sqrt{\mathsf{Var}^\star M S A K})$ regret lower bound with $\Gamma = 2$, which shows that our upper bound is minimax optimal when $\Gamma$ is a constant for the class of variance-bounded LMDPs. Our lower bound relies on new constructions of hard instances and an argument inspired by the symmetrization technique from theoretical computer science, both of which are technically different from existing lower bound proofs for MDPs, and thus can be of independent interest.