abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v202/chen23ae.html | https://proceedings.mlr.press/v202/chen23ae/chen23ae.pdf | https://openreview.net/forum?id=9afHiUivJR | Lower Bounds for Learning in Revealing POMDPs | https://proceedings.mlr.press/v202/chen23ae.html | Fan Chen, Huan Wang, Caiming Xiong, Song Mei, Yu Bai | https://proceedings.mlr.press/v202/chen23ae.html | ICML 2023 | This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting. While it is well-established that learning in Partially Observable Markov Decision Processes (POMDPs) requires exponentially many samples in the worst case, a surge of recent work shows that polynomial sample complexities are achievable under the revealing condition—a natural condition that requires the observables to reveal some information about the unobserved latent states. However, the fundamental limits for learning in revealing POMDPs are much less understood, with existing lower bounds being rather preliminary and having substantial gaps from the current best upper bounds. We establish strong PAC and regret lower bounds for learning in revealing POMDPs. Our lower bounds scale polynomially in all relevant problem parameters in a multiplicative fashion, and achieve significantly smaller gaps against the current best upper bounds, providing a solid starting point for future studies. In particular, for multi-step revealing POMDPs, we show that (1) the latent state-space dependence is at least $\Omega(S^{1.5})$ in the PAC sample complexity, which is notably harder than the $\widetilde{\Theta}(S)$ scaling for fully-observable MDPs; (2) any polynomial sublinear regret is at least $\Omega(T^{2/3})$, suggesting its fundamental difference from the single-step case where $\widetilde{\mathcal{O}}(\sqrt{T})$ regret is achievable. Technically, our hard instance construction adapts techniques in distribution testing, which is new to the RL literature and may be of independent interest. We also complement our results with new sharp regret upper bounds for strongly B-stable PSRs, which include single-step revealing POMDPs as a special case. |
https://proceedings.mlr.press/v202/chen23af.html | https://proceedings.mlr.press/v202/chen23af/chen23af.pdf | https://openreview.net/forum?id=7BO6rpA6qQ | Implicit Neural Spatial Representations for Time-dependent PDEs | https://proceedings.mlr.press/v202/chen23af.html | Honglin Chen, Rundi Wu, Eitan Grinspun, Changxi Zheng, Peter Yichen Chen | https://proceedings.mlr.press/v202/chen23af.html | ICML 2023 | Implicit Neural Spatial Representation (INSR) has emerged as an effective representation of spatially-dependent vector fields. This work explores solving time-dependent PDEs with INSR. Classical PDE solvers introduce both temporal and spatial discretizations. Common spatial discretizations include meshes and meshless point clouds, where each degree-of-freedom corresponds to a location in space. While these explicit spatial correspondences are intuitive to model and understand, these representations are not necessarily optimal for accuracy, memory usage, or adaptivity. Keeping the classical temporal discretization unchanged (e.g., explicit/implicit Euler), we explore INSR as an alternative spatial discretization, where spatial information is implicitly stored in the neural network weights. The network weights then evolve over time via time integration. Our approach does not require any training data generated by existing solvers because our approach is the solver itself. We validate our approach on various PDEs with examples involving large elastic deformations, turbulent fluids, and multi-scale phenomena. While slower to compute than traditional representations, our approach exhibits higher accuracy and lower memory consumption. Whereas classical solvers can dynamically adapt their spatial representation only by resorting to complex remeshing algorithms, our INSR approach is intrinsically adaptive. By tapping into the rich literature of classic time integrators, e.g., operator-splitting schemes, our method enables challenging simulations in contact mechanics and turbulent flows where previous neural-physics approaches struggle. Videos and codes are available on the project page: http://www.cs.columbia.edu/cg/INSR-PDE/ |
https://proceedings.mlr.press/v202/chen23ag.html | https://proceedings.mlr.press/v202/chen23ag/chen23ag.pdf | https://openreview.net/forum?id=Fj0PRtd4e6 | BEATs: Audio Pre-Training with Acoustic Tokenizers | https://proceedings.mlr.press/v202/chen23ag.html | Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, Wanxiang Che, Xiangzhan Yu, Furu Wei | https://proceedings.mlr.press/v202/chen23ag.html | ICML 2023 | We introduce a self-supervised learning (SSL) framework BEATs for general audio representation pre-training, where we optimize an acoustic tokenizer and an audio SSL model by iterations. Unlike the previous audio SSL models that employ reconstruction loss for pre-training, our audio SSL model is trained with the discrete label prediction task, where the labels are generated by a semantic-rich acoustic tokenizer. We propose an iterative pipeline to jointly optimize the tokenizer and the pre-trained model, aiming to abstract high-level semantics and discard the redundant details for audio. The experimental results demonstrate our acoustic tokenizers can generate discrete labels with rich audio semantics and our audio SSL models achieve state-of-the-art (SOTA) results across various audio classification benchmarks, even outperforming previous models that use more training data and model parameters significantly. Specifically, we set a new SOTA mAP 50.6% on AudioSet-2M without using any external data, and 98.1% accuracy on ESC-50. The code and pre-trained models are available at https://aka.ms/beats. |
https://proceedings.mlr.press/v202/chen23ah.html | https://proceedings.mlr.press/v202/chen23ah/chen23ah.pdf | https://openreview.net/forum?id=xDIppoiFrA | Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model | https://proceedings.mlr.press/v202/chen23ah.html | Siyu Chen, Jibang Wu, Yifan Wu, Zhuoran Yang | https://proceedings.mlr.press/v202/chen23ah.html | ICML 2023 | We study the incentivized information acquisition problem, where a principal hires an agent to gather information on her behalf. Such a problem is modeled as a Stackelberg game between the principal and the agent, where the principal announces a scoring rule that specifies the payment, and the agent then chooses an effort level that maximizes her own profit and reports the information. We study the online setting of such a problem from the principal’s perspective, i.e., designing the optimal scoring rule by repeatedly interacting with the strategic agent. We design a provably sample efficient algorithm that tailors the UCB algorithm (Auer et al., 2002) to our model, which achieves a $\mathcal{O} (K^2\cdot T^{2/3})$ regret after $T$ iterations, where $K$ is the number of effort levels of the agent. Our algorithm features a delicate estimation procedure for the optimal profit of the principal, and a conservative correction scheme that ensures the desired agent’s actions are incentivized. Furthermore, a key feature of our regret bound is that it is independent of the number of states of the environment. |
https://proceedings.mlr.press/v202/chen23ai.html | https://proceedings.mlr.press/v202/chen23ai/chen23ai.pdf | https://openreview.net/forum?id=VqnEAUnfvu | Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization | https://proceedings.mlr.press/v202/chen23ai.html | Lesi Chen, Jing Xu, Luo Luo | https://proceedings.mlr.press/v202/chen23ai.html | ICML 2023 | We consider the optimization problem of the form $\min_{x \in \mathbb{R}^d} f(x) \triangleq \mathbb{E}[F(x;\xi)]$, where the component $F(x;\xi)$ is $L$-mean-squared Lipschitz but possibly nonconvex and nonsmooth. The recently proposed gradient-free method requires at most $\mathcal{O}( L^4 d^{3/2} \epsilon^{-4} + \Delta L^3 d^{3/2} \delta^{-1} \epsilon^{-4})$ stochastic zeroth-order oracle complexity to find a $(\delta,\epsilon)$-Goldstein stationary point of the objective function, where $\Delta = f(x_0) - \inf_{x \in \mathbb{R}^d} f(x)$ and $x_0$ is the initial point of the algorithm. This paper proposes a more efficient algorithm using stochastic recursive gradient estimators, which improves the complexity to $\mathcal{O}(L^3 d^{3/2} \epsilon^{-3}+ \Delta L^2 d^{3/2} \delta^{-1} \epsilon^{-3})$. |
https://proceedings.mlr.press/v202/chen23aj.html | https://proceedings.mlr.press/v202/chen23aj/chen23aj.pdf | https://openreview.net/forum?id=ieSN7Xyo8g | Efficient Personalized Federated Learning via Sparse Model-Adaptation | https://proceedings.mlr.press/v202/chen23aj.html | Daoyuan Chen, Liuyi Yao, Dawei Gao, Bolin Ding, Yaliang Li | https://proceedings.mlr.press/v202/chen23aj.html | ICML 2023 | Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data. Due to the heterogeneity of clients’ local data distribution, recent studies explore the personalized FL that learns and deploys distinct local models with the help of auxiliary global models. However, the clients can be heterogeneous in terms of not only local data distribution, but also their computation and communication resources. The capacity and efficiency of personalized models are restricted by the lowest-resource clients, leading to sub-optimal performance and limited practicality of personalized FL. To overcome these challenges, we propose a novel approach named pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models. With a lightweight trainable gating layer, pFedGate enables clients to reach their full potential in model capacity by generating different sparse models accounting for both the heterogeneous data distributions and resource constraints. Meanwhile, the computation and communication efficiency are both improved thanks to the adaptability between the model sparsity and clients’ resources. Further, we theoretically show that the proposed pFedGate has superior complexity with guaranteed convergence and generalization error. Extensive experiments show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods. We also demonstrate that pFedGate performs better than competitors in the novel clients participation and partial clients participation scenarios, and can learn meaningful sparse local models adapted to different data distributions. |
https://proceedings.mlr.press/v202/chen23ak.html | https://proceedings.mlr.press/v202/chen23ak/chen23ak.pdf | https://openreview.net/forum?id=8lCz8flXkr | A Gromov-Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening | https://proceedings.mlr.press/v202/chen23ak.html | Yifan Chen, Rentian Yao, Yun Yang, Jie Chen | https://proceedings.mlr.press/v202/chen23ak.html | ICML 2023 | Graph coarsening is a technique for solving large-scale graph problems by working on a smaller version of the original graph, and possibly interpolating the results back to the original graph. It has a long history in scientific computing and has recently gained popularity in machine learning, particularly in methods that preserve the graph spectrum. This work studies graph coarsening from a different perspective, developing a theory for preserving graph distances and proposing a method to achieve this. The geometric approach is useful when working with a collection of graphs, such as in graph classification and regression. In this study, we consider a graph as an element on a metric space equipped with the Gromov–Wasserstein (GW) distance, and bound the difference between the distance of two graphs and their coarsened versions. Minimizing this difference can be done using the popular weighted kernel $K$-means method, which improves existing spectrum-preserving methods with the proper choice of the kernel. The study includes a set of experiments to support the theory and method, including approximating the GW distance, preserving the graph spectrum, classifying graphs using spectral information, and performing regression using graph convolutional networks. Code is available at https://github.com/ychen-stat-ml/GW-Graph-Coarsening. |
https://proceedings.mlr.press/v202/chen23al.html | https://proceedings.mlr.press/v202/chen23al/chen23al.pdf | https://openreview.net/forum?id=fB9YIfI6WQ | How to address monotonicity for model risk management? | https://proceedings.mlr.press/v202/chen23al.html | Dangxing Chen, Weicheng Ye | https://proceedings.mlr.press/v202/chen23al.html | ICML 2023 | In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. As a means of achieving monotonicity while maintaining transparency, we propose the monotonic groves of neural additive models. Through empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair. |
https://proceedings.mlr.press/v202/chen23am.html | https://proceedings.mlr.press/v202/chen23am/chen23am.pdf | https://openreview.net/forum?id=WmAfdffvfe | Sketched Ridgeless Linear Regression: The Role of Downsampling | https://proceedings.mlr.press/v202/chen23am.html | Xin Chen, Yicheng Zeng, Siyue Yang, Qiang Sun | https://proceedings.mlr.press/v202/chen23am.html | ICML 2023 | Overparametrization often helps improve the generalization performance. This paper presents a dual view of overparametrization suggesting that downsampling may also help generalize. Focusing on the proportional regime $m\asymp n \asymp p$, where $m$ represents the sketching size, $n$ is the sample size, and $p$ is the feature dimensionality, we investigate two out-of-sample prediction risks of the sketched ridgeless least square estimator. Our findings challenge conventional beliefs by showing that downsampling does not always harm generalization but can actually improve it in certain cases. We identify the optimal sketching size that minimizes out-of-sample prediction risks and demonstrate that the optimally sketched estimator exhibits stabler risk curves, eliminating the peaks of those for the full-sample estimator. To facilitate practical implementation, we propose an empirical procedure to determine the optimal sketching size. Finally, we extend our analysis to cover central limit theorems and misspecified models. Numerical studies strongly support our theory. |
https://proceedings.mlr.press/v202/chen23an.html | https://proceedings.mlr.press/v202/chen23an/chen23an.pdf | https://openreview.net/forum?id=pLQoqbUTue | Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning | https://proceedings.mlr.press/v202/chen23an.html | Dingyang Chen, Qi Zhang | https://proceedings.mlr.press/v202/chen23an.html | ICML 2023 | Executing actions in a correlated manner is a common strategy for human coordination that often leads to better cooperation, which is also potentially beneficial for cooperative multi-agent reinforcement learning (MARL). However, the recent success of MARL relies heavily on the convenient paradigm of purely decentralized execution, where there is no action correlation among agents for scalability considerations. In this work, we introduce a Bayesian network to inaugurate correlations between agents’ action selections in their joint policy. Theoretically, we justify why action dependencies are beneficial by deriving the multi-agent policy gradient formula under such a Bayesian network joint policy and proving its global convergence to Nash equilibria under tabular softmax policy parameterization in cooperative Markov games. Further, by equipping existing MARL algorithms with a recent method of differentiable directed acyclic graphs (DAGs), we develop practical algorithms to learn the context-aware Bayesian network policies in scenarios with partial observability and varying difficulty. We also dynamically decrease the sparsity of the learned DAG throughout the training process, which leads to weakly or even purely independent policies for decentralized execution. Empirical results on a range of MARL benchmarks show the benefits of our approach. |
https://proceedings.mlr.press/v202/chen23ao.html | https://proceedings.mlr.press/v202/chen23ao/chen23ao.pdf | https://openreview.net/forum?id=CUORPu6abU | Bidirectional Learning for Offline Model-based Biological Sequence Design | https://proceedings.mlr.press/v202/chen23ao.html | Can Chen, Yingxue Zhang, Xue Liu, Mark Coates | https://proceedings.mlr.press/v202/chen23ao.html | ICML 2023 | Offline model-based optimization aims to maximize a black-box objective function with a static dataset of designs and their scores. In this paper, we focus on biological sequence design to maximize some sequence score. A recent approach employs bidirectional learning, combining a forward mapping for exploitation and a backward mapping for constraint, and it relies on the neural tangent kernel (NTK) of an infinitely wide network to build a proxy model. Though effective, the NTK cannot learn features because of its parametrization, and its use prevents the incorporation of powerful pre-trained Language Models (LMs) that can capture the rich biophysical information in millions of biological sequences. We adopt an alternative proxy model, adding a linear head to a pre-trained LM, and propose a linearization scheme. This yields a closed-form loss and also takes into account the biophysical information in the pre-trained LM. In addition, the forward mapping and the backward mapping play different roles and thus deserve different weights during sequence optimization. To achieve this, we train an auxiliary model and leverage its weak supervision signal via a bi-level optimization framework to effectively learn how to balance the two mappings. Further, by extending the framework, we develop the first learning rate adaptation module Adaptive-$\eta$, which is compatible with all gradient-based algorithms for offline model-based optimization. Experimental results on DNA/protein sequence design tasks verify the effectiveness of our algorithm. Our code is available at https://github.com/GGchen1997/BIB-ICML2023-Submission. |
https://proceedings.mlr.press/v202/chen23ap.html | https://proceedings.mlr.press/v202/chen23ap/chen23ap.pdf | https://openreview.net/forum?id=Uj5AVsHkoX | Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling | https://proceedings.mlr.press/v202/chen23ap.html | Tianqi Chen, Mingyuan Zhou | https://proceedings.mlr.press/v202/chen23ap.html | ICML 2023 | Learning to denoise has emerged as a prominent paradigm to design state-of-the-art deep generative models for natural images. How to use it to model the distributions of both continuous real-valued data and categorical data has been well studied in recently proposed diffusion models. However, it is found in this paper to have limited ability in modeling some other types of data, such as count and non-negative continuous data, that are often highly sparse, skewed, heavy-tailed, and/or overdispersed. To this end, we propose learning to jump as a general recipe for generative modeling of various types of data. Using a forward count thinning process to construct learning objectives to train a deep neural network, it employs a reverse count thickening process to iteratively refine its generation through that network. We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better. For example, learning to jump is recommended when the training data is non-negative and exhibits strong sparsity, skewness, heavy-tailedness, and/or heterogeneity. |
https://proceedings.mlr.press/v202/chen23aq.html | https://proceedings.mlr.press/v202/chen23aq/chen23aq.pdf | https://openreview.net/forum?id=Q4QFG5Fe4O | Lifelong Language Pretraining with Distribution-Specialized Experts | https://proceedings.mlr.press/v202/chen23aq.html | Wuyang Chen, Yanqi Zhou, Nan Du, Yanping Huang, James Laudon, Zhifeng Chen, Claire Cui | https://proceedings.mlr.press/v202/chen23aq.html | ICML 2023 | Pretraining on a large-scale corpus has become a standard method to build general language models (LMs). Adapting a model to new data distributions targeting different downstream tasks poses significant challenges. Naive fine-tuning may incur catastrophic forgetting when the over-parameterized LMs overfit the new data but fail to preserve the pretrained features. Lifelong learning (LLL) aims to enable information systems to learn from a continuous data stream across time. However, most prior work modifies the training recipe assuming a static fixed network architecture. We find that additional model capacity and proper regularization are key elements to achieving strong LLL performance. Thus, we propose Lifelong-MoE, an extensible MoE (Mixture-of-Experts) architecture that dynamically adds model capacity via adding experts with regularized pretraining. Our results show that by only introducing a limited number of extra experts while keeping the computation cost constant, our model can steadily adapt to data distribution shifts while preserving the previous knowledge. Compared to existing lifelong learning approaches, Lifelong-MoE achieves better few-shot performance on NLP tasks. More impressively, Lifelong-MoE surpasses multi-task learning on 19 downstream NLU tasks. |
https://proceedings.mlr.press/v202/chen23ar.html | https://proceedings.mlr.press/v202/chen23ar/chen23ar.pdf | https://openreview.net/forum?id=v50lXuFMW0 | Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization | https://proceedings.mlr.press/v202/chen23ar.html | Ziyi Chen, Yi Zhou, Yingbin Liang, Zhaosong Lu | https://proceedings.mlr.press/v202/chen23ar.html | ICML 2023 | Various optimal gradient-based algorithms have been developed for smooth nonconvex optimization. However, many nonconvex machine learning problems do not belong to the class of smooth functions and therefore the existing algorithms are sub-optimal. Instead, these problems have been shown to satisfy certain generalized-smooth conditions, which have not been well understood in the existing literature. In this paper, we propose a notion of $\alpha$-symmetric generalized-smoothness that substantially extends the existing notions and covers many important functions such as high-order polynomials and exponential functions. We study the fundamental properties and establish descent lemmas for the functions in this class. Then, to solve such a large class of nonconvex problems, we design a special deterministic normalized gradient descent algorithm that achieves the optimal iteration complexity $\mathcal{O}(\epsilon^{-2})$, and also prove that the popular SPIDER variance reduction algorithm achieves the optimal sample complexity $\mathcal{O}(\epsilon^{-3})$. Our results show that solving generalized-smooth nonconvex problems is as efficient as solving smooth nonconvex problems. |
https://proceedings.mlr.press/v202/cheng23a.html | https://proceedings.mlr.press/v202/cheng23a/cheng23a.pdf | https://openreview.net/forum?id=O2XerBwfFk | Weakly Supervised Regression with Interval Targets | https://proceedings.mlr.press/v202/cheng23a.html | Xin Cheng, Yuzhou Cao, Ximing Li, Bo An, Lei Feng | https://proceedings.mlr.press/v202/cheng23a.html | ICML 2023 | This paper investigates an interesting weakly supervised regression setting called regression with interval targets (RIT). Although some of the previous methods on relevant regression settings can be adapted to RIT, they are not statistically consistent, and thus their empirical performance is not guaranteed. In this paper, we provide a thorough study on RIT. First, we propose a novel statistical model to describe the data generation process for RIT and demonstrate its validity. Second, we analyze a simple selecting method for RIT, which selects a particular value in the interval as the target value to train the model. Third, we propose a statistically consistent limiting method for RIT to train the model by limiting the predictions to the interval. We further derive an estimation error bound for our limiting method. Finally, extensive experiments on various datasets demonstrate the effectiveness of our proposed method. |
https://proceedings.mlr.press/v202/cheng23b.html | https://proceedings.mlr.press/v202/cheng23b/cheng23b.pdf | https://openreview.net/forum?id=2jvwyTm6Pk | PLay: Parametrically Conditioned Layout Generation using Latent Diffusion | https://proceedings.mlr.press/v202/cheng23b.html | Chin-Yi Cheng, Forrest Huang, Gang Li, Yang Li | https://proceedings.mlr.press/v202/cheng23b.html | ICML 2023 | Layout design is an important task in various design fields, including user interface, document, and graphic design. As this task requires tedious manual effort by designers, prior works have attempted to automate this process using generative models, but commonly fell short of providing intuitive user controls and achieving design objectives. In this paper, we build a conditional latent diffusion model, PLay, that generates parametrically conditioned layouts in vector graphic space from user-specified guidelines, which are commonly used by designers for representing their design intents in current practices. Our method outperforms prior works across three datasets on metrics including FID and FD-VG, and in user tests. Moreover, it brings a novel and interactive experience to professional layout design processes. |
https://proceedings.mlr.press/v202/cheng23c.html | https://proceedings.mlr.press/v202/cheng23c/cheng23c.pdf | https://openreview.net/forum?id=HBrQI0tX8F | Identification of the Adversary from a Single Adversarial Example | https://proceedings.mlr.press/v202/cheng23c.html | Minhao Cheng, Rui Min, Haochen Sun, Pin-Yu Chen | https://proceedings.mlr.press/v202/cheng23c.html | ICML 2023 | Deep neural networks have been shown vulnerable to adversarial examples. Even though many defense methods have been proposed to enhance the robustness, it is still a long way toward providing an attack-free method to build a trustworthy machine learning system. In this paper, instead of enhancing the robustness, we take the investigator’s perspective and propose a new framework to trace the first compromised model copy in a forensic investigation manner. Specifically, we focus on the following setting: the machine learning service provider provides model copies for a set of customers. However, one of the customers conducted adversarial attacks to fool the system. Therefore, the investigator’s objective is to identify the first compromised copy by collecting and analyzing evidence from only available adversarial examples. To make the tracing viable, we design a random mask watermarking mechanism to differentiate adversarial examples from different copies. First, we propose a tracing approach in the data-limited case where the original example is also available. Then, we design a data-free approach to identify the adversary without accessing the original example. Finally, the effectiveness of our proposed framework is evaluated by extensive experiments with different model architectures, adversarial attacks, and datasets. |
https://proceedings.mlr.press/v202/cheng23d.html | https://proceedings.mlr.press/v202/cheng23d/cheng23d.pdf | https://openreview.net/forum?id=pLky79p1Ne | Parallel Online Clustering of Bandits via Hedonic Game | https://proceedings.mlr.press/v202/cheng23d.html | Xiaotong Cheng, Cheng Pan, Setareh Maghsudi | https://proceedings.mlr.press/v202/cheng23d.html | ICML 2023 | Contextual bandit algorithms appear in several applications, such as online advertisement and recommendation systems like personalized education or personalized medicine. Individually-tailored recommendations boost the performance of the underlying application; nevertheless, providing individual suggestions becomes costly and even implausible as the number of users grows. As such, to efficiently serve the demands of several users in modern applications, it is imperative to identify the underlying users’ clusters, i.e., the groups of users for which a single recommendation might be (near-)optimal. We propose CLUB-HG, a novel algorithm that integrates a game-theoretic approach into clustering inference. Our algorithm achieves Nash equilibrium at each inference step and discovers the underlying clusters. We also provide regret analysis within a standard linear stochastic noise setting. Finally, experiments on synthetic and real-world datasets show the superior performance of our proposed algorithm compared to the state-of-the-art algorithms. |
https://proceedings.mlr.press/v202/cheng23e.html | https://proceedings.mlr.press/v202/cheng23e/cheng23e.pdf | https://openreview.net/forum?id=eIQIcUKs0T | Mu$^2$SLAM: Multitask, Multilingual Speech and Language Models | https://proceedings.mlr.press/v202/cheng23e.html | Yong Cheng, Yu Zhang, Melvin Johnson, Wolfgang Macherey, Ankur Bapna | https://proceedings.mlr.press/v202/cheng23e.html | ICML 2023 | We present Mu$^2$SLAM, a multilingual sequence-to-sequence model pre-trained jointly on unlabeled speech, unlabeled text and supervised data spanning Automatic Speech Recognition (ASR), Automatic Speech Translation (AST) and Machine Translation (MT), in over 100 languages. By leveraging a quantized representation of speech as a target, Mu$^2$SLAM trains the speech-text models with a sequence-to-sequence masked denoising objective similar to T5 on the decoder and a masked language modeling objective (MLM) on the encoder, for both unlabeled speech and text, while utilizing the supervised tasks to improve cross-lingual and cross-modal representation alignment within the model. On CoVoST AST, Mu$^2$SLAM establishes a new state-of-the-art for models trained on public datasets, improving on xx-en translation over the previous best by 1.9 BLEU points and on en-xx translation by 1.1 BLEU points. On Voxpopuli ASR, our model matches the performance of an mSLAM model fine-tuned with an RNN-T decoder, despite using a relatively weaker Transformer decoder. On text understanding tasks, our model improves by more than 6% over mSLAM on XNLI, getting closer to the performance of mT5 models of comparable capacity on XNLI and TydiQA, paving the way towards a single model for all speech and text understanding tasks. |
https://proceedings.mlr.press/v202/cheng23f.html | https://proceedings.mlr.press/v202/cheng23f/cheng23f.pdf | https://openreview.net/forum?id=haYpY2kDAb | Understanding the Role of Feedback in Online Learning with Switching Costs | https://proceedings.mlr.press/v202/cheng23f.html | Duo Cheng, Xingyu Zhou, Bo Ji | https://proceedings.mlr.press/v202/cheng23f.html | ICML 2023 | In this paper, we study the role of feedback in online learning with switching costs. It has been shown that the minimax regret is $\widetilde{\Theta}(T^{2/3})$ under bandit feedback and improves to $\widetilde{\Theta}(\sqrt{T})$ under full-information feedback, where $T$ is the length of the time horizon. However, it remains largely unknown how the amount and type of feedback generally impact regret. To this end, we first consider the setting of bandit learning with extra observations; that is, in addition to the typical bandit feedback, the learner can freely make a total of $B_{\mathrm{ex}}$ extra observations. We fully characterize the minimax regret in this setting, which exhibits an interesting phase-transition phenomenon: when $B_{\mathrm{ex}} = O(T^{2/3})$, the regret remains $\widetilde{\Theta}(T^{2/3})$, but when $B_{\mathrm{ex}} = \Omega(T^{2/3})$, it becomes $\widetilde{\Theta}(T/\sqrt{B_{\mathrm{ex}}})$, which improves as the budget $B_{\mathrm{ex}}$ increases. To design algorithms that can achieve the minimax regret, it is instructive to consider a more general setting where the learner has a budget of $B$ total observations. We fully characterize the minimax regret in this setting as well and show that it is $\widetilde{\Theta}(T/\sqrt{B})$, which scales smoothly with the total budget $B$. Furthermore, we propose a generic algorithmic framework, which enables us to design different learning algorithms that can achieve matching upper bounds for both settings based on the amount and type of feedback. One interesting finding is that while bandit feedback can still guarantee optimal regret when the budget is relatively limited, it no longer suffices to achieve optimal regret when the budget is relatively large. |
https://proceedings.mlr.press/v202/chiang23a.html | https://proceedings.mlr.press/v202/chiang23a/chiang23a.pdf | https://openreview.net/forum?id=XKcogevHj8 | Tighter Bounds on the Expressivity of Transformer Encoders | https://proceedings.mlr.press/v202/chiang23a.html | David Chiang, Peter Cholak, Anand Pillay | https://proceedings.mlr.press/v202/chiang23a.html | ICML 2023 | Characterizing neural networks in terms of better-understood formal systems has the potential to yield new insights into the power and limitations of these networks. Doing so for transformers remains an active area of research. Bhattamishra and others have shown that transformer encoders are at least as expressive as a certain kind of counter machine, while Merrill and Sabharwal have shown that fixed-precision transformer encoders recognize only languages in uniform $TC^0$. We connect and strengthen these results by identifying a variant of first-order logic with counting quantifiers that is simultaneously an upper bound for fixed-precision transformer encoders and a lower bound for transformer encoders. This brings us much closer than before to an exact characterization of the languages that transformer encoders recognize. |
https://proceedings.mlr.press/v202/chidambaram23a.html | https://proceedings.mlr.press/v202/chidambaram23a/chidambaram23a.pdf | https://openreview.net/forum?id=TXGh5DI3FP | Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup | https://proceedings.mlr.press/v202/chidambaram23a.html | Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge | https://proceedings.mlr.press/v202/chidambaram23a.html | ICML 2023 | Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels. In recent years, Mixup has become a standard primitive used in the training of state-of-the-art image classification models due to its demonstrated benefits over empirical risk minimization with regards to generalization and robustness. In this work, we try to explain some of this success from a feature learning perspective. We focus our attention on classification problems in which each class may have multiple associated features (or $\textit{views}$) that can be used to predict the class correctly. Our main theoretical results demonstrate that, for a non-trivial class of data distributions with two features per class, training a 2-layer convolutional network using empirical risk minimization can lead to learning only one feature for almost all classes while training with a specific instantiation of Mixup succeeds in learning both features for every class. We also show empirically that these theoretical insights extend to the practical settings of image benchmarks modified to have multiple features. |
https://proceedings.mlr.press/v202/chidambaram23b.html | https://proceedings.mlr.press/v202/chidambaram23b/chidambaram23b.pdf | https://openreview.net/forum?id=TZvDKSg6im | Hiding Data Helps: On the Benefits of Masking for Sparse Coding | https://proceedings.mlr.press/v202/chidambaram23b.html | Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge | https://proceedings.mlr.press/v202/chidambaram23b.html | ICML 2023 | Sparse coding, which refers to modeling a signal as sparse linear combinations of the elements of a learned dictionary, has proven to be a successful (and interpretable) approach in applications such as signal processing, computer vision, and medical imaging. While this success has spurred much work on provable guarantees for dictionary recovery when the learned dictionary is the same size as the ground-truth dictionary, work on the setting where the learned dictionary is larger (or $\textit{over-realized}$) with respect to the ground truth is comparatively nascent. Existing theoretical results in this setting have been constrained to the case of noise-less data. We show in this work that, in the presence of noise, minimizing the standard dictionary learning objective can fail to recover the elements of the ground-truth dictionary in the over-realized regime, regardless of the magnitude of the signal in the data-generating process. Furthermore, drawing from the growing body of work on self-supervised learning, we propose a novel masking objective for which recovering the ground-truth dictionary is in fact optimal as the signal increases for a large class of data-generating processes. We corroborate our theoretical results with experiments across several parameter regimes showing that our proposed objective also enjoys better empirical performance than the standard reconstruction objective. |
https://proceedings.mlr.press/v202/chien23a.html | https://proceedings.mlr.press/v202/chien23a/chien23a.pdf | https://openreview.net/forum?id=ltFbrFDbld | PINA: Leveraging Side Information in eXtreme Multi-label Classification via Predicted Instance Neighborhood Aggregation | https://proceedings.mlr.press/v202/chien23a.html | Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, Hsiang-Fu Yu | https://proceedings.mlr.press/v202/chien23a.html | ICML 2023 | The eXtreme Multi-label Classification (XMC) problem seeks to find relevant labels from an exceptionally large label space. Most of the existing XMC learners focus on the extraction of semantic features from input query text. However, conventional XMC studies usually neglect the side information of instances and labels, which can be of use in many real-world applications such as recommendation systems and e-commerce product search. We propose Predicted Instance Neighborhood Aggregation (PINA), a data augmentation method for the general XMC problem that leverages beneficial side information. Unlike most existing XMC frameworks that treat labels and input instances as featureless indicators and independent entries, PINA extracts information from the label metadata and the correlations among training instances. Extensive experimental results demonstrate the consistent gain of PINA on various XMC tasks compared to the state-of-the-art methods: PINA offers a gain in accuracy compared to standard XR-Transformers on five public benchmark datasets. Moreover, PINA achieves a $\sim 5$% gain in accuracy on the largest dataset LF-AmazonTitles-1.3M. |
https://proceedings.mlr.press/v202/chiu23a.html | https://proceedings.mlr.press/v202/chiu23a/chiu23a.pdf | https://openreview.net/forum?id=SwWLzvsURq | Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations | https://proceedings.mlr.press/v202/chiu23a.html | Hong-Ming Chiu, Richard Y. Zhang | https://proceedings.mlr.press/v202/chiu23a.html | ICML 2023 | Adversarial training is well-known to produce high-quality neural network models that are empirically robust against adversarial perturbations. Nevertheless, once a model has been adversarially trained, one often desires a certification that the model is truly robust against all future attacks. Unfortunately, when faced with adversarially trained models, all existing approaches have significant trouble making certifications that are strong enough to be practically useful. Linear programming (LP) techniques in particular face a “convex relaxation barrier” that prevent them from making high-quality certifications, even after refinement with mixed-integer linear programming (MILP) and branch-and-bound (BnB) techniques. In this paper, we propose a nonconvex certification technique, based on a low-rank restriction of a semidefinite programming (SDP) relaxation. The nonconvex relaxation makes strong certifications comparable to much more expensive SDP methods, while optimizing over dramatically fewer variables comparable to much weaker LP methods. Despite nonconvexity, we show how off-the-shelf local optimization algorithms can be used to achieve and to certify global optimality in polynomial time. Our experiments find that the nonconvex relaxation almost completely closes the gap towards exact certification of adversarially trained models. |
https://proceedings.mlr.press/v202/cho23a.html | https://proceedings.mlr.press/v202/cho23a/cho23a.pdf | https://openreview.net/forum?id=S2hcTJB6fb | Neural Latent Aligner: Cross-trial Alignment for Learning Representations of Complex, Naturalistic Neural Data | https://proceedings.mlr.press/v202/cho23a.html | Cheol Jun Cho, Edward Chang, Gopala Anumanchipalli | https://proceedings.mlr.press/v202/cho23a.html | ICML 2023 | Understanding the neural implementation of complex human behaviors is one of the major goals in neuroscience. To this end, it is crucial to find a true representation of the neural data, which is challenging due to the high complexity of behaviors and the low signal-to-noise ratio (SNR) of the signals. Here, we propose a novel unsupervised learning framework, Neural Latent Aligner (NLA), to find well-constrained, behaviorally relevant neural representations of complex behaviors. The key idea is to align representations across repeated trials to learn cross-trial consistent information. Furthermore, we propose a novel, fully differentiable time warping model (TWM) to resolve the temporal misalignment of trials. When applied to intracranial electrocorticography (ECoG) of natural speaking, our model learns better representations for decoding behaviors than the baseline models, especially in lower dimensional space. The TWM is empirically validated by measuring behavioral coherence between aligned trials. The proposed framework learns more cross-trial consistent representations than the baselines, and when visualized, the manifold reveals shared neural trajectories across trials. |
https://proceedings.mlr.press/v202/cho23b.html | https://proceedings.mlr.press/v202/cho23b/cho23b.pdf | https://openreview.net/forum?id=d8LTNXt97w | On the Convergence of Federated Averaging with Cyclic Client Participation | https://proceedings.mlr.press/v202/cho23b.html | Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang | https://proceedings.mlr.press/v202/cho23b.html | ICML 2023 | Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL). Previous convergence analyses of FedAvg either assume full client participation or partial client participation where the clients can be uniformly sampled. However, in practical cross-device FL systems, only a subset of clients that satisfy local criteria such as battery status, network connectivity, and maximum participation frequency requirements (to ensure privacy) are available for training at a given time. As a result, client availability follows a natural cyclic pattern. We provide (to our knowledge) the first theoretical framework to analyze the convergence of FedAvg with cyclic client participation with several different client optimizers such as GD, SGD, and shuffled SGD. Our analysis discovers that cyclic client participation can achieve a faster asymptotic convergence rate than vanilla FedAvg with uniform client participation under suitable conditions, providing valuable insights into the design of client sampling protocols. |
https://proceedings.mlr.press/v202/choi23a.html | https://proceedings.mlr.press/v202/choi23a/choi23a.pdf | https://openreview.net/forum?id=LMay53U4ke | GREAD: Graph Neural Reaction-Diffusion Networks | https://proceedings.mlr.press/v202/choi23a.html | Jeongwhan Choi, Seoyoung Hong, Noseong Park, Sung-Bae Cho | https://proceedings.mlr.press/v202/choi23a.html | ICML 2023 | Graph neural networks (GNNs) are one of the most popular research topics for deep learning. GNN methods typically have been designed on top of the graph signal processing theory. In particular, diffusion equations have been widely used for designing the core processing layer of GNNs, and therefore they are inevitably vulnerable to the notorious oversmoothing problem. Recently, a couple of papers paid attention to reaction equations in conjunction with diffusion equations. However, they all consider limited forms of reaction equations. To this end, we present a reaction-diffusion equation-based GNN method that considers all popular types of reaction equations in addition to one special reaction equation designed by us. To our knowledge, our paper is one of the most comprehensive studies on reaction-diffusion equation-based GNNs. In our experiments with 9 datasets and 28 baselines, our method, called GREAD, outperforms them in a majority of cases. Further synthetic data experiments show that it mitigates the oversmoothing problem and works well for various homophily rates. |
https://proceedings.mlr.press/v202/choi23b.html | https://proceedings.mlr.press/v202/choi23b/choi23b.pdf | https://openreview.net/forum?id=JuNIuHLm9y | Is Overfitting Necessary for Implicit Video Representation? | https://proceedings.mlr.press/v202/choi23b.html | Hee Min Choi, Hyoa Kang, Dokwan Oh | https://proceedings.mlr.press/v202/choi23b.html | ICML 2023 | Compact representation of multimedia signals using implicit neural representations (INRs) has advanced significantly over the past few years, and recent works address their applications to video. Existing studies on video INR have focused on network architecture design as all video information is contained within network parameters. Here, we propose a new paradigm in efficient INR for videos based on the idea of the strong lottery ticket (SLT) hypothesis (Zhou et al., 2019), which demonstrates the possibility of finding an accurate subnetwork mask, called supermask, for a randomly initialized classification network without weight training. Specifically, we train multiple supermasks with a hierarchical structure for a randomly initialized image-wise video representation model without weight updates. Different from a previous approach employing hierarchical supermasks (Okoshi et al., 2022), a trainable scale parameter for each mask is used instead of multiplying by the same fixed scale for all levels. This simple modification widens the parameter search space to sufficiently explore various sparsity patterns, leading the proposed algorithm to find stronger subnetworks. Moreover, extensive experiments on the popular UVG benchmark show that random subnetworks obtained from our framework achieve higher reconstruction and visual quality than fully trained models with similar encoding sizes. Our study is the first to demonstrate the existence of SLTs in video INR models and propose an efficient method for finding them. |
https://proceedings.mlr.press/v202/choi23c.html | https://proceedings.mlr.press/v202/choi23c/choi23c.pdf | https://openreview.net/forum?id=wkr4r2Cw3i | Semi-Parametric Contextual Pricing Algorithm using Cox Proportional Hazards Model | https://proceedings.mlr.press/v202/choi23c.html | Young-Geun Choi, Gi-Soo Kim, Yunseo Choi, Wooseong Cho, Myunghee Cho Paik, Min-Hwan Oh | https://proceedings.mlr.press/v202/choi23c.html | ICML 2023 | Contextual dynamic pricing is a problem of setting prices based on current contextual information and previous sales history to maximize revenue. A popular approach is to postulate a distribution of customer valuation as a function of contextual information and the baseline valuation. A semi-parametric setting, where the context effect is parametric and the baseline is nonparametric, is of growing interest due to its flexibility. A challenge is that customer valuation is almost never observable in practice and is instead type-I interval censored by the offered price. To address this challenge, we propose a novel semi-parametric contextual pricing algorithm for stochastic contexts, called the epoch-based Cox proportional hazards Contextual Pricing (CoxCP) algorithm. To our best knowledge, our work is the first to employ the Cox model for customer valuation. The CoxCP algorithm has a high-probability regret upper bound of $\tilde{O}( T^{\frac{2}{3}}d )$, where $T$ is the length of horizon and $d$ is the dimension of context. In addition, if the baseline is known, the regret bound can improve to $O( d \log T )$ under certain assumptions. We demonstrate empirically the proposed algorithm performs better than existing semi-parametric contextual pricing algorithms when the model assumptions of all algorithms are correct. |
https://proceedings.mlr.press/v202/choi23d.html | https://proceedings.mlr.press/v202/choi23d/choi23d.pdf | https://openreview.net/forum?id=YR0TzWNzD8 | Restoration based Generative Models | https://proceedings.mlr.press/v202/choi23d.html | Jaemoo Choi, Yesom Park, Myungjoo Kang | https://proceedings.mlr.press/v202/choi23d.html | ICML 2023 | Denoising diffusion models (DDMs) have recently attracted increasing attention by showing impressive synthesis quality. DDMs are built on a diffusion process that pushes data to the noise distribution and the models learn to denoise. In this paper, we establish the interpretation of DDMs in terms of image restoration (IR). Integrating IR literature allows us to use an alternative objective and diverse forward processes, not confining to the diffusion process. By imposing prior knowledge on the loss function grounded on MAP-based estimation, we eliminate the need for the expensive sampling of DDMs. Also, we propose a multi-scale training, which improves the performance compared to the diffusion process, by taking advantage of the flexibility of the forward process. Experimental results demonstrate that our model improves the quality and efficiency of both training and inference. Furthermore, we show the applicability of our model to inverse problems. We believe that our framework paves the way for designing a new type of flexible general generative model. |
https://proceedings.mlr.press/v202/choi23e.html | https://proceedings.mlr.press/v202/choi23e/choi23e.pdf | https://openreview.net/forum?id=a33IYBCFey | Concept-based Explanations for Out-of-Distribution Detectors | https://proceedings.mlr.press/v202/choi23e.html | Jihye Choi, Jayaram Raghuram, Ryan Feng, Jiefeng Chen, Somesh Jha, Atul Prakash | https://proceedings.mlr.press/v202/choi23e.html | ICML 2023 | Out-of-distribution (OOD) detection plays a crucial role in ensuring the safe deployment of deep neural network (DNN) classifiers. While a myriad of methods have focused on improving the performance of OOD detectors, a critical gap remains in interpreting their decisions. We help bridge this gap by providing explanations for OOD detectors based on learned high-level concepts. We first propose two new metrics for assessing the effectiveness of a particular set of concepts for explaining OOD detectors: 1) detection completeness, which quantifies the sufficiency of concepts for explaining an OOD-detector’s decisions, and 2) concept separability, which captures the distributional separation between in-distribution and OOD data in the concept space. Based on these metrics, we propose an unsupervised framework for learning a set of concepts that satisfy the desired properties of high detection completeness and concept separability, and demonstrate its effectiveness in providing concept-based explanations for diverse off-the-shelf OOD detectors. We also show how to identify prominent concepts contributing to the detection results, and provide further reasoning about their decisions. |
https://proceedings.mlr.press/v202/choo23a.html | https://proceedings.mlr.press/v202/choo23a/choo23a.pdf | https://openreview.net/forum?id=u2Ap3vr5zQ | Active causal structure learning with advice | https://proceedings.mlr.press/v202/choo23a.html | Davin Choo, Themistoklis Gouleakis, Arnab Bhattacharyya | https://proceedings.mlr.press/v202/choo23a.html | ICML 2023 | We introduce the problem of active causal structure learning with advice. In the typical well-studied setting, the learning algorithm is given the essential graph for the observational distribution and is asked to recover the underlying causal directed acyclic graph (DAG) $G^*$ while minimizing the number of interventions made. In our setting, we are additionally given side information about $G^*$ as advice, e.g. a DAG $G$ purported to be $G^*$. We ask whether the learning algorithm can benefit from the advice when it is close to being correct, while still having worst-case guarantees even when the advice is arbitrarily bad. Our work is in the same space as the growing body of research on algorithms with predictions. When the advice is a DAG $G$, we design an adaptive search algorithm to recover $G^*$ whose intervention cost is at most $\mathcal{O}(\max\{1, \log \psi\})$ times the cost for verifying $G^*$; here, $\psi$ is a distance measure between $G$ and $G^*$ that is upper bounded by the number of variables $n$, and is exactly 0 when $G=G^*$. Our approximation factor matches the state-of-the-art for the advice-less setting. |
https://proceedings.mlr.press/v202/choo23b.html | https://proceedings.mlr.press/v202/choo23b/choo23b.pdf | https://openreview.net/forum?id=gnb9UUFqsc | New metrics and search algorithms for weighted causal DAGs | https://proceedings.mlr.press/v202/choo23b.html | Davin Choo, Kirankumar Shiragur | https://proceedings.mlr.press/v202/choo23b.html | ICML 2023 | Recovering causal relationships from data is an important problem. Using observational data, one can typically only recover causal graphs up to a Markov equivalence class and additional assumptions or interventional data are needed for complete recovery. In this work, under some standard assumptions, we study causal graph discovery via adaptive interventions with node-dependent interventional costs. For this setting, we show that no algorithm can achieve an approximation guarantee that is asymptotically better than linear in the number of vertices with respect to the verification number; a well-established benchmark for adaptive search algorithms. Motivated by this negative result, we define a new benchmark that captures the worst-case interventional cost for any search algorithm. Furthermore, with respect to this new benchmark, we provide adaptive search algorithms that achieve logarithmic approximations under various settings: atomic, bounded size interventions and generalized cost objectives. |
https://proceedings.mlr.press/v202/chopin23a.html | https://proceedings.mlr.press/v202/chopin23a/chopin23a.pdf | https://openreview.net/forum?id=HafOgQ1zW2 | Computational Doob h-transforms for Online Filtering of Discretely Observed Diffusions | https://proceedings.mlr.press/v202/chopin23a.html | Nicolas Chopin, Andras Fulop, Jeremy Heng, Alexandre H. Thiery | https://proceedings.mlr.press/v202/chopin23a.html | ICML 2023 | This paper is concerned with online filtering of discretely observed nonlinear diffusion processes. Our approach is based on the fully adapted auxiliary particle filter, which involves Doob’s $h$-transforms that are typically intractable. We propose a computational framework to approximate these $h$-transforms by solving the underlying backward Kolmogorov equations using nonlinear Feynman-Kac formulas and neural networks. The methodology allows one to train a locally optimal particle filter prior to the data-assimilation procedure. Numerical experiments illustrate that the proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters in the regime of highly informative observations, when the observations are extreme under the model, and if the state dimension is large. |
https://proceedings.mlr.press/v202/choquette-choo23a.html | https://proceedings.mlr.press/v202/choquette-choo23a/choquette-choo23a.pdf | https://openreview.net/forum?id=ZVxT2ToHR5 | Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning | https://proceedings.mlr.press/v202/choquette-choo23a.html | Christopher A. Choquette-Choo, Hugh Brendan Mcmahan, J Keith Rush, Abhradeep Guha Thakurta | https://proceedings.mlr.press/v202/choquette-choo23a.html | ICML 2023 | We introduce new differentially private (DP) mechanisms for gradient-based machine learning (ML) with multiple passes (epochs) over a dataset, substantially improving the achievable privacy-utility-computation tradeoffs. We formalize the problem of DP mechanisms for adaptive streams with multiple participations and introduce a non-trivial extension of online matrix factorization DP mechanisms to our setting. This includes establishing the necessary theory for sensitivity calculations and efficient computation of optimal matrices. For some applications like $>\!\! 10,000$ SGD steps, applying these optimal techniques becomes computationally expensive. We thus design an efficient Fourier-transform-based mechanism with only a minor utility loss. Extensive empirical evaluation on both example-level DP for image classification and user-level DP for language modeling demonstrate substantial improvements over all previous methods, including the widely-used DP-SGD. Though our primary application is to ML, our main DP results are applicable to arbitrary linear queries and hence may have much broader applicability. |
https://proceedings.mlr.press/v202/choromanski23a.html | https://proceedings.mlr.press/v202/choromanski23a/choromanski23a.pdf | https://openreview.net/forum?id=H21qm4xyk9 | Taming graph kernels with random features | https://proceedings.mlr.press/v202/choromanski23a.html | Krzysztof Marcin Choromanski | https://proceedings.mlr.press/v202/choromanski23a.html | ICML 2023 | We introduce in this paper the mechanism of graph random features (GRFs). GRFs can be used to construct unbiased randomized estimators of several important kernels defined on graphs’ nodes, in particular the regularized Laplacian kernel. As with regular RFs for non-graph kernels, they provide a means to scale up kernel methods defined on graphs to larger networks. Importantly, they also give substantial computational gains for smaller graphs when applied in downstream applications. Consequently, GRFs address the notoriously difficult problem of the cubic (in the number of nodes of the graph) time complexity of graph kernel algorithms. We provide a detailed theoretical analysis of GRFs and an extensive empirical evaluation: from speed tests, through Frobenius relative error analysis, to k-means graph-clustering with graph kernels. We show that the computation of GRFs admits an embarrassingly simple distributed algorithm that can be applied if the graph under consideration needs to be split across several machines. We also introduce a (still unbiased) quasi Monte Carlo variant of GRFs, q-GRFs, relying on so-called reinforced random walks, which might be used to optimize the variance of GRFs. As a byproduct, we obtain a novel approach to solving certain classes of linear equations with positive and symmetric matrices. |
https://proceedings.mlr.press/v202/choromanski23b.html | https://proceedings.mlr.press/v202/choromanski23b/choromanski23b.pdf | https://openreview.net/forum?id=Y5jGkbZ0W3 | Efficient Graph Field Integrators Meet Point Clouds | https://proceedings.mlr.press/v202/choromanski23b.html | Krzysztof Marcin Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii Likhosherstov, Somnath Basu Roy Chowdhury, Kumar Avinava Dubey, Deepali Jain, Tamas Sarlos, Snigdha Chaturvedi, Adrian Weller | https://proceedings.mlr.press/v202/choromanski23b.html | ICML 2023 | We present two new classes of algorithms for efficient field integration on graphs encoding point cloud data. The first class, $\mathrm{SeparatorFactorization}$ (SF), leverages the bounded genus of point cloud mesh graphs, while the second class, $\mathrm{RFDiffusion}$ (RFD), uses popular $\epsilon$-nearest-neighbor graph representations for point clouds. Both can be viewed as providing the functionality of Fast Multipole Methods (FMMs), which have had a tremendous impact on efficient integration, but for non-Euclidean spaces. We focus on geometries induced by distributions of walk lengths between points (e.g. shortest-path distance). We provide an extensive theoretical analysis of our algorithms, obtaining new results in structural graph theory as a byproduct. We also perform exhaustive empirical evaluation, including on-surface interpolation for rigid and deformable objects (in particular for mesh-dynamics modeling) as well as Wasserstein distance computations for point clouds, including the Gromov-Wasserstein variant. |
https://proceedings.mlr.press/v202/choshen23a.html | https://proceedings.mlr.press/v202/choshen23a/choshen23a.pdf | https://openreview.net/forum?id=EHgAM1xnWv | ContraBAR: Contrastive Bayes-Adaptive Deep RL | https://proceedings.mlr.press/v202/choshen23a.html | Era Choshen, Aviv Tamar | https://proceedings.mlr.press/v202/choshen23a.html | ICML 2023 | In meta reinforcement learning (meta RL), an agent seeks a Bayes-optimal policy – the optimal policy when facing an unknown task that is sampled from some known task distribution. Previous approaches tackled this problem by inferring a $\textit{belief}$ over task parameters, using variational inference methods. Motivated by recent successes of contrastive learning approaches in RL, such as contrastive predictive coding (CPC), we investigate whether contrastive methods can be used for learning Bayes-optimal behavior. We begin by proving that representations learned by CPC are indeed sufficient for Bayes optimality. Based on this observation, we propose a simple meta RL algorithm that uses CPC in lieu of variational belief inference. Our method, $\textit{ContraBAR}$, achieves comparable performance to state-of-the-art in domains with state-based observation and circumvents the computational toll of future observation reconstruction, enabling learning in domains with image-based observations. It can also be combined with image augmentations for domain randomization and used seamlessly in both online and offline meta RL settings. |
https://proceedings.mlr.press/v202/chourasia23a.html | https://proceedings.mlr.press/v202/chourasia23a/chourasia23a.pdf | https://openreview.net/forum?id=aOU7OvlxeJ | Forget Unlearning: Towards True Data-Deletion in Machine Learning | https://proceedings.mlr.press/v202/chourasia23a.html | Rishav Chourasia, Neil Shah | https://proceedings.mlr.press/v202/chourasia23a.html | ICML 2023 | Unlearning algorithms aim to remove deleted data’s influence from trained models at a cost lower than full retraining. However, prior guarantees of unlearning in the literature are flawed and don’t protect the privacy of deleted records. We show that when people delete their data as a function of published models, records in a database become interdependent. So, even retraining a fresh model after deletion of a record doesn’t ensure its privacy. Secondly, unlearning algorithms that cache partial computations to speed up the processing can leak deleted information over a series of releases, violating the privacy of deleted records in the long run. To address these issues, we propose a sound deletion guarantee and show that ensuring the privacy of existing records is necessary for the privacy of deleted records. Under this notion, we propose an optimal, computationally efficient, and sound machine unlearning algorithm based on noisy gradient descent. |
https://proceedings.mlr.press/v202/chowdhury23a.html | https://proceedings.mlr.press/v202/chowdhury23a/chowdhury23a.pdf | https://openreview.net/forum?id=kNzaZ0jbIg | Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks | https://proceedings.mlr.press/v202/chowdhury23a.html | Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen | https://proceedings.mlr.press/v202/chowdhury23a.html | ICML 2023 | In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction. The recently proposed patch-level routing in MoE (pMoE) divides each input into $n$ patches (or tokens) and sends $l$ patches ($l\ll n$) to each expert through prioritized routing. pMoE has demonstrated great empirical success in reducing training and inference costs while maintaining test accuracy. However, the theoretical explanation of pMoE and the general MoE remains elusive. Focusing on a supervised classification task using a mixture of two-layer convolutional neural networks (CNNs), we show for the first time that pMoE provably reduces the required number of training samples to achieve desirable generalization (referred to as the sample complexity) by a factor in the polynomial order of $n/l$, and outperforms its single-expert counterpart of the same or even larger capacity. The advantage results from the discriminative routing property, which is justified in both theory and practice that pMoE routers can filter label-irrelevant patches and route similar class-discriminative patches to the same expert. Our experimental results on MNIST, CIFAR-10, and CelebA support our theoretical findings on pMoE’s generalization and show that pMoE can avoid learning spurious correlations. |
https://proceedings.mlr.press/v202/chowers23a.html | https://proceedings.mlr.press/v202/chowers23a/chowers23a.pdf | https://openreview.net/forum?id=RJGad2VFYk | What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective | https://proceedings.mlr.press/v202/chowers23a.html | Rhea Chowers, Yair Weiss | https://proceedings.mlr.press/v202/chowers23a.html | ICML 2023 | It has previously been reported that the representation that is learned in the first layer of deep Convolutional Neural Networks (CNNs) is highly consistent across initializations and architectures. In this work, we quantify this consistency by considering the first layer as a filter bank and measuring its energy distribution. We find that the energy distribution is very different from that of the initial weights and is remarkably consistent across random initializations, datasets, architectures and even when the CNNs are trained with random labels. In order to explain this consistency, we derive an analytical formula for the energy profile of linear CNNs and show that this profile is mostly dictated by the second order statistics of image patches in the training set and it will approach a whitening transformation when the number of iterations goes to infinity. Finally, we show that this formula for linear CNNs also gives an excellent fit for the energy profiles learned by commonly used nonlinear CNNs such as ResNet and VGG, and that the first layer of these CNNs indeed performs approximate whitening of their inputs. |
https://proceedings.mlr.press/v202/christofidellis23a.html | https://proceedings.mlr.press/v202/christofidellis23a/christofidellis23a.pdf | https://openreview.net/forum?id=7TLjOO4cvm | Unifying Molecular and Textual Representations via Multi-task Language Modelling | https://proceedings.mlr.press/v202/christofidellis23a.html | Dimitrios Christofidellis, Giorgio Giannone, Jannis Born, Ole Winther, Teodoro Laino, Matteo Manica | https://proceedings.mlr.press/v202/christofidellis23a.html | ICML 2023 | The recent advances in neural language models have also been successfully applied to the field of chemistry, offering generative solutions for classical problems in molecular design and synthesis planning. These new methods have the potential to fuel a new era of data-driven automation in scientific discovery. However, specialized models are still typically required for each task, leading to the need for problem-specific fine-tuning and neglecting task interrelations. The main obstacle in this field is the lack of a unified representation between natural language and chemical representations, complicating and limiting human-machine interaction. Here, we propose the first multi-domain, multi-task language model that can solve a wide range of tasks in both the chemical and natural language domains. Our model can handle chemical and natural language concurrently, without requiring expensive pre-training on single domains or task-specific models. Interestingly, sharing weights across domains remarkably improves our model when benchmarked against state-of-the-art baselines on single-domain and cross-domain tasks. In particular, sharing information across domains and tasks gives rise to large improvements in cross-domain tasks, the magnitude of which increases with scale, as measured by more than a dozen relevant metrics. Our work suggests that such models can robustly and efficiently accelerate discovery in physical sciences by superseding problem-specific fine-tuning and enhancing human-model interactions. |
https://proceedings.mlr.press/v202/chu23a.html | https://proceedings.mlr.press/v202/chu23a/chu23a.pdf | https://openreview.net/forum?id=Z1I4WrV5TG | Wasserstein Barycenter Matching for Graph Size Generalization of Message Passing Neural Networks | https://proceedings.mlr.press/v202/chu23a.html | Xu Chu, Yujie Jin, Xin Wang, Shanghang Zhang, Yasha Wang, Wenwu Zhu, Hong Mei | https://proceedings.mlr.press/v202/chu23a.html | ICML 2023 | Graph size generalization is hard for message passing neural networks (MPNNs). The graph-level classification performance of MPNNs degrades across various graph sizes. Recently, theoretical studies reveal that a slow, uncontrollable convergence rate w.r.t. graph size could adversely affect the size generalization. To address the uncontrollable convergence rate caused by correlations across nodes in the underlying dimensional signal-generating space, we propose to use Wasserstein barycenters as graph-level consensus to combat node-level correlations. Methodologically, we propose a Wasserstein barycenter matching (WBM) layer that represents an input graph by Wasserstein distances between its MPNN-filtered node embeddings and some learned class-wise barycenters. Theoretically, we show that the convergence rate of an MPNN with a WBM layer is controllable and independent of the dimensionality of the signal-generating space. Thus MPNNs with WBM layers are less susceptible to slow, uncontrollable convergence rates and size variations. Empirically, the WBM layer improves the size generalization over vanilla MPNNs with different backbones (e.g., GCN, GIN, and PNA) significantly on real-world graph datasets. |
https://proceedings.mlr.press/v202/chu23b.html | https://proceedings.mlr.press/v202/chu23b/chu23b.pdf | https://openreview.net/forum?id=IkSGn9fcPz | Shape-Guided Dual-Memory Learning for 3D Anomaly Detection | https://proceedings.mlr.press/v202/chu23b.html | Yu-Min Chu, Chieh Liu, Ting-I Hsieh, Hwann-Tzong Chen, Tyng-Luh Liu | https://proceedings.mlr.press/v202/chu23b.html | ICML 2023 | We present a shape-guided expert-learning framework to tackle the problem of unsupervised 3D anomaly detection. Our method is established on the effectiveness of two specialized expert models and their synergy to localize anomalous regions from color and shape modalities. The first expert utilizes geometric information to probe 3D structural anomalies by modeling the implicit distance fields around local shapes. The second expert considers the 2D RGB features associated with the first expert to identify color appearance irregularities on the local shapes. We use the two experts to build the dual memory banks from the anomaly-free training samples and perform shape-guided inference to pinpoint the defects in the testing samples. Owing to the per-point 3D representation and the effective fusion scheme of complementary modalities, our method efficiently achieves state-of-the-art performance on the MVTec 3D-AD dataset with better recall and lower false positive rates, as preferred in real applications. |
https://proceedings.mlr.press/v202/chu23c.html | https://proceedings.mlr.press/v202/chu23c/chu23c.pdf | https://openreview.net/forum?id=FQlsEvyQ4N | Multiply Robust Off-policy Evaluation and Learning under Truncation by Death | https://proceedings.mlr.press/v202/chu23c.html | Jianing Chu, Shu Yang, Wenbin Lu | https://proceedings.mlr.press/v202/chu23c.html | ICML 2023 | Typical off-policy evaluation (OPE) and off-policy learning (OPL) are not well-defined problems under "truncation by death", where the outcome of interest is not defined after some events, such as death. The standard OPE no longer yields consistent estimators, and the standard OPL results in suboptimal policies. In this paper, we formulate OPE and OPL using principal stratification under "truncation by death". We propose a survivor value function for a subpopulation whose outcomes are always defined regardless of treatment conditions. We establish a novel identification strategy under principal ignorability, and derive the semiparametric efficiency bound of an OPE estimator. Then, we propose multiply robust estimators for OPE and OPL. We show that the proposed estimators are consistent and asymptotically normal even with flexible semi/nonparametric models for nuisance functions approximation. Moreover, under mild rate conditions of nuisance functions approximation, the estimators achieve the semiparametric efficiency bound. Finally, we conduct experiments to demonstrate the empirical performance of the proposed estimators. |
https://proceedings.mlr.press/v202/chuang23a.html | https://proceedings.mlr.press/v202/chuang23a/chuang23a.pdf | https://openreview.net/forum?id=m2BVUzNzKJ | InfoOT: Information Maximizing Optimal Transport | https://proceedings.mlr.press/v202/chuang23a.html | Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis | https://proceedings.mlr.press/v202/chuang23a.html | ICML 2023 | Optimal transport aligns samples across distributions by minimizing the transportation cost between them, e.g., the geometric distances. Yet, it ignores coherence structure in the data such as clusters, does not handle outliers well, and cannot integrate new data points. To address these drawbacks, we propose InfoOT, an information-theoretic extension of optimal transport that maximizes the mutual information between domains while minimizing geometric distances. The resulting objective can still be formulated as a (generalized) optimal transport problem, and can be efficiently solved by projected gradient descent. This formulation yields a new projection method that is robust to outliers and generalizes to unseen samples. Empirically, InfoOT improves the quality of alignments across benchmarks in domain adaptation, cross-domain retrieval, and single-cell alignment. |
https://proceedings.mlr.press/v202/chughtai23a.html | https://proceedings.mlr.press/v202/chughtai23a/chughtai23a.pdf | https://openreview.net/forum?id=jCOrkuUpss | A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations | https://proceedings.mlr.press/v202/chughtai23a.html | Bilal Chughtai, Lawrence Chan, Neel Nanda | https://proceedings.mlr.press/v202/chughtai23a.html | ICML 2023 | Universality is a key hypothesis in mechanistic interpretability – that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small networks learn to implement group compositions. We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory. We then show that these networks consistently learn this algorithm by reverse engineering model logits and weights, and confirm our understanding using ablations. By studying networks trained on various groups and architectures, we find mixed evidence for universality: using our algorithm, we can completely characterize the family of circuits and features that networks learn on this task, but for a given network the precise circuits learned – as well as the order they develop – are arbitrary. |
https://proceedings.mlr.press/v202/clarkson23a.html | https://proceedings.mlr.press/v202/clarkson23a/clarkson23a.pdf | https://openreview.net/forum?id=1e80ooimrm | Distribution Free Prediction Sets for Node Classification | https://proceedings.mlr.press/v202/clarkson23a.html | Jase Clarkson | https://proceedings.mlr.press/v202/clarkson23a.html | ICML 2023 | Graph Neural Networks (GNNs) are able to achieve high classification accuracy on many important real world datasets, but provide no rigorous notion of predictive uncertainty. Quantifying the confidence of GNN models is difficult due to the dependence between datapoints induced by the graph structure. We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios. We do this by taking an existing approach for conformal classification that relies on exchangeable data and modifying it by appropriately weighting the conformal scores to reflect the network structure. We show through experiments on standard benchmark datasets using popular GNN models that our approach provides tighter and better calibrated prediction sets than a naive application of conformal prediction. |
https://proceedings.mlr.press/v202/cohen23a.html | https://proceedings.mlr.press/v202/cohen23a/cohen23a.pdf | https://openreview.net/forum?id=uBuWtVGF3h | Sequential Strategic Screening | https://proceedings.mlr.press/v202/cohen23a.html | Lee Cohen, Saeed Sharifi -Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani | https://proceedings.mlr.press/v202/cohen23a.html | ICML 2023 | We initiate the study of strategic behavior in screening processes with multiple classifiers. We focus on two contrasting settings: a "conjunctive” setting in which an individual must satisfy all classifiers simultaneously, and a sequential setting in which an individual must satisfy the classifiers one at a time in order to succeed. In other words, we introduce the combination of strategic classification with screening processes. We show that sequential screening pipelines exhibit new and surprising behavior in which individuals can exploit the sequential ordering of the tests to "zig-zag” between classifiers without having to simultaneously satisfy all of them. We demonstrate that an individual can obtain a positive outcome using a limited manipulation budget even when far from the intersection of the positive regions of every classifier. Finally, we consider a learner whose goal is to design a sequential screening process that is robust to such manipulations, and we provide a construction for the learner that optimizes a natural objective. |
https://proceedings.mlr.press/v202/cohen23b.html | https://proceedings.mlr.press/v202/cohen23b/cohen23b.pdf | https://openreview.net/forum?id=0lufU7dRWA | Few-Sample Feature Selection via Feature Manifold Learning | https://proceedings.mlr.press/v202/cohen23b.html | David Cohen, Tal Shnitzer, Yuval Kluger, Ronen Talmon | https://proceedings.mlr.press/v202/cohen23b.html | ICML 2023 | In this paper, we present a new method for few-sample supervised feature selection (FS). Our method first learns the manifold of the feature space of each class using kernels capturing multi-feature associations. Then, based on Riemannian geometry, a composite kernel is computed, extracting the differences between the learned feature associations. Finally, a FS score based on spectral analysis is proposed. Considering multi-feature associations makes our method multivariate by design. This in turn allows for the extraction of the hidden manifold underlying the features and avoids overfitting, facilitating few-sample FS. We showcase the efficacy of our method on illustrative examples and several benchmarks, where our method demonstrates higher accuracy in selecting the informative features compared to competing methods. In addition, we show that our FS leads to improved classification and better generalization when applied to test data. |
https://proceedings.mlr.press/v202/cole23a.html | https://proceedings.mlr.press/v202/cole23a/cole23a.pdf | https://openreview.net/forum?id=C6IDiP5Or9 | Spatial Implicit Neural Representations for Global-Scale Species Mapping | https://proceedings.mlr.press/v202/cole23a.html | Elijah Cole, Grant Van Horn, Christian Lange, Alexander Shepard, Patrick Leary, Pietro Perona, Scott Loarie, Oisin Mac Aodha | https://proceedings.mlr.press/v202/cole23a.html | ICML 2023 | Estimating the geographical range of a species from sparse observations is a challenging and important geospatial prediction problem. Given a set of locations where a species has been observed, the goal is to build a model to predict whether the species is present or absent at any location. This problem has a long history in ecology, but traditional methods struggle to take advantage of emerging large-scale crowdsourced datasets which can include tens of millions of records for hundreds of thousands of species. In this work, we use Spatial Implicit Neural Representations (SINRs) to jointly estimate the geographical range of 47k species simultaneously. We find that our approach scales gracefully, making increasingly better predictions as we increase the number of species and the amount of data per species when training. To make this problem accessible to machine learning researchers, we provide four new benchmarks that measure different aspects of species range estimation and spatial representation learning. Using these benchmarks, we demonstrate that noisy and biased crowdsourced data can be combined with implicit neural representations to approximate expert-developed range maps for many species. |
https://proceedings.mlr.press/v202/coletta23a.html | https://proceedings.mlr.press/v202/coletta23a/coletta23a.pdf | https://openreview.net/forum?id=1s3P1SjAsF | K-SHAP: Policy Clustering Algorithm for Anonymous Multi-Agent State-Action Pairs | https://proceedings.mlr.press/v202/coletta23a.html | Andrea Coletta, Svitlana Vyetrenko, Tucker Balch | https://proceedings.mlr.press/v202/coletta23a.html | ICML 2023 | Learning agent behaviors from observational data has shown to improve our understanding of their decision-making processes, advancing our ability to explain their interactions with the environment and other agents. While multiple learning techniques have been proposed in the literature, there is one particular setting that has not been explored yet: multi agent systems where agent identities remain anonymous. For instance, in financial markets labeled data that identifies market participant strategies is typically proprietary, and only the anonymous state-action pairs that result from the interaction of multiple market participants are publicly available. As a result, sequences of agent actions are not observable, restricting the applicability of existing work. In this paper, we propose a Policy Clustering algorithm, called K-SHAP, that learns to group anonymous state-action pairs according to the agent policies. We frame the problem as an Imitation Learning (IL) task, and we learn a world-policy able to mimic all the agent behaviors upon different environmental states. We leverage the world-policy to explain each anonymous observation through an additive feature attribution method called SHAP (SHapley Additive exPlanations). Finally, by clustering the explanations we show that we are able to identify different agent policies and group observations accordingly. We evaluate our approach on simulated synthetic market data and a real-world financial dataset. We show that our proposal significantly and consistently outperforms the existing methods, identifying different agent strategies. |
https://proceedings.mlr.press/v202/comas23a.html | https://proceedings.mlr.press/v202/comas23a/comas23a.pdf | https://openreview.net/forum?id=Iwt7oI9cNb | Inferring Relational Potentials in Interacting Systems | https://proceedings.mlr.press/v202/comas23a.html | Armand Comas, Yilun Du, Christian Fernandez Lopez, Sandesh Ghimire, Mario Sznaier, Joshua B. Tenenbaum, Octavia Camps | https://proceedings.mlr.press/v202/comas23a.html | ICML 2023 | Systems consisting of interacting agents are prevalent in the world, ranging from dynamical systems in physics to complex biological networks. To build systems which can interact robustly in the real world, it is thus important to be able to infer the precise interactions governing such systems. Existing approaches typically discover such interactions by explicitly modeling the feed-forward dynamics of the trajectories. In this work, we propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions that enables greater flexibility in trajectory modeling: it discovers a set of relational potentials, represented as energy functions, which when minimized reconstruct the original trajectory. NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed. We illustrate that with these representations NIIP displays unique capabilities in test-time. First, it allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting. Additionally, it allows adding external hand-crafted potentials at test-time. Finally, NIIP enables the detection of out-of-distribution samples and anomalies without explicit training. |
https://proceedings.mlr.press/v202/connolly23a.html | https://proceedings.mlr.press/v202/connolly23a/connolly23a.pdf | https://openreview.net/forum?id=jawDXfCldp | Task-specific experimental design for treatment effect estimation | https://proceedings.mlr.press/v202/connolly23a.html | Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, Christopher Frye | https://proceedings.mlr.press/v202/connolly23a.html | ICML 2023 | Understanding causality should be a core requirement of any attempt to build real impact through AI. Due to the inherent unobservability of counterfactuals, large randomised trials (RCTs) are the standard for causal inference. But large experiments are generically expensive, and randomisation carries its own costs, e.g. when suboptimal decisions are trialed. Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought. In this work, we develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications. Across a range of important tasks, real-world datasets, and sample sizes, our method outperforms other benchmarks, e.g. requiring an order-of-magnitude less data to match RCT performance on targeted marketing tasks. |
https://proceedings.mlr.press/v202/cornacchia23a.html | https://proceedings.mlr.press/v202/cornacchia23a/cornacchia23a.pdf | https://openreview.net/forum?id=EgRfH4jeTL | A Mathematical Model for Curriculum Learning for Parities | https://proceedings.mlr.press/v202/cornacchia23a.html | Elisabetta Cornacchia, Elchanan Mossel | https://proceedings.mlr.press/v202/cornacchia23a.html | ICML 2023 | Curriculum learning (CL) - training using samples that are generated and presented in a meaningful order - was introduced in the machine learning context around a decade ago. While CL has been extensively used and analysed empirically, there has been very little mathematical justification for its advantages. We introduce a CL model for learning the class of k-parities on d bits of a binary string with a neural network trained by stochastic gradient descent (SGD). We show that a wise choice of training examples, involving two or more product distributions, allows us to significantly reduce the computational cost of learning this class of functions, compared to learning under the uniform distribution. We conduct experiments to support our analysis. Furthermore, we show that for another class of functions - namely the ‘Hamming mixtures’ - CL strategies involving a bounded number of product distributions are not beneficial. |
https://proceedings.mlr.press/v202/covert23a.html | https://proceedings.mlr.press/v202/covert23a/covert23a.pdf | https://openreview.net/forum?id=dOaCuOsdmb | Learning to Maximize Mutual Information for Dynamic Feature Selection | https://proceedings.mlr.press/v202/covert23a.html | Ian Connick Covert, Wei Qiu, Mingyu Lu, Na Yoon Kim, Nathan J White, Su-In Lee | https://proceedings.mlr.press/v202/covert23a.html | ICML 2023 | Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning, but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality, and it outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem. |
https://proceedings.mlr.press/v202/cui23a.html | https://proceedings.mlr.press/v202/cui23a/cui23a.pdf | https://openreview.net/forum?id=3Loamzk5Fm | Rethinking Weak Supervision in Helping Contrastive Learning | https://proceedings.mlr.press/v202/cui23a.html | Jingyi Cui, Weiran Huang, Yifei Wang, Yisen Wang | https://proceedings.mlr.press/v202/cui23a.html | ICML 2023 | Contrastive learning has shown outstanding performances in both supervised and unsupervised learning, and has recently been introduced to solve weakly supervised learning problems such as semi-supervised learning and noisy label learning. Despite the empirical evidence showing that semi-supervised labels improve the representations of contrastive learning, it remains unknown if noisy supervised information can be directly used in training instead of after manual denoising. Therefore, to explore the mechanical differences between semi-supervised and noisy-labeled information in helping contrastive learning, we establish a unified theoretical framework of contrastive learning under weak supervision. Specifically, we investigate the most intuitive paradigm of jointly training supervised and unsupervised contrastive losses. By translating the weakly supervised information into a similarity graph under the framework of spectral clustering based on the posterior probability of weak labels, we establish the downstream classification error bound. We prove that semi-supervised labels improve the downstream error bound whereas noisy labels have limited effects under such a paradigm. Our theoretical findings here provide new insights for the community to rethink the role of weak supervision in helping contrastive learning. |
https://proceedings.mlr.press/v202/cui23b.html | https://proceedings.mlr.press/v202/cui23b/cui23b.pdf | https://openreview.net/forum?id=CXkJh2ITml | Bayes-optimal Learning of Deep Random Networks of Extensive-width | https://proceedings.mlr.press/v202/cui23b.html | Hugo Cui, Florent Krzakala, Lenka Zdeborova | https://proceedings.mlr.press/v202/cui23b.html | ICML 2023 | We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width are proportionally large and propose a closed-form expression for the Bayes-optimal test error, for regression and classification tasks. We further compute closed-form expressions for the test errors of ridge regression, kernel and random features regression. We find, in particular, that optimally regularized ridge regression, as well as kernel regression, achieve Bayes-optimal performances, while the logistic loss yields a near-optimal test error for classification. We further show numerically that when the number of samples grows faster than the dimension, ridge and kernel methods become suboptimal, while neural networks achieve test error close to zero from quadratically many samples. |
https://proceedings.mlr.press/v202/cui23c.html | https://proceedings.mlr.press/v202/cui23c/cui23c.pdf | https://openreview.net/forum?id=2rNiCN94NY | A General Representation Learning Framework with Generalization Performance Guarantees | https://proceedings.mlr.press/v202/cui23c.html | Junbiao Cui, Jianqing Liang, Qin Yue, Jiye Liang | https://proceedings.mlr.press/v202/cui23c.html | ICML 2023 | The generalization performance of machine learning methods depends heavily on the quality of data representation. However, existing research rarely considers representation learning from the perspective of generalization error. In this paper, we prove that the generalization error of a representation learning function can be estimated effectively by solving two convex optimization problems. Based on this, we propose a general representation learning framework. We then apply the proposed framework to the two most commonly used nonlinear mapping methods, i.e., the kernel-based method and the deep neural network (DNN), and thus design a kernel selection method and a DNN boosting framework, respectively. Finally, extensive experiments verify the effectiveness of the proposed methods. |
https://proceedings.mlr.press/v202/cui23d.html | https://proceedings.mlr.press/v202/cui23d/cui23d.pdf | https://openreview.net/forum?id=MZkbgahv4a | IRNeXt: Rethinking Convolutional Network Design for Image Restoration | https://proceedings.mlr.press/v202/cui23d.html | Yuning Cui, Wenqi Ren, Sining Yang, Xiaochun Cao, Alois Knoll | https://proceedings.mlr.press/v202/cui23d.html | ICML 2023 | We present IRNeXt, a simple yet effective convolutional network architecture for image restoration. Recently, Transformer models have dominated the field of image restoration due to their powerful ability to model long-range pixel interactions. In this paper, we explore the potential of the convolutional neural network (CNN) and show that our CNN-based model can achieve comparable or better performance than Transformer models with low computation overhead on several image restoration tasks. By re-examining the characteristics possessed by advanced image restoration algorithms, we identify several key factors leading to the performance improvement of restoration models. This motivates us to develop a novel network for image restoration based on cheap convolution operators. Comprehensive experiments demonstrate that IRNeXt delivers state-of-the-art performance on numerous datasets across a range of image restoration tasks with low computational complexity, including image dehazing, single-image defocus/motion deblurring, image deraining, and image desnowing. https://github.com/c-yn/IRNeXt. |
https://proceedings.mlr.press/v202/cui23e.html | https://proceedings.mlr.press/v202/cui23e/cui23e.pdf | https://openreview.net/forum?id=ccwSdYv1GI | Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory | https://proceedings.mlr.press/v202/cui23e.html | Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh | https://proceedings.mlr.press/v202/cui23e.html | ICML 2023 | Dataset Distillation is a newly emerging area that aims to distill large datasets into much smaller and highly informative synthetic ones to accelerate training and reduce storage. Among various dataset distillation methods, trajectory-matching-based methods (MTT) have achieved SOTA performance in many tasks, e.g., on CIFAR-10/100. However, due to exorbitant memory consumption when unrolling optimization through SGD steps, MTT fails to scale to large-scale datasets such as ImageNet-1K. Can we scale this SOTA method to ImageNet-1K and does its effectiveness on CIFAR transfer to ImageNet-1K? To answer these questions, we first propose a procedure to exactly compute the unrolled gradient with constant memory complexity, which allows us to scale MTT to ImageNet-1K seamlessly with $\sim 6$x reduction in memory footprint. We further discover that it is challenging for MTT to handle datasets with a large number of classes, and propose a novel soft label assignment that drastically improves its convergence. The resulting algorithm sets new SOTA on ImageNet-1K: we can scale up to 50 IPCs (Image Per Class) on ImageNet-1K on a single GPU (all previous methods can only scale to 2 IPCs on ImageNet-1K), leading to the best accuracy (only 5.9% accuracy drop against full dataset training) while utilizing only 4.2% of the number of data points - an 18.2% absolute gain over prior SOTA. |
https://proceedings.mlr.press/v202/cui23f.html | https://proceedings.mlr.press/v202/cui23f/cui23f.pdf | https://openreview.net/forum?id=SAgKrtDkn1 | Learning Dynamic Query Combinations for Transformer-based Object Detection and Segmentation | https://proceedings.mlr.press/v202/cui23f.html | Yiming Cui, Linjie Yang, Haichao Yu | https://proceedings.mlr.press/v202/cui23f.html | ICML 2023 | Transformer-based detection and segmentation methods use a list of learned detection queries to retrieve information from the transformer network and learn to predict the location and category of one specific object from each query. We empirically find that random convex combinations of the learned queries are still good for the corresponding models. We then propose to learn a convex combination with dynamic coefficients based on the high-level semantics of the image. The generated dynamic queries, named as modulated queries, better capture the prior of object locations and categories in the different images. Equipped with our modulated queries, a wide range of DETR-based models achieve consistent and superior performance across multiple tasks (object detection, instance segmentation, panoptic segmentation) and on different benchmarks (MS COCO, CityScapes, YoutubeVIS). |
https://proceedings.mlr.press/v202/curth23a.html | https://proceedings.mlr.press/v202/curth23a/curth23a.pdf | https://openreview.net/forum?id=BGv7lLQVWk | Adaptive Identification of Populations with Treatment Benefit in Clinical Trials: Machine Learning Challenges and Solutions | https://proceedings.mlr.press/v202/curth23a.html | Alicia Curth, Alihan Hüyük, Mihaela Van Der Schaar | https://proceedings.mlr.press/v202/curth23a.html | ICML 2023 | We study the problem of adaptively identifying patient subpopulations that benefit from a given treatment during a confirmatory clinical trial. This type of adaptive clinical trial has been thoroughly studied in biostatistics, but has been allowed only limited adaptivity so far. Here, we aim to relax classical restrictions on such designs and investigate how to incorporate ideas from the recent machine learning literature on adaptive and online experimentation to make trials more flexible and efficient. We find that the unique characteristics of the subpopulation selection problem – most importantly that (i) one is usually interested in finding subpopulations with any treatment benefit (and not necessarily the single subgroup with largest effect) given a limited budget and that (ii) effectiveness only has to be demonstrated across the subpopulation on average – give rise to interesting challenges and new desiderata when designing algorithmic solutions. Building on these findings, we propose AdaGGI and AdaGCPI, two meta-algorithms for subpopulation construction. We empirically investigate their performance across a range of simulation scenarios and derive insights into their (dis)advantages across different settings. |
https://proceedings.mlr.press/v202/curth23b.html | https://proceedings.mlr.press/v202/curth23b/curth23b.pdf | https://openreview.net/forum?id=fOSihVI1FW | In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation | https://proceedings.mlr.press/v202/curth23b.html | Alicia Curth, Mihaela Van Der Schaar | https://proceedings.mlr.press/v202/curth23b.html | ICML 2023 | Personalized treatment effect estimates are often of interest in high-stakes applications – thus, before deploying a model estimating such effects in practice, one needs to be sure that the best candidate from the ever-growing machine learning toolbox for this task was chosen. Unfortunately, due to the absence of counterfactual information in practice, it is usually not possible to rely on standard validation metrics for doing so, leading to a well-known model selection dilemma in the treatment effect estimation literature. While some solutions have recently been investigated, systematic understanding of the strengths and weaknesses of different model selection criteria is still lacking. In this paper, instead of attempting to declare a global ‘winner’, we therefore empirically investigate success- and failure modes of different selection criteria. We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them, and provide interesting insights into the relative (dis)advantages of different criteria alongside desiderata for the design of further illuminating empirical studies in this context. |
https://proceedings.mlr.press/v202/cutkosky23a.html | https://proceedings.mlr.press/v202/cutkosky23a/cutkosky23a.pdf | https://openreview.net/forum?id=GimajxXNc0 | Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion | https://proceedings.mlr.press/v202/cutkosky23a.html | Ashok Cutkosky, Harsh Mehta, Francesco Orabona | https://proceedings.mlr.press/v202/cutkosky23a.html | ICML 2023 | We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a $(\delta,\epsilon)$-stationary point from $O(\epsilon^{-4}\delta^{-1})$ stochastic gradient queries to $O(\epsilon^{-3}\delta^{-1})$, which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to online learning, after which our results follow from standard regret bounds in online learning. For deterministic and second-order smooth objectives, applying more advanced optimistic online learning techniques enables a new complexity of $O(\epsilon^{-1.5}\delta^{-0.5})$. Our improved non-smooth analysis also immediately recovers all optimal or best-known results for finding $\epsilon$ stationary points of smooth or second-order smooth objectives in both stochastic and deterministic settings. |
https://proceedings.mlr.press/v202/cuturi23a.html | https://proceedings.mlr.press/v202/cuturi23a/cuturi23a.pdf | https://openreview.net/forum?id=KnvZKvOaJ7 | Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps | https://proceedings.mlr.press/v202/cuturi23a.html | Marco Cuturi, Michal Klein, Pierre Ablin | https://proceedings.mlr.press/v202/cuturi23a.html | ICML 2023 | Optimal transport (OT) theory focuses, among all maps $T:\mathbb{R}^d\rightarrow \mathbb{R}^d$ that can morph a probability measure $\mu$ onto another $\nu$, on those that are the “thriftiest”, i.e. such that the average cost $c(x, T(x))$ between $x$ and its image $T(x)$ is as small as possible. Many computational approaches have been proposed to estimate such Monge maps when $c$ is the squared-Euclidean distance, e.g., using entropic maps [Pooladian+2021], or input convex neural networks [Makkuva+2020, Korotin+2020]. We propose a new research direction, that leverages a specific translation invariant cost $c(x, y):=h(x-y)$ inspired by the elastic net. Here, $h:=\tfrac{1}{2}\|\cdot\|_2^2+\tau(\cdot)$, where $\tau$ is a convex function. We highlight a surprising link tying together a generalized entropic map for $h$, Bregman centroids induced by $h$, and the proximal operator of $\tau$. We show how setting $\tau$ to be a sparsity-inducing norm results in the first application of Occam’s razor to transport. These maps yield, mechanically, displacement vectors $\Delta(x):= T(x)-x$ that are sparse, with sparsity patterns that vary depending on $x$. We showcase the ability of our method to estimate meaningful OT maps for high-dimensional single-cell transcription data. We use our methods in the $34000$-d space of gene counts for cells, without using a prior dimensionality reduction, thus retaining the ability to interpret all displacements at the gene level. |
https://proceedings.mlr.press/v202/cyffers23a.html | https://proceedings.mlr.press/v202/cyffers23a/cyffers23a.pdf | https://openreview.net/forum?id=CBLDv6SFMn | From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning | https://proceedings.mlr.press/v202/cyffers23a.html | Edwige Cyffers, Aurélien Bellet, Debabrota Basu | https://proceedings.mlr.press/v202/cyffers23a.html | ICML 2023 | We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations, in order to derive privacy and utility results from this well-studied framework. We show that this new perspective recovers popular private gradient-based methods like DP-SGD and provides a principled way to design and analyze new private optimization algorithms in a flexible manner. Focusing on the widely-used Alternating Direction Method of Multipliers (ADMM), we use our general framework to derive novel private ADMM algorithms for centralized, federated and fully decentralized learning. We establish strong privacy guarantees for these algorithms, leveraging privacy amplification by iteration and by subsampling. Finally, we provide utility guarantees for the three algorithms using a unified analysis that exploits a recent linear convergence result for noisy fixed-point iterations. |
https://proceedings.mlr.press/v202/dai23a.html | https://proceedings.mlr.press/v202/dai23a/dai23a.pdf | https://openreview.net/forum?id=HtHFnHrZXu | Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | https://proceedings.mlr.press/v202/dai23a.html | Yanbo Dai, Songze Li | https://proceedings.mlr.press/v202/dai23a.html | ICML 2023 | In a federated learning (FL) system, distributed clients upload their local models to a central server to aggregate into a global model. Malicious clients may plant backdoors into the global model through uploading poisoned local models, causing images with specific patterns to be misclassified into some target labels. Backdoors planted by current attacks are not durable, and vanish quickly once the attackers stop model poisoning. In this paper, we investigate the connection between the durability of FL backdoors and the relationships between benign images and poisoned images (i.e., the images whose labels are flipped to the target label during local training). Specifically, benign images with the original and the target labels of the poisoned images are found to have key effects on backdoor durability. Consequently, we propose a novel attack, Chameleon, which utilizes contrastive learning to further amplify such effects towards a more durable backdoor. Extensive experiments demonstrate that Chameleon significantly extends the backdoor lifespan over baselines by $1.2\times \sim 4\times$, for a wide range of image datasets, backdoor types, and model architectures. |
https://proceedings.mlr.press/v202/dai23b.html | https://proceedings.mlr.press/v202/dai23b/dai23b.pdf | https://openreview.net/forum?id=7WdMBofQFx | Refined Regret for Adversarial MDPs with Linear Function Approximation | https://proceedings.mlr.press/v202/dai23b.html | Yan Dai, Haipeng Luo, Chen-Yu Wei, Julian Zimmert | https://proceedings.mlr.press/v202/dai23b.html | ICML 2023 | We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes and the state space can be arbitrarily large. We assume that the Q-function of any policy is linear in some known features, that is, a linear function approximation exists. The best existing regret upper bound for this setting (Luo et al., 2021) is of order $\tilde{\mathcal O}(K^{2/3})$ (omitting all other dependencies), given access to a simulator. This paper provides two algorithms that improve the regret to $\tilde{\mathcal O}(\sqrt K)$ in the same setting. Our first algorithm makes use of a refined analysis of the Follow-the-Regularized-Leader (FTRL) algorithm with the log-barrier regularizer. This analysis allows the loss estimators to be arbitrarily negative and might be of independent interest. Our second algorithm develops a magnitude-reduced loss estimator, further removing the polynomial dependency on the number of actions in the first algorithm and leading to the optimal regret bound (up to logarithmic terms and dependency on the horizon). Moreover, we also extend the first algorithm to simulator-free linear MDPs, which achieves $\tilde{\mathcal O}(K^{8/9})$ regret and greatly improves over the best existing bound $\tilde{\mathcal O}(K^{14/15})$. This algorithm relies on a better alternative to the Matrix Geometric Resampling procedure by Neu & Olkhovskaya (2020), which could again be of independent interest. |
https://proceedings.mlr.press/v202/dai23c.html | https://proceedings.mlr.press/v202/dai23c/dai23c.pdf | https://openreview.net/forum?id=xdCQbljiLI | MultiRobustBench: Benchmarking Robustness Against Multiple Attacks | https://proceedings.mlr.press/v202/dai23c.html | Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal | https://proceedings.mlr.press/v202/dai23c.html | ICML 2023 | The bulk of existing research in defending against adversarial examples focuses on defending against a single (typically bounded $\ell_p$-norm) attack, but for a practical setting, machine learning (ML) models should be robust to a wide variety of attacks. In this paper, we present the first unified framework for considering multiple attacks against ML models. Our framework is able to model different levels of learner’s knowledge about the test-time adversary, allowing us to model robustness against unforeseen attacks and robustness against unions of attacks. Using our framework, we present the first leaderboard, MultiRobustBench (https://multirobustbench.github.io), for benchmarking multiattack evaluation which captures performance across attack types and attack strengths. We evaluate the performance of 16 defended models for robustness against a set of 9 different attack types, including $\ell_p$-based threat models, spatial transformations, and color changes, at 20 different attack strengths (180 attacks total). Additionally, we analyze the state of current defenses against multiple attacks. Our analysis shows that while existing defenses have made progress in terms of average robustness across the set of attacks used, robustness against the worst-case attack is still a big open problem as all existing models perform worse than random guessing. |
https://proceedings.mlr.press/v202/dai23d.html | https://proceedings.mlr.press/v202/dai23d/dai23d.pdf | https://openreview.net/forum?id=fX5I7lGLuG | Moderately Distributional Exploration for Domain Generalization | https://proceedings.mlr.press/v202/dai23d.html | Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian | https://proceedings.mlr.press/v202/dai23d.html | ICML 2023 | Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimization is promising to tackle distribution discrepancy by exploring domains in an uncertainty set. However, the uncertainty set may be overwhelmingly large, leading to low-confidence prediction in DG. It is because a large uncertainty set could introduce domains containing semantically different factors from training domains. To address this issue, we propose to perform a $\textit{mo}$derately $\textit{d}$istributional $\textit{e}$xploration (MODE) for domain generalization. Specifically, MODE performs distribution exploration in an uncertainty $\textit{subset}$ that shares the same semantic factors with the training domains. We show that MODE can endow models with provable generalization performance on unknown target domains. The experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines. |
https://proceedings.mlr.press/v202/daley23a.html | https://proceedings.mlr.press/v202/daley23a/daley23a.pdf | https://openreview.net/forum?id=8Lww9LXokZ | Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning | https://proceedings.mlr.press/v202/daley23a.html | Brett Daley, Martha White, Christopher Amato, Marlos C. Machado | https://proceedings.mlr.press/v202/daley23a.html | ICML 2023 | Off-policy learning from multistep returns is crucial for sample-efficient reinforcement learning, but counteracting off-policy bias without exacerbating variance is challenging. Classically, off-policy bias is corrected in a per-decision manner: past temporal-difference errors are re-weighted by the instantaneous Importance Sampling (IS) ratio after each action via eligibility traces. Many off-policy algorithms rely on this mechanism, along with differing protocols for cutting the IS ratios (traces) to combat the variance of the IS estimator. Unfortunately, once a trace has been cut, the effect cannot be easily reversed. This has led to the development of credit-assignment strategies that account for multiple past experiences at a time. These trajectory-aware methods have not been extensively analyzed, and their theoretical justification remains uncertain. In this paper, we propose a multistep operator that unifies per-decision and trajectory-aware methods. We prove convergence conditions for our operator in the tabular setting, establishing the first guarantees for several existing methods as well as many new ones. Finally, we introduce Recency-Bounded Importance Sampling (RBIS), which leverages trajectory awareness to perform robustly across $\lambda$-values in an off-policy control task. |
https://proceedings.mlr.press/v202/daneshmand23a.html | https://proceedings.mlr.press/v202/daneshmand23a/daneshmand23a.pdf | https://openreview.net/forum?id=7snQRkYh6I | Efficient displacement convex optimization with particle gradient descent | https://proceedings.mlr.press/v202/daneshmand23a.html | Hadi Daneshmand, Jason D. Lee, Chi Jin | https://proceedings.mlr.press/v202/daneshmand23a.html | ICML 2023 | Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes its theoretical guarantees to optimize functions that are displacement convex in measures. Concretely, for Lipschitz displacement convex functions defined on probability measures over $R^d$, we prove that $O(1/\epsilon^2)$ particles and $O(d/\epsilon^4)$ iterations are sufficient to find the $\epsilon$-optimal solutions. We further provide improved complexity bounds for optimizing smooth displacement convex functions. An application of our results proves the conjecture of no optimization-barrier up to permutation invariance, proposed by Entezari et al. (2022), for specific two-layer neural networks with two-dimensional inputs uniformly drawn from the unit circle. |
https://proceedings.mlr.press/v202/dang23a.html | https://proceedings.mlr.press/v202/dang23a/dang23a.pdf | https://openreview.net/forum?id=OWROxDcS10 | Multiple Thinking Achieving Meta-Ability Decoupling for Object Navigation | https://proceedings.mlr.press/v202/dang23a.html | Ronghao Dang, Lu Chen, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen | https://proceedings.mlr.press/v202/dang23a.html | ICML 2023 | We propose a meta-ability decoupling (MAD) paradigm, which brings together various object navigation methods in an architecture system, allowing them to mutually enhance each other and evolve together. Based on the MAD paradigm, we design a multiple thinking (MT) model that leverages distinct thinking to abstract various meta-abilities. Our method decouples meta-abilities from three aspects: input, encoding, and reward while employing the multiple thinking collaboration (MTC) module to promote mutual cooperation between thinking. MAD introduces a novel qualitative and quantitative interpretability system for object navigation. Through extensive experiments on AI2-Thor and RoboTHOR, we demonstrate that our method outperforms state-of-the-art (SOTA) methods on both typical and zero-shot object navigation tasks. |
https://proceedings.mlr.press/v202/dang23b.html | https://proceedings.mlr.press/v202/dang23b/dang23b.pdf | https://openreview.net/forum?id=jjpsFetXJp | Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data | https://proceedings.mlr.press/v202/dang23b.html | Hien Dang, Tho Tran Huu, Stanley Osher, Hung The Tran, Nhat Ho, Tan Minh Nguyen | https://proceedings.mlr.press/v202/dang23b.html | ICML 2023 | Modern deep neural networks have achieved impressive performance on tasks from image classification to natural language processing. Surprisingly, these complex systems with massive amounts of parameters exhibit the same structural properties in their last-layer features and classifiers across canonical datasets when training until convergence. In particular, it has been observed that the last-layer features collapse to their class-means, and those class-means are the vertices of a simplex Equiangular Tight Frame (ETF). This phenomenon is known as Neural Collapse (NC). Recent papers have theoretically shown that NC emerges in the global minimizers of training problems with the simplified “unconstrained feature model”. In this context, we take a step further and prove that NC occurs in deep linear networks for the popular mean squared error (MSE) and cross entropy (CE) losses, showing that global solutions exhibit NC properties across the linear layers. Furthermore, we extend our study to imbalanced data for the MSE loss and present the first geometric analysis of NC under the bias-free setting. Our results demonstrate the convergence of the last-layer features and classifiers to a geometry consisting of orthogonal vectors, whose lengths depend on the amount of data in their corresponding classes. Finally, we empirically validate our theoretical analyses on synthetic and practical network architectures with both balanced and imbalanced scenarios. |
https://proceedings.mlr.press/v202/dann23a.html | https://proceedings.mlr.press/v202/dann23a/dann23a.pdf | https://openreview.net/forum?id=skDVsmXjPR | Reinforcement Learning Can Be More Efficient with Multiple Rewards | https://proceedings.mlr.press/v202/dann23a.html | Christoph Dann, Yishay Mansour, Mehryar Mohri | https://proceedings.mlr.press/v202/dann23a.html | ICML 2023 | Reward design is one of the most critical and challenging aspects when formulating a task as a reinforcement learning (RL) problem. In practice, it often takes several attempts of reward specification and learning with it in order to find one that leads to sample-efficient learning of the desired behavior. Instead, in this work, we study whether directly incorporating multiple alternate reward formulations of the same task in a single agent can lead to faster learning. We analyze multi-reward extensions of action-elimination algorithms and prove more favorable instance-dependent regret bounds compared to their single-reward counterparts, both in multi-armed bandits and in tabular Markov decision processes. Our bounds scale for each state-action pair with the inverse of the largest gap among all reward functions. This suggests that learning with multiple rewards can indeed be more sample-efficient, as long as the rewards agree on an optimal policy. We further prove that when rewards do not agree, multi-reward action elimination in multi-armed bandits still learns a policy that is good across all reward functions. |
https://proceedings.mlr.press/v202/dann23b.html | https://proceedings.mlr.press/v202/dann23b/dann23b.pdf | https://openreview.net/forum?id=bUFUaawOTk | Best of Both Worlds Policy Optimization | https://proceedings.mlr.press/v202/dann23b.html | Christoph Dann, Chen-Yu Wei, Julian Zimmert | https://proceedings.mlr.press/v202/dann23b.html | ICML 2023 | Policy optimization methods are popular reinforcement learning algorithms in practice, and recent works have built a theoretical foundation for them by proving $\sqrt{T}$ regret bounds even when the losses are adversarial. Such bounds are tight in the worst case but often overly pessimistic. In this work, we show that by carefully designing the regularizer, bonus terms, and learning rates, one can achieve a more favorable $\text{polylog}(T)$ regret bound when the losses are stochastic, without sacrificing the worst-case guarantee in the adversarial regime. Specifically, we show the first best of both worlds guarantee for policy optimization in tabular MDPs by leveraging either a Tsallis entropy or a Shannon entropy regularizer. Then we show that under known transitions, we can further obtain a first-order regret bound in the adversarial regime by leveraging the log barrier regularizer. |
https://proceedings.mlr.press/v202/das23a.html | https://proceedings.mlr.press/v202/das23a/das23a.pdf | https://openreview.net/forum?id=dFflBEShcI | Image generation with shortest path diffusion | https://proceedings.mlr.press/v202/das23a.html | Ayan Das, Stathi Fotiadis, Anil Batra, Farhang Nabiei, Fengting Liao, Sattar Vakili, Da-Shan Shiu, Alberto Bernacchia | https://proceedings.mlr.press/v202/das23a.html | ICML 2023 | The field of image generation has made significant progress thanks to the introduction of Diffusion Models, which learn to progressively reverse a given image corruption. Recently, a few studies introduced alternative ways of corrupting images in Diffusion Models, with an emphasis on blurring. However, these studies are purely empirical and it remains unclear what is the optimal procedure for corrupting an image. In this work, we hypothesize that the optimal procedure minimizes the length of the path taken when corrupting an image towards a given final state. We propose the Fisher metric for the path length, measured in the space of probability distributions. We compute the shortest path according to this metric, and we show that it corresponds to a combination of image sharpening, rather than blurring, and noise deblurring. While the corruption was chosen arbitrarily in previous work, our Shortest Path Diffusion (SPD) determines uniquely the entire spatiotemporal structure of the corruption. We show that SPD improves on strong baselines without any hyperparameter tuning, and outperforms all previous Diffusion Models based on image blurring. Furthermore, any small deviation from the shortest path leads to worse performance, suggesting that SPD provides the optimal procedure to corrupt images. Our work sheds new light on observations made in recent works and provides a new approach to improve diffusion models on images and other types of data. |
https://proceedings.mlr.press/v202/das23b.html | https://proceedings.mlr.press/v202/das23b/das23b.pdf | https://openreview.net/forum?id=rWGp9FbS0Q | Efficient List-Decodable Regression using Batches | https://proceedings.mlr.press/v202/das23b.html | Abhimanyu Das, Ayush Jain, Weihao Kong, Rajat Sen | https://proceedings.mlr.press/v202/das23b.html | ICML 2023 | We demonstrate the use of batches in studying list-decodable linear regression, in which only $\alpha\in (0,1]$ fraction of batches contain genuine samples from a common distribution and the rest can contain arbitrary or even adversarial samples. When genuine batches have $\ge \tilde\Omega(1/\alpha)$ samples each, our algorithm can efficiently find a small list of potential regression parameters, with a high probability that one of them is close to the true parameter. This is the first polynomial time algorithm for list-decodable linear regression, and its sample complexity scales nearly linearly with the dimension of the covariates. The polynomial time algorithm is made possible by the batch structure and may not be feasible without it, as suggested by a recent Statistical Query lower bound (Diakonikolas et al., 2021b). |
https://proceedings.mlr.press/v202/das23c.html | https://proceedings.mlr.press/v202/das23c/das23c.pdf | https://openreview.net/forum?id=3QIUvovsgJ | Beyond Uniform Lipschitz Condition in Differentially Private Optimization | https://proceedings.mlr.press/v202/das23c.html | Rudrajit Das, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi | https://proceedings.mlr.press/v202/das23c.html | ICML 2023 | Most prior results on differentially private stochastic gradient descent (DP-SGD) are derived under the simplistic assumption of uniform Lipschitzness, i.e., the per-sample gradients are uniformly bounded. We generalize uniform Lipschitzness by assuming that the per-sample gradients have sample-dependent upper bounds, i.e., per-sample Lipschitz constants, which themselves may be unbounded. We provide principled guidance on choosing the clip norm in DP-SGD for convex over-parameterized settings satisfying our general version of Lipschitzness when the per-sample Lipschitz constants are bounded; specifically, we recommend tuning the clip norm only till values up to the minimum per-sample Lipschitz constant. This finds application in the private training of a softmax layer on top of a deep network pre-trained on public data. We verify the efficacy of our recommendation via experiments on 8 datasets. Furthermore, we provide new convergence results for DP-SGD on convex and nonconvex functions when the Lipschitz constants are unbounded but have bounded moments, i.e., they are heavy-tailed. |
https://proceedings.mlr.press/v202/das23d.html | https://proceedings.mlr.press/v202/das23d/das23d.pdf | https://openreview.net/forum?id=UL9purXHyB | Understanding Self-Distillation in the Presence of Label Noise | https://proceedings.mlr.press/v202/das23d.html | Rudrajit Das, Sujay Sanghavi | https://proceedings.mlr.press/v202/das23d.html | ICML 2023 | Self-distillation (SD) is the process of first training a "teacher" model and then using its predictions to train a "student" model that has the same architecture. Specifically, the student’s loss is $\big(\xi*\ell(\text{teacher’s predictions}, \text{ student’s predictions}) + (1-\xi)*\ell(\text{given labels}, \text{ student’s predictions})\big)$, where $\ell$ is the loss function and $\xi$ is some parameter $\in [0,1]$. SD has been empirically observed to provide performance gains in several settings. In this paper, we theoretically characterize the effect of SD in two supervised learning problems with noisy labels. We first analyze SD for regularized linear regression and show that in the high label noise regime, the optimal value of $\xi$ that minimizes the expected error in estimating the ground truth parameter is surprisingly greater than 1. Empirically, we show that $\xi > 1$ works better than $\xi \leq 1$ even with the cross-entropy loss for several classification datasets when 50% or 30% of the labels are corrupted. Further, we quantify when optimal SD is better than optimal regularization. Next, we analyze SD in the case of logistic regression for binary classification with random label corruption and quantify the range of label corruption in which the student outperforms the teacher (w.r.t. accuracy). To our knowledge, this is the first result of its kind for the cross-entropy loss. |
https://proceedings.mlr.press/v202/datta23a.html | https://proceedings.mlr.press/v202/datta23a/datta23a.pdf | https://openreview.net/forum?id=oqkckmjCYp | Interval Bound Interpolation for Few-shot Learning with Few Tasks | https://proceedings.mlr.press/v202/datta23a.html | Shounak Datta, Sankha Subhra Mullick, Anish Chakrabarty, Swagatam Das | https://proceedings.mlr.press/v202/datta23a.html | ICML 2023 | Few-shot learning aims to transfer the knowledge acquired from training on a diverse set of tasks to unseen tasks from the same task distribution, with a limited amount of labeled data. The underlying requirement for effective few-shot generalization is to learn a good representation of the task manifold. This becomes more difficult when only a limited number of tasks are available for training. In such a few-task few-shot setting, it is beneficial to explicitly preserve the local neighborhoods from the task manifold and exploit this to generate artificial tasks for training. To this end, we introduce the notion of interval bounds from the provably robust training literature to few-shot learning. The interval bounds are used to characterize neighborhoods around the training tasks. These neighborhoods can then be preserved by minimizing the distance between a task and its respective bounds. We then use a novel strategy to artificially form new tasks for training by interpolating between the available tasks and their respective interval bounds. We apply our framework to both model-agnostic meta-learning as well as prototype-based metric-learning paradigms. The efficacy of our proposed approach is evident from the improved performance on several datasets from diverse domains in comparison to recent methods. |
https://proceedings.mlr.press/v202/daulton23a.html | https://proceedings.mlr.press/v202/daulton23a/daulton23a.pdf | https://openreview.net/forum?id=aX9jtC2lfS | Hypervolume Knowledge Gradient: A Lookahead Approach for Multi-Objective Bayesian Optimization with Partial Information | https://proceedings.mlr.press/v202/daulton23a.html | Sam Daulton, Maximilian Balandat, Eytan Bakshy | https://proceedings.mlr.press/v202/daulton23a.html | ICML 2023 | Bayesian optimization is a popular method for sample efficient multi-objective optimization. However, existing Bayesian optimization techniques fail to effectively exploit common and often-neglected problem structure such as decoupled evaluations, where objectives can be queried independently from one another and each may consume different resources, or multi-fidelity evaluations, where lower fidelity-proxies of the objectives can be evaluated at lower cost. In this work, we propose a general one-step lookahead acquisition function based on the Knowledge Gradient that addresses the complex question of what to evaluate when and at which design points in a principled Bayesian decision-theoretic fashion. Hence, our approach naturally addresses decoupled, multi-fidelity, and standard multi-objective optimization settings in a unified Bayesian decision making framework. By construction, our method is the one-step Bayes-optimal policy for hypervolume maximization. Empirically, we demonstrate that our method improves sample efficiency in a wide variety of synthetic and real-world problems. Furthermore, we show that our method is general-purpose and yields competitive performance in standard (potentially noisy) multi-objective optimization. |
https://proceedings.mlr.press/v202/davies23a.html | https://proceedings.mlr.press/v202/davies23a/davies23a.pdf | https://openreview.net/forum?id=OUjObDqOM2 | Fast Combinatorial Algorithms for Min Max Correlation Clustering | https://proceedings.mlr.press/v202/davies23a.html | Sami Davies, Benjamin Moseley, Heather Newman | https://proceedings.mlr.press/v202/davies23a.html | ICML 2023 | We introduce fast algorithms for correlation clustering with respect to the Min Max objective that provide constant factor approximations on complete graphs. Our algorithms are the first purely combinatorial approximation algorithms for this problem. We construct a novel semi-metric on the set of vertices, which we call the correlation metric, that indicates to our clustering algorithms whether pairs of nodes should be in the same cluster. The paper demonstrates empirically that, compared to prior work, our algorithms sacrifice little in the objective quality to obtain significantly better run-time. Moreover, our algorithms scale to larger networks that are effectively intractable for known algorithms. |
https://proceedings.mlr.press/v202/davies23b.html | https://proceedings.mlr.press/v202/davies23b/davies23b.pdf | https://openreview.net/forum?id=UTtYSDO1MK | Predictive Flows for Faster Ford-Fulkerson | https://proceedings.mlr.press/v202/davies23b.html | Sami Davies, Benjamin Moseley, Sergei Vassilvitskii, Yuyan Wang | https://proceedings.mlr.press/v202/davies23b.html | ICML 2023 | Recent work has shown that leveraging learned predictions can improve the running time of algorithms for bipartite matching and similar combinatorial problems. In this work, we build on this idea to improve the performance of the widely used Ford-Fulkerson algorithm for computing maximum flows by seeding Ford-Fulkerson with predicted flows. Our proposed method offers strong theoretical performance in terms of the quality of the prediction. We then consider image segmentation, a common use-case of flows in computer vision, and complement our theoretical analysis with strong empirical results. |
https://proceedings.mlr.press/v202/davies23c.html | https://proceedings.mlr.press/v202/davies23c/davies23c.pdf | https://openreview.net/forum?id=UeCasRZMj5 | The Persistent Laplacian for Data Science: Evaluating Higher-Order Persistent Spectral Representations of Data | https://proceedings.mlr.press/v202/davies23c.html | Thomas Davies, Zhengchao Wan, Ruben J Sanchez-Garcia | https://proceedings.mlr.press/v202/davies23c.html | ICML 2023 | Persistent homology is arguably the most successful technique in Topological Data Analysis. It combines homology, a topological feature of a data set, with persistence, which tracks the evolution of homology over different scales. The persistent Laplacian is a recent theoretical development that combines persistence with the combinatorial Laplacian, the higher-order extension of the well-known graph Laplacian. Crucially, the Laplacian encodes both the homology of a data set and some additional geometric information not captured by the homology. Here, we provide the first investigation into the efficacy of the persistent Laplacian as an embedding of data for downstream classification and regression tasks. We extend the persistent Laplacian to cubical complexes so it can be used on images, then evaluate its performance as an embedding method on the MNIST and MoleculeNet datasets, demonstrating that it consistently outperforms persistent homology across tasks. |
https://proceedings.mlr.press/v202/daw23a.html | https://proceedings.mlr.press/v202/daw23a/daw23a.pdf | https://openreview.net/forum?id=rhvb4kprWB | Mitigating Propagation Failures in Physics-informed Neural Networks using Retain-Resample-Release (R3) Sampling | https://proceedings.mlr.press/v202/daw23a.html | Arka Daw, Jie Bu, Sifan Wang, Paris Perdikaris, Anuj Karpatne | https://proceedings.mlr.press/v202/daw23a.html | ICML 2023 | Despite the success of physics-informed neural networks (PINNs) in approximating partial differential equations (PDEs), PINNs can sometimes fail to converge to the correct solution in problems involving complicated PDEs. This is reflected in several recent studies on characterizing the "failure modes" of PINNs, although a thorough understanding of the connection between PINN failure modes and sampling strategies is missing. In this paper, we provide a novel perspective of failure modes of PINNs by hypothesizing that training PINNs relies on successful "propagation" of solution from initial and/or boundary condition points to interior points. We show that PINNs with poor sampling strategies can get stuck at trivial solutions if there are propagation failures, characterized by highly imbalanced PDE residual fields. To mitigate propagation failures, we propose a novel Retain-Resample-Release sampling (R3) algorithm that can incrementally accumulate collocation points in regions of high PDE residuals with little to no computational overhead. We provide an extension of R3 sampling to respect the principle of causality while solving time-dependent PDEs. We theoretically analyze the behavior of R3 sampling and empirically demonstrate its efficacy and efficiency in comparison with baselines on a variety of PDE problems. |
https://proceedings.mlr.press/v202/dbouk23a.html | https://proceedings.mlr.press/v202/dbouk23a/dbouk23a.pdf | https://openreview.net/forum?id=2aytHX3LRf | On the Robustness of Randomized Ensembles to Adversarial Perturbations | https://proceedings.mlr.press/v202/dbouk23a.html | Hassan Dbouk, Naresh Shanbhag | https://proceedings.mlr.press/v202/dbouk23a.html | ICML 2023 | Randomized ensemble classifiers (RECs), where one classifier is randomly selected during inference, have emerged as an attractive alternative to traditional ensembling methods for realizing adversarially robust classifiers with limited compute requirements. However, recent works have shown that existing methods for constructing RECs are more vulnerable than initially claimed, casting major doubts on their efficacy and prompting fundamental questions such as: "When are RECs useful?", "What are their limits?", and "How do we train them?". In this work, we first demystify RECs as we derive fundamental results regarding their theoretical limits, necessary and sufficient conditions for them to be useful, and more. Leveraging this new understanding, we propose a new boosting algorithm (BARRE) for training robust RECs, and empirically demonstrate its effectiveness at defending against strong $\ell_\infty$ norm-bounded adversaries across various network architectures and datasets. Our code can be found at https://github.com/hsndbk4/BARRE. |
https://proceedings.mlr.press/v202/de-jong23a.html | https://proceedings.mlr.press/v202/de-jong23a/de-jong23a.pdf | https://openreview.net/forum?id=nlUAvrMbUZ | Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute | https://proceedings.mlr.press/v202/de-jong23a.html | Michiel De Jong, Yury Zemlyanskiy, Nicholas Fitzgerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William W. Cohen | https://proceedings.mlr.press/v202/de-jong23a.html | ICML 2023 | Retrieval-augmented language models such as Fusion-in-Decoder are powerful, setting the state of the art on a variety of knowledge-intensive tasks. However, they are also expensive, due to the need to encode a large number of retrieved passages. Some work avoids this cost by pre-encoding a text corpus into a memory and retrieving dense representations directly. However, pre-encoding memory incurs a severe quality penalty as the memory representations are not conditioned on the current input. We propose LUMEN, a hybrid between these two extremes, pre-computing the majority of the retrieval representation and completing the encoding on the fly using a live encoder that is conditioned on the question and fine-tuned for the task. We show that LUMEN significantly outperforms pure memory on multiple question-answering tasks while being much cheaper than FiD, and outperforms both for any given compute budget. Moreover, the advantage of LUMEN over FiD increases with model size. |
https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html | https://proceedings.mlr.press/v202/de-oliveira-fonseca23a/de-oliveira-fonseca23a.pdf | https://openreview.net/forum?id=RnZhB7kNl0 | Continuous Spatiotemporal Transformer | https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html | Antonio Henrique De Oliveira Fonseca, Emanuele Zappala, Josue Ortega Caro, David Van Dijk | https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html | ICML 2023 | Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been very successful in NLP and computer vision where they provide interpretable representations of data. However, a limitation of transformers in modeling continuous dynamical systems is that they are fundamentally discrete time and space models and thus have no guarantees regarding continuous sampling. To address this challenge, we present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture that is designed for modeling of continuous systems. This new framework guarantees a continuous and smooth output via optimization in Sobolev space. We benchmark CST against traditional transformers as well as other spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data. |
https://proceedings.mlr.press/v202/de-silva23a.html | https://proceedings.mlr.press/v202/de-silva23a/de-silva23a.pdf | https://openreview.net/forum?id=8D3SsQlRbY | The Value of Out-of-Distribution Data | https://proceedings.mlr.press/v202/de-silva23a.html | Ashwin De Silva, Rahul Ramesh, Carey Priebe, Pratik Chaudhari, Joshua T Vogelstein | https://proceedings.mlr.press/v202/de-silva23a.html | ICML 2023 | Generalization error always improves with more in-distribution data. However, it is an open question what happens as we add out-of-distribution (OOD) data. Intuitively, if the OOD data is quite different, it seems more data would harm generalization error, though if the OOD data are sufficiently similar, much empirical evidence suggests that OOD data can actually improve generalization error. We show a counter-intuitive phenomenon: the generalization error of a task can be a non-monotonic function of the amount of OOD data. Specifically, we prove that generalization error can improve with small amounts of OOD data, and then get worse than no OOD data with larger amounts. In other words, there is value in training on small amounts of OOD data. We analytically demonstrate these results via Fisher’s Linear Discriminant on synthetic datasets, and empirically demonstrate them via deep networks on computer vision benchmarks such as MNIST, CIFAR-10, CINIC-10, PACS and DomainNet. In the idealistic setting where we know which samples are OOD, we show that these non-monotonic trends can be exploited using an appropriately weighted objective of the target and OOD empirical risk. While its practical utility is limited, this does suggest that if we can detect OOD samples, then there may be ways to benefit from them. When we do not know which samples are OOD, we show how a number of go-to strategies such as data-augmentation, hyper-parameter optimization and pre-training are not enough to ensure that the target generalization error does not deteriorate with the number of OOD samples in the dataset. |
https://proceedings.mlr.press/v202/de-sousa-ribeiro23a.html | https://proceedings.mlr.press/v202/de-sousa-ribeiro23a/de-sousa-ribeiro23a.pdf | https://openreview.net/forum?id=DA0PROpwan | High Fidelity Image Counterfactuals with Probabilistic Causal Models | https://proceedings.mlr.press/v202/de-sousa-ribeiro23a.html | Fabio De Sousa Ribeiro, Tian Xia, Miguel Monteiro, Nick Pawlowski, Ben Glocker | https://proceedings.mlr.press/v202/de-sousa-ribeiro23a.html | ICML 2023 | We present a general causal generative modelling framework for accurate estimation of high fidelity image counterfactuals with deep structural causal models. Estimation of interventional and counterfactual queries for high-dimensional structured variables, such as images, remains a challenging task. We leverage ideas from causal mediation analysis and advances in generative modelling to design new deep causal mechanisms for structured variables in causal models. Our experiments demonstrate that our proposed mechanisms are capable of accurate abduction and estimation of direct, indirect and total effects as measured by axiomatic soundness of counterfactuals. |
https://proceedings.mlr.press/v202/dedieu23a.html | https://proceedings.mlr.press/v202/dedieu23a/dedieu23a.pdf | https://openreview.net/forum?id=VTkBZayJos | Learning Noisy OR Bayesian Networks with Max-Product Belief Propagation | https://proceedings.mlr.press/v202/dedieu23a.html | Antoine Dedieu, Guangyao Zhou, Dileep George, Miguel Lazaro-Gredilla | https://proceedings.mlr.press/v202/dedieu23a.html | ICML 2023 | Noisy-OR Bayesian Networks (BNs) are a family of probabilistic graphical models which express rich statistical dependencies in binary data. Variational inference (VI) has been the main method proposed to learn noisy-OR BNs with complex latent structures (Jaakkola & Jordan, 1999; Ji et al., 2020; Buhai et al., 2020). However, the proposed VI approaches either (a) use a recognition network with standard amortized inference that cannot induce "explaining-away"; or (b) assume a simple mean-field (MF) posterior which is vulnerable to bad local optima. Existing MF VI methods also update the MF parameters sequentially which makes them inherently slow. In this paper, we propose parallel max-product as an alternative algorithm for learning noisy-OR BNs with complex latent structures and we derive a fast stochastic training scheme that scales to large datasets. We evaluate both approaches on several benchmarks where VI is the state-of-the-art and show that our method (a) achieves better test performance than Ji et al. (2020) for learning noisy-OR BNs with hierarchical latent structures on large sparse real datasets; (b) recovers a higher number of ground truth parameters than Buhai et al. (2020) from cluttered synthetic scenes; and (c) solves the 2D blind deconvolution problem from Lazaro-Gredilla et al. (2021) and variants - including binary matrix factorization - while VI catastrophically fails and is up to two orders of magnitude slower. |
https://proceedings.mlr.press/v202/defazio23a.html | https://proceedings.mlr.press/v202/defazio23a/defazio23a.pdf | https://openreview.net/forum?id=GXZ6cT5cvY | Learning-Rate-Free Learning by D-Adaptation | https://proceedings.mlr.press/v202/defazio23a.html | Aaron Defazio, Konstantin Mishchenko | https://proceedings.mlr.press/v202/defazio23a.html | ICML 2023 | The speed of gradient descent for convex Lipschitz functions is highly dependent on the choice of learning rate. Setting the learning rate to achieve the optimal convergence rate requires knowing the distance D from the initial point to the solution set. In this work, we describe a single-loop method, with no back-tracking or line searches, which does not require knowledge of D yet asymptotically achieves the optimal rate of convergence for the complexity class of convex Lipschitz functions. Our approach is the first parameter-free method for this class without additional multiplicative log factors in the convergence rate. We present extensive experiments for SGD and Adam variants of our method, where the method automatically matches hand-tuned learning rates across more than a dozen diverse machine learning problems, including large-scale vision and language problems. Our method is practical, efficient and requires no additional function value or gradient evaluations each step. An implementation is provided in the supplementary material. |
https://proceedings.mlr.press/v202/dehghani23a.html | https://proceedings.mlr.press/v202/dehghani23a/dehghani23a.pdf | https://openreview.net/forum?id=Lhyy8H75KA | Scaling Vision Transformers to 22 Billion Parameters | https://proceedings.mlr.press/v202/dehghani23a.html | Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme Ruiz, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd Van Steenkiste, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Collier, Alexey A. Gritsenko, Vighnesh Birodkar, Cristina Nader Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetic, Dustin Tran, Thomas Kipf, Mario Lucic, Xiaohua Zhai, Daniel Keysers, Jeremiah J. Harmsen, Neil Houlsby | https://proceedings.mlr.press/v202/dehghani23a.html | ICML 2023 | The scaling of Transformers has driven breakthrough capabilities for language models. At present, the largest large language models (LLMs) contain upwards of 100B parameters. Vision Transformers (ViT) have introduced the same architecture to image and video modelling, but these have not yet been successfully scaled to nearly the same degree; the largest dense ViT contains 4B parameters (Chen et al., 2022). We present a recipe for highly efficient and stable training of a 22B-parameter ViT (ViT-22B) and perform a wide variety of experiments on the resulting model. When evaluated on downstream tasks (often with a lightweight linear model on frozen features), ViT-22B demonstrates increasing performance with scale. We further observe other interesting benefits of scale, including an improved tradeoff between fairness and performance, state-of-the-art alignment to human visual perception in terms of shape/texture bias, and improved robustness. ViT-22B demonstrates the potential for "LLM-like" scaling in vision, and provides key steps towards getting there. |
https://proceedings.mlr.press/v202/delattre23a.html | https://proceedings.mlr.press/v202/delattre23a/delattre23a.pdf | https://openreview.net/forum?id=Z0ATKIJR8G | Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration | https://proceedings.mlr.press/v202/delattre23a.html | Blaise Delattre, Quentin Barthélemy, Alexandre Araujo, Alexandre Allauzen | https://proceedings.mlr.press/v202/delattre23a.html | ICML 2023 | Since the control of the Lipschitz constant has a great impact on the training stability, generalization, and robustness of neural networks, the estimation of this value is nowadays a real scientific challenge. In this paper, we introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory and a new alternative to the Power iteration. Called the Gram iteration, our approach exhibits superlinear convergence. First, we show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability. Then, it proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches. |
https://proceedings.mlr.press/v202/demirovic23a.html | https://proceedings.mlr.press/v202/demirovic23a/demirovic23a.pdf | https://openreview.net/forum?id=0rO3nlTlbG | Blossom: an Anytime Algorithm for Computing Optimal Decision Trees | https://proceedings.mlr.press/v202/demirovic23a.html | Emir Demirović, Emmanuel Hebrard, Louis Jean | https://proceedings.mlr.press/v202/demirovic23a.html | ICML 2023 | We propose a simple algorithm to learn optimal decision trees of bounded depth. This algorithm is essentially an anytime version of the state-of-the-art dynamic programming approach. It has virtually no overhead compared to heuristic methods and is comparable to the best exact methods to prove optimality on most data sets. Experiments show that whereas existing exact methods hardly scale to deep trees, this algorithm learns trees comparable to standard heuristics without computational overhead, and can significantly improve their accuracy when given more computation time, even for deep trees. |
https://proceedings.mlr.press/v202/deng23a.html | https://proceedings.mlr.press/v202/deng23a/deng23a.pdf | https://openreview.net/forum?id=BTwEqF0s34 | Optimizing NOTEARS Objectives via Topological Swaps | https://proceedings.mlr.press/v202/deng23a.html | Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Kumar Ravikumar | https://proceedings.mlr.press/v202/deng23a.html | ICML 2023 | Recently, an intriguing class of non-convex optimization problems has emerged in the context of learning directed acyclic graphs (DAGs). These problems involve minimizing a given loss or score function, subject to a non-convex continuous constraint that penalizes the presence of cycles in a graph. In this work, we delve into the optimality challenges associated with this class of non-convex programs. To address these challenges, we propose a bi-level algorithm that leverages the non-convex constraint in a novel way. The outer level of the algorithm optimizes over topological orders by iteratively swapping pairs of nodes within the topological order of a DAG. A key innovation of our approach is the development of an effective method for generating a set of candidate swapping pairs for each iteration. At the inner level, given a topological order, we utilize off-the-shelf solvers that can handle linear constraints. The key advantage of our proposed algorithm is that it is guaranteed to find a local minimum or a KKT point under weaker conditions compared to previous work and finds solutions with lower scores. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches in terms of achieving a better score. Additionally, our method can also be used as a post-processing algorithm to significantly improve the score of other algorithms. Code implementing the proposed method is available at https://github.com/duntrain/topo. |
https://proceedings.mlr.press/v202/deng23b.html | https://proceedings.mlr.press/v202/deng23b/deng23b.pdf | https://openreview.net/forum?id=kbbpaKhXmN | Uncertainty Estimation by Fisher Information-based Evidential Deep Learning | https://proceedings.mlr.press/v202/deng23b.html | Danruo Deng, Guangyong Chen, Yang Yu, Furui Liu, Pheng-Ann Heng | https://proceedings.mlr.press/v202/deng23b.html | ICML 2023 | Uncertainty estimation is a key factor that makes deep learning reliable in practical applications. Recently proposed evidential neural networks explicitly account for different uncertainties by treating the network’s outputs as evidence to parameterize the Dirichlet distribution, and achieve impressive performance in uncertainty estimation. However, for high data uncertainty samples but annotated with the one-hot label, the evidence-learning process for those mislabeled classes is over-penalized and remains hindered. To address this problem, we propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL). In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focus on the representation learning of uncertain classes. The generalization ability of our network is further improved by optimizing the PAC-Bayesian bound. As demonstrated empirically, our proposed method consistently outperforms traditional EDL-related algorithms in multiple uncertainty estimation tasks, especially in the more challenging few-shot classification settings. |