abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v202/bhattacharjee23a.html | https://proceedings.mlr.press/v202/bhattacharjee23a/bhattacharjee23a.pdf | https://openreview.net/forum?id=eq6NF2qaXg | Data-Copying in Generative Models: A Formal Framework | https://proceedings.mlr.press/v202/bhattacharjee23a.html | Robi Bhattacharjee, Sanjoy Dasgupta, Kamalika Chaudhuri | https://proceedings.mlr.press/v202/bhattacharjee23a.html | ICML 2023 | There has been some recent interest in detecting and addressing memorization of training data by deep neural networks. A formal framework for memorization in generative models, called “data-copying”, was proposed by Meehan et al. (2020). We build upon their work to show that their framework may fail to detect certain kinds of blatant memorization. Motivated by this and the theory of non-parametric methods, we provide an alternative definition of data-copying that applies more locally. We provide a method to detect data-copying, and provably show that it works with high probability when enough data is available. We also provide lower bounds that characterize the sample requirement for reliable detection. |
https://proceedings.mlr.press/v202/biderman23a.html | https://proceedings.mlr.press/v202/biderman23a/biderman23a.pdf | https://openreview.net/forum?id=bpRTAnJ8LW | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling | https://proceedings.mlr.press/v202/biderman23a.html | Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, Usvsn Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, Oskar Van Der Wal | https://proceedings.mlr.press/v202/biderman23a.html | ICML 2023 | How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce Pythia, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each one of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend Pythia to facilitate research in many areas, and we present several case studies including novel results in memorization, term frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights toward LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at https://github.com/EleutherAI/pythia. |
https://proceedings.mlr.press/v202/bihani23a.html | https://proceedings.mlr.press/v202/bihani23a/bihani23a.pdf | https://openreview.net/forum?id=6vauERTFMb | StriderNet: A Graph Reinforcement Learning Approach to Optimize Atomic Structures on Rough Energy Landscapes | https://proceedings.mlr.press/v202/bihani23a.html | Vaibhav Bihani, Sahil Manchanda, Srikanth Sastry, Sayan Ranu, N M Anoop Krishnan | https://proceedings.mlr.press/v202/bihani23a.html | ICML 2023 | Optimization of atomic structures presents a challenging problem, due to their highly rough and non-convex energy landscape, with wide applications in the fields of drug design, materials discovery, and mechanics. Here, we present a graph reinforcement learning approach, StriderNet, that learns a policy to displace the atoms towards low energy configurations. We evaluate the performance of StriderNet on three complex atomic systems, namely, binary Lennard-Jones particles, calcium silicate hydrates gel, and disordered silicon. We show that StriderNet outperforms all classical optimization algorithms and enables the discovery of a lower energy minimum. In addition, StriderNet exhibits a higher rate of reaching minima with lower energies, as confirmed by the average over multiple realizations. Finally, we show that StriderNet exhibits inductivity to unseen system sizes that are an order of magnitude different from the training system. All the codes and datasets are available at https://github.com/M3RG-IITD/StriderNET. |
https://proceedings.mlr.press/v202/bilos23a.html | https://proceedings.mlr.press/v202/bilos23a/bilos23a.pdf | https://openreview.net/forum?id=OUWckW2g3j | Modeling Temporal Data as Continuous Functions with Stochastic Process Diffusion | https://proceedings.mlr.press/v202/bilos23a.html | Marin Biloš, Kashif Rasul, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann | https://proceedings.mlr.press/v202/bilos23a.html | ICML 2023 | Temporal data such as time series can be viewed as discretized measurements of the underlying function. To build a generative model for such data we have to model the stochastic process that governs it. We propose a solution by defining the denoising diffusion model in the function space which also allows us to naturally handle irregularly-sampled observations. The forward process gradually adds noise to functions, preserving their continuity, while the learned reverse process removes the noise and returns functions as new samples. To this end, we define suitable noise sources and introduce novel denoising and score-matching models. We show how our method can be used for multivariate probabilistic forecasting and imputation, and how our model can be interpreted as a neural process. |
https://proceedings.mlr.press/v202/bitterwolf23a.html | https://proceedings.mlr.press/v202/bitterwolf23a/bitterwolf23a.pdf | https://openreview.net/forum?id=ChniRIfpRR | In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation | https://proceedings.mlr.press/v202/bitterwolf23a.html | Julian Bitterwolf, Maximilian Müller, Matthias Hein | https://proceedings.mlr.press/v202/bitterwolf23a.html | ICML 2023 | Out-of-distribution (OOD) detection is the problem of identifying inputs which are unrelated to the in-distribution task. The OOD detection performance when the in-distribution (ID) is ImageNet-1K is commonly being tested on a small range of test OOD datasets. We find that most of the currently used test OOD datasets, including datasets from the open set recognition (OSR) literature, have severe issues: In some cases more than 50% of the dataset contains objects belonging to one of the ID classes. These erroneous samples heavily distort the evaluation of OOD detectors. As a solution, we introduce with NINCO a novel test OOD dataset, each sample checked to be ID free, which with its fine-grained range of OOD classes allows for a detailed analysis of an OOD detector’s strengths and failure modes, particularly when paired with a number of synthetic “OOD unit-tests”. We provide detailed evaluations across a large set of architectures and OOD detection methods on NINCO and the unit-tests, revealing new insights about model weaknesses and the effects of pretraining on OOD detection performance. We provide code and data at https://github.com/j-cb/NINCO. |
https://proceedings.mlr.press/v202/biza23a.html | https://proceedings.mlr.press/v202/biza23a/biza23a.pdf | https://openreview.net/forum?id=ZXeTCRZJp9 | Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames | https://proceedings.mlr.press/v202/biza23a.html | Ondrej Biza, Sjoerd Van Steenkiste, Mehdi S. M. Sajjadi, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Thomas Kipf | https://proceedings.mlr.press/v202/biza23a.html | ICML 2023 | Automatically discovering composable abstractions from raw perceptual data is a long-standing challenge in machine learning. Recent slot-based neural networks that learn about objects in a self-supervised manner have made exciting progress in this direction. However, they typically fall short at adequately capturing spatial symmetries present in the visual world, which leads to sample inefficiency, such as when entangling object appearance and pose. In this paper, we present a simple yet highly effective method for incorporating spatial symmetries via slot-centric reference frames. We incorporate equivariance to per-object pose transformations into the attention and generation mechanism of Slot Attention by translating, scaling, and rotating position encodings. These changes result in little computational overhead, are easy to implement, and can result in large gains in terms of data efficiency and overall improvements to object discovery. We evaluate our method on a wide range of synthetic object discovery benchmarks namely CLEVR, Tetrominoes, CLEVRTex, Objects Room and MultiShapeNet, and show promising improvements on the challenging real-world Waymo Open dataset. |
https://proceedings.mlr.press/v202/black23a.html | https://proceedings.mlr.press/v202/black23a/black23a.pdf | https://openreview.net/forum?id=50SO1LwcYU | Understanding Oversquashing in GNNs through the Lens of Effective Resistance | https://proceedings.mlr.press/v202/black23a.html | Mitchell Black, Zhengchao Wan, Amir Nayyeri, Yusu Wang | https://proceedings.mlr.press/v202/black23a.html | ICML 2023 | Message passing graph neural networks (GNNs) are a popular learning architecture for graph-structured data. However, one problem GNNs experience is oversquashing, where a GNN has difficulty sending information between distant nodes. Understanding and mitigating oversquashing has recently received significant attention from the research community. In this paper, we continue this line of work by analyzing oversquashing through the lens of the effective resistance between nodes in the input graph. Effective resistance intuitively captures the “strength” of connection between two nodes by paths in the graph, and has a rich literature spanning many areas of graph theory. We propose to use total effective resistance as a bound of the total amount of oversquashing in a graph and provide theoretical justification for its use. We further develop an algorithm to identify edges to be added to an input graph to minimize the total effective resistance, thereby alleviating oversquashing. We provide empirical evidence of the effectiveness of our total effective resistance based rewiring strategies for improving the performance of GNNs. |
https://proceedings.mlr.press/v202/blake23a.html | https://proceedings.mlr.press/v202/blake23a/blake23a.pdf | https://openreview.net/forum?id=A8HOsNfish | Unit Scaling: Out-of-the-Box Low-Precision Training | https://proceedings.mlr.press/v202/blake23a.html | Charlie Blake, Douglas Orr, Carlo Luschi | https://proceedings.mlr.press/v202/blake23a.html | ICML 2023 | We present unit scaling, a paradigm for designing deep learning models that simplifies the use of low-precision number formats. Training in FP16 or the recently proposed FP8 formats offers substantial efficiency gains, but can lack sufficient range for out-of-the-box training. Unit scaling addresses this by introducing a principled approach to model numerics: seeking unit variance of all weights, activations and gradients at initialisation. Unlike alternative methods, this approach neither requires multiple training runs to find a suitable scale nor has significant computational overhead. We demonstrate the efficacy of unit scaling across a range of models and optimisers. We further show that existing models can be adapted to be unit-scaled, training BERT-Large in FP16 and then FP8 with no degradation in accuracy. |
https://proceedings.mlr.press/v202/blanke23a.html | https://proceedings.mlr.press/v202/blanke23a/blanke23a.pdf | https://openreview.net/forum?id=j9q5fadNpg | FLEX: an Adaptive Exploration Algorithm for Nonlinear Systems | https://proceedings.mlr.press/v202/blanke23a.html | Matthieu Blanke, Marc Lelarge | https://proceedings.mlr.press/v202/blanke23a.html | ICML 2023 | Model-based reinforcement learning is a powerful tool, but collecting data to fit an accurate model of the system can be costly. Exploring an unknown environment in a sample-efficient manner is hence of great importance. However, the complexity of dynamics and the computational limitations of real systems make this task challenging. In this work, we introduce FLEX, an exploration algorithm for nonlinear dynamics based on optimal experimental design. Our policy maximizes the information of the next step and results in an adaptive exploration algorithm, compatible with arbitrary parametric learning models, and requiring minimal computing resources. We test our method on a number of nonlinear environments covering different settings, including time-varying dynamics. Keeping in mind that exploration is intended to serve an exploitation objective, we also test our algorithm on downstream model-based classical control tasks and compare it to other state-of-the-art model-based and model-free approaches. The performance achieved by FLEX is competitive and its computational cost is low. |
https://proceedings.mlr.press/v202/blaser23a.html | https://proceedings.mlr.press/v202/blaser23a/blaser23a.pdf | https://openreview.net/forum?id=KRaczWbPSF | Not all Strongly Rayleigh Distributions Have Small Probabilistic Generating Circuits | https://proceedings.mlr.press/v202/blaser23a.html | Markus Bläser | https://proceedings.mlr.press/v202/blaser23a.html | ICML 2023 | Probabilistic modeling is a central task in machine learning. Probabilistic models should be tractable, i.e., allowing tractable probabilistic inference, but also efficient, i.e., being able to represent a large set of probability distributions. Zhang et al. (ICML 2021) recently proposed a new model, probabilistic generating circuits. They raised the question whether every strongly Rayleigh distribution can be efficiently represented by such circuits. We prove that this question has a negative answer: there are strongly Rayleigh distributions that cannot be represented by polynomial-sized probabilistic generating circuits, assuming a widely accepted complexity theoretic conjecture. |
https://proceedings.mlr.press/v202/bleistein23a.html | https://proceedings.mlr.press/v202/bleistein23a/bleistein23a.pdf | https://openreview.net/forum?id=5hoUVyc6MU | Learning the Dynamics of Sparsely Observed Interacting Systems | https://proceedings.mlr.press/v202/bleistein23a.html | Linus Bleistein, Adeline Fermanian, Anne-Sophie Jannot, Agathe Guilloux | https://proceedings.mlr.press/v202/bleistein23a.html | ICML 2023 | We address the problem of learning the dynamics of an unknown non-parametric system linking a target and a feature time series. The feature time series is measured on a sparse and irregular grid, while we have access to only a few points of the target time series. Once learned, we can use these dynamics to predict values of the target from the previous values of the feature time series. We frame this task as learning the solution map of a controlled differential equation (CDE). By leveraging the rich theory of signatures, we are able to cast this non-linear problem as a high-dimensional linear regression. We provide an oracle bound on the prediction error which exhibits explicit dependencies on the individual-specific sampling schemes. Our theoretical results are illustrated by simulations which show that our method outperforms existing algorithms for recovering the full time series while being computationally cheap. We conclude by demonstrating its potential on real-world epidemiological data. |
https://proceedings.mlr.press/v202/boehmer23a.html | https://proceedings.mlr.press/v202/boehmer23a/boehmer23a.pdf | https://openreview.net/forum?id=H8WXqZ7VZn | Subset Selection Based On Multiple Rankings in the Presence of Bias: Effectiveness of Fairness Constraints for Multiwinner Voting Score Functions | https://proceedings.mlr.press/v202/boehmer23a.html | Niclas Boehmer, L. Elisa Celis, Lingxiao Huang, Anay Mehrotra, Nisheeth K. Vishnoi | https://proceedings.mlr.press/v202/boehmer23a.html | ICML 2023 | We consider the problem of subset selection where one is given multiple rankings of items and the goal is to select the highest "quality" subset. Score functions from the multiwinner voting literature have been used to aggregate rankings into quality scores for subsets. We study this setting of subset selection problems when, in addition, rankings may contain systemic or unconscious biases toward a group of items. For a general model of input rankings and biases, we show that requiring the selected subset to satisfy group fairness constraints can improve the quality of the selection with respect to unbiased rankings. Importantly, we show that for fairness constraints to be effective, different multiwinner score functions may require a drastically different number of rankings: While for some functions, fairness constraints need an exponential number of rankings to recover a close-to-optimal solution, for others, this dependency is only polynomial. This result relies on a novel notion of "smoothness" of submodular functions in this setting that quantifies how well a function can "correctly" assess the quality of items in the presence of bias. The results in this paper can be used to guide the choice of multiwinner score functions for the subset selection setting considered here; we additionally provide a tool to empirically enable this. |
https://proceedings.mlr.press/v202/boehmer23b.html | https://proceedings.mlr.press/v202/boehmer23b/boehmer23b.pdf | https://openreview.net/forum?id=QwDUBbrvmB | Properties of the Mallows Model Depending on the Number of Alternatives: A Warning for an Experimentalist | https://proceedings.mlr.press/v202/boehmer23b.html | Niclas Boehmer, Piotr Faliszewski, Sonja Kraiczy | https://proceedings.mlr.press/v202/boehmer23b.html | ICML 2023 | The Mallows model is a popular distribution for ranked data. We empirically and theoretically analyze how the properties of rankings sampled from the Mallows model change when increasing the number of alternatives. We find that real-world data behaves differently from the Mallows model, yet is in line with its recent variant proposed by Boehmer et al. [IJCAI ’21]. As part of our study, we issue several warnings about using the classic Mallows model. For instance, we find that one should be extremely careful when using the Mallows model to generate data for experiments with a varying number of alternatives, as observed trends in such experiments might be due to the changing nature of the generated data. |
https://proceedings.mlr.press/v202/boetius23a.html | https://proceedings.mlr.press/v202/boetius23a/boetius23a.pdf | https://openreview.net/forum?id=z3hnQh5UJd | A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks | https://proceedings.mlr.press/v202/boetius23a.html | David Boetius, Stefan Leue, Tobias Sutter | https://proceedings.mlr.press/v202/boetius23a.html | ICML 2023 | Counterexample-guided repair aims at creating neural networks with mathematical safety guarantees, facilitating the application of neural networks in safety-critical domains. However, whether counterexample-guided repair is guaranteed to terminate remains an open question. We approach this question by showing that counterexample-guided repair can be viewed as a robust optimisation algorithm. While termination guarantees for neural network repair itself remain beyond our reach, we prove termination for more restrained machine learning models and disprove termination in a general setting. We empirically study the practical implications of our theoretical results, demonstrating the suitability of common verifiers and falsifiers for repair despite a disadvantageous theoretical result. Additionally, we use our theoretical insights to devise a novel algorithm for repairing linear regression models based on quadratic programming, surpassing existing approaches. |
https://proceedings.mlr.press/v202/bombari23a.html | https://proceedings.mlr.press/v202/bombari23a/bombari23a.pdf | https://openreview.net/forum?id=fZFNPf1QiF | Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels | https://proceedings.mlr.press/v202/bombari23a.html | Simone Bombari, Shayan Kiyani, Marco Mondelli | https://proceedings.mlr.press/v202/bombari23a.html | ICML 2023 | Machine learning models are vulnerable to adversarial perturbations, and a thought-provoking paper by Bubeck and Sellke has analyzed this phenomenon through the lens of over-parameterization: interpolating smoothly the data requires significantly more parameters than simply memorizing it. However, this "universal" law provides only a necessary condition for robustness, and it is unable to discriminate between models. In this paper, we address these gaps by focusing on empirical risk minimization in two prototypical settings, namely, random features and the neural tangent kernel (NTK). We prove that, for random features, the model is not robust for any degree of over-parameterization, even when the necessary condition coming from the universal law of robustness is satisfied. In contrast, for even activations, the NTK model meets the universal lower bound, and it is robust as soon as the necessary condition on over-parameterization is fulfilled. This also addresses a conjecture in prior work by Bubeck, Li and Nagaraj. Our analysis decouples the effect of the kernel of the model from an "interaction matrix", which describes the interaction with the test data and captures the effect of the activation. Our theoretical results are corroborated by numerical evidence on both synthetic and standard datasets (MNIST, CIFAR-10). |
https://proceedings.mlr.press/v202/bonet23a.html | https://proceedings.mlr.press/v202/bonet23a/bonet23a.pdf | https://openreview.net/forum?id=sixaiuoFnr | Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals | https://proceedings.mlr.press/v202/bonet23a.html | Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty | https://proceedings.mlr.press/v202/bonet23a.html | ICML 2023 | When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires the usage of Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices, and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this discrepancy to brain-age prediction from MEG data, and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications. |
https://proceedings.mlr.press/v202/bonev23a.html | https://proceedings.mlr.press/v202/bonev23a/bonev23a.pdf | https://openreview.net/forum?id=TwsJ9IOZDx | Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere | https://proceedings.mlr.press/v202/bonev23a.html | Boris Bonev, Thorsten Kurth, Christian Hundt, Jaideep Pathak, Maximilian Baust, Karthik Kashinath, Anima Anandkumar | https://proceedings.mlr.press/v202/bonev23a.html | ICML 2023 | Fourier Neural Operators (FNOs) have proven to be an efficient and effective method for resolution-independent operator learning in a broad variety of application areas across scientific machine learning. A key reason for their success is their ability to accurately model long-range dependencies in spatio-temporal data by learning global convolutions in a computationally efficient manner. To this end, FNOs rely on the discrete Fourier transform (DFT); however, DFTs cause visual and spectral artifacts as well as pronounced dissipation when learning operators in spherical coordinates by incorrectly assuming flat geometry. To overcome this limitation, we generalize FNOs on the sphere, introducing Spherical FNOs (SFNOs) for learning operators on spherical geometries. We apply SFNOs to forecasting atmospheric dynamics, and demonstrate stable autoregressive rollouts for a year of simulated time (1,460 steps), while retaining physically plausible dynamics. The SFNO has important implications for machine learning-based simulation of climate dynamics that could eventually help accelerate our response to climate change. |
https://proceedings.mlr.press/v202/boone23a.html | https://proceedings.mlr.press/v202/boone23a/boone23a.pdf | https://openreview.net/forum?id=vxyaYltes2 | The Regret of Exploration and the Control of Bad Episodes in Reinforcement Learning | https://proceedings.mlr.press/v202/boone23a.html | Victor Boone, Bruno Gaujal | https://proceedings.mlr.press/v202/boone23a.html | ICML 2023 | The first contribution of this paper is the introduction of a new performance measure of an RL algorithm that is more discriminating than the regret, which we call the regret of exploration and which measures the asymptotic cost of exploration. The second contribution is a new performance test (PT) to end episodes in optimistic RL algorithms. This test is based on the performance of the current policy with respect to the best policy over the current confidence set. This is in contrast with all existing RL algorithms whose episode lengths are only based on the number of visits to the states. This modification does not harm the regret and brings an additional property. We show that while all current episodic RL algorithms have a linear regret of exploration, our method has a $O(\log{T})$ regret of exploration for non-degenerate deterministic MDPs. |
https://proceedings.mlr.press/v202/boopathy23a.html | https://proceedings.mlr.press/v202/boopathy23a/boopathy23a.pdf | https://openreview.net/forum?id=jQjteeywiR | Model-agnostic Measure of Generalization Difficulty | https://proceedings.mlr.press/v202/boopathy23a.html | Akhilan Boopathy, Kevin Liu, Jaedong Hwang, Shu Ge, Asaad Mohammedsaleh, Ila R Fiete | https://proceedings.mlr.press/v202/boopathy23a.html | ICML 2023 | The measure of a machine learning algorithm is the difficulty of the tasks it can perform, and sufficiently difficult tasks are critical drivers of strong machine learning models. However, quantifying the generalization difficulty of machine learning benchmarks has remained challenging. We propose what is to our knowledge the first model-agnostic measure of the inherent generalization difficulty of tasks. Our inductive bias complexity measure quantifies the total information required to generalize well on a task minus the information provided by the data. It does so by measuring the fractional volume occupied by hypotheses that generalize on a task given that they fit the training data. It scales exponentially with the intrinsic dimensionality of the space over which the model must generalize but only polynomially in resolution per dimension, showing that tasks which require generalizing over many dimensions are drastically more difficult than tasks involving more detail in fewer dimensions. Our measure can be applied to compute and compare supervised learning, reinforcement learning and meta-learning generalization difficulties against each other. We show that applied empirically, it formally quantifies intuitively expected trends, e.g. that in terms of required inductive bias, MNIST $<$ CIFAR10 $<$ Imagenet and fully observable Markov decision processes (MDPs) $<$ partially observable MDPs. Further, we show that classification of complex images $<$ few-shot meta-learning with simple images. Our measure provides a quantitative metric to guide the construction of more complex tasks requiring greater inductive bias, and thereby encourages the development of more sophisticated architectures and learning algorithms with more powerful generalization capabilities. |
https://proceedings.mlr.press/v202/bouabid23a.html | https://proceedings.mlr.press/v202/bouabid23a/bouabid23a.pdf | https://openreview.net/forum?id=Q3Rmfuj4vf | Returning The Favour: When Regression Benefits From Probabilistic Causal Knowledge | https://proceedings.mlr.press/v202/bouabid23a.html | Shahine Bouabid, Jake Fawkes, Dino Sejdinovic | https://proceedings.mlr.press/v202/bouabid23a.html | ICML 2023 | A directed acyclic graph (DAG) provides valuable prior knowledge that is often discarded in regression tasks in machine learning. We show that the independences arising from the presence of collider structures in DAGs provide meaningful inductive biases, which constrain the regression hypothesis space and improve predictive performance. We introduce collider regression, a framework to incorporate probabilistic causal knowledge from a collider in a regression problem. When the hypothesis space is a reproducing kernel Hilbert space, we prove a strictly positive generalisation benefit under mild assumptions and provide closed-form estimators of the empirical risk minimiser. Experiments on synthetic and climate model data demonstrate performance gains of the proposed methodology. |
https://proceedings.mlr.press/v202/boudiaf23a.html | https://proceedings.mlr.press/v202/boudiaf23a/boudiaf23a.pdf | https://openreview.net/forum?id=Yh9sFZQk7Y | In Search for a Generalizable Method for Source Free Domain Adaptation | https://proceedings.mlr.press/v202/boudiaf23a.html | Malik Boudiaf, Tom Denton, Bart Van Merrienboer, Vincent Dumoulin, Eleni Triantafillou | https://proceedings.mlr.press/v202/boudiaf23a.html | ICML 2023 | Source-free domain adaptation (SFDA) is compelling because it allows adapting an off-the-shelf model to a new domain using only unlabelled data. In this work, we apply existing SFDA techniques to a challenging set of naturally-occurring distribution shifts in bioacoustics, which are very different from the ones commonly studied in computer vision. We find existing methods perform differently relative to each other than observed in vision benchmarks, and sometimes perform worse than no adaptation at all. We propose a new simple method which outperforms the existing methods on our new shifts while exhibiting strong performance on a range of vision datasets. Our findings suggest that existing SFDA methods are not as generalizable as previously thought and that considering diverse modalities can be a useful avenue for designing more robust models. |
https://proceedings.mlr.press/v202/bouland23a.html | https://proceedings.mlr.press/v202/bouland23a/bouland23a.pdf | https://openreview.net/forum?id=aUkyV0lA2h | Quantum Speedups for Zero-Sum Games via Improved Dynamic Gibbs Sampling | https://proceedings.mlr.press/v202/bouland23a.html | Adam Bouland, Yosheb M Getachew, Yujia Jin, Aaron Sidford, Kevin Tian | https://proceedings.mlr.press/v202/bouland23a.html | ICML 2023 | We give a quantum algorithm for computing an $\epsilon$-approximate Nash equilibrium of a zero-sum game in an $m \times n$ payoff matrix with bounded entries. Given a standard quantum oracle for accessing the payoff matrix, our algorithm runs in time $\widetilde{O}(\sqrt{m + n}\cdot \epsilon^{-2.5} + \epsilon^{-3})$ and outputs a classical representation of the $\epsilon$-approximate Nash equilibrium. This improves upon the best prior quantum runtime of $\widetilde{O}(\sqrt{m + n} \cdot \epsilon^{-3})$ obtained by [van Apeldoorn, Gilyen ’19] and the classical $\widetilde{O}((m + n) \cdot \epsilon^{-2})$ runtime due to [Grigoriadis, Khachiyan ’95] whenever $\epsilon = \Omega((m + n)^{-1})$. We obtain this result by designing new quantum data structures for efficiently sampling from a slowly-changing Gibbs distribution. |
https://proceedings.mlr.press/v202/boutin23a.html | https://proceedings.mlr.press/v202/boutin23a/boutin23a.pdf | https://openreview.net/forum?id=Aev7tepsqx | Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines? | https://proceedings.mlr.press/v202/boutin23a.html | Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre | https://proceedings.mlr.press/v202/boutin23a.html | ICML 2023 | An important milestone for AI is the development of algorithms that can produce drawings that are indistinguishable from those of humans. Here, we adapt the “diversity vs. recognizability” scoring framework from Boutin et al. (2022) and find that one-shot diffusion models have indeed started to close the gap between humans and machines. However, using a finer-grained measure of the originality of individual samples, we show that strengthening the guidance of diffusion models helps improve the humanness of their drawings, but they still fall short of approximating the originality and recognizability of human drawings. Comparing human category diagnostic features, collected through an online psychophysics experiment, against those derived from diffusion models reveals that humans rely on fewer and more localized features. Overall, our study suggests that diffusion models have significantly helped improve the quality of machine-generated drawings; however, a gap between humans and machines remains – in part explainable by discrepancies in visual strategies. |
https://proceedings.mlr.press/v202/bowling23a.html | https://proceedings.mlr.press/v202/bowling23a/bowling23a.pdf | https://openreview.net/forum?id=GtoeseQjtY | Settling the Reward Hypothesis | https://proceedings.mlr.press/v202/bowling23a.html | Michael Bowling, John D Martin, David Abel, Will Dabney | https://proceedings.mlr.press/v202/bowling23a.html | ICML 2023 | The reward hypothesis posits that, "all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)." We aim to fully settle this hypothesis. This will not conclude with a simple affirmation or refutation, but rather specify completely the implicit requirements on goals and purposes under which the hypothesis holds. |
https://proceedings.mlr.press/v202/brack23a.html | https://proceedings.mlr.press/v202/brack23a/brack23a.pdf | https://openreview.net/forum?id=5e5ozhz2jF | ILLUME: Rationalizing Vision-Language Models through Human Interactions | https://proceedings.mlr.press/v202/brack23a.html | Manuel Brack, Patrick Schramowski, Björn Deiseroth, Kristian Kersting | https://proceedings.mlr.press/v202/brack23a.html | ICML 2023 | Bootstrapping from pre-trained language models has been proven to be an efficient approach for building vision-language models (VLM) for tasks such as image captioning or visual question answering. However, outputs of these models rarely align with a user’s rationales for specific answers. In order to improve this alignment and reinforce commonsense reasons, we propose a tuning paradigm based on human interactions with machine-generated data. Our ILLUME executes the following loop: Given an image-question-answer prompt, the VLM samples multiple candidate rationales, and a human critic provides feedback via preference selection, used for fine-tuning. This loop increases the training data and gradually carves out the VLM’s rationalization capabilities that are aligned with human intent. Our exhaustive experiments demonstrate that ILLUME is competitive with standard supervised finetuning while using significantly fewer training data and only requiring minimal feedback. |
https://proceedings.mlr.press/v202/brady23a.html | https://proceedings.mlr.press/v202/brady23a/brady23a.pdf | https://openreview.net/forum?id=mGUJMqjDwE | Provably Learning Object-Centric Representations | https://proceedings.mlr.press/v202/brady23a.html | Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius Von Kügelgen, Wieland Brendel | https://proceedings.mlr.press/v202/brady23a.html | ICML 2023 | Learning structured representations of the visual world in terms of objects promises to significantly improve the generalization abilities of current machine learning models. While recent efforts to this end have shown promising empirical progress, a theoretical account of when unsupervised object-centric representation learning is possible is still lacking. Consequently, understanding the reasons for the success of existing object-centric methods as well as designing new theoretically grounded methods remains challenging. In the present work, we analyze when object-centric representations can provably be learned without supervision. To this end, we first introduce two assumptions on the generative process for scenes comprised of several objects, which we call compositionality and irreducibility. Under this generative process, we prove that the ground-truth object representations can be identified by an invertible and compositional inference model, even in the presence of dependencies between objects. We empirically validate our results through experiments on synthetic data. Finally, we provide evidence that our theory holds predictive power for existing object-centric models by showing a close correspondence between models’ compositionality and invertibility and their empirical identifiability. |
https://proceedings.mlr.press/v202/bravo-hermsdorff23a.html | https://proceedings.mlr.press/v202/bravo-hermsdorff23a/bravo-hermsdorff23a.pdf | https://openreview.net/forum?id=Z0yBZYQtIA | Quantifying Human Priors over Social and Navigation Networks | https://proceedings.mlr.press/v202/bravo-hermsdorff23a.html | Gecia Bravo-Hermsdorff | https://proceedings.mlr.press/v202/bravo-hermsdorff23a.html | ICML 2023 | Human knowledge is largely implicit and relational — do we have a friend in common? can I walk from here to there? In this work, we leverage the combinatorial structure of graphs to quantify human priors over such relational data. Our experiments focus on two domains that have been continuously relevant over evolutionary timescales: social interaction and spatial navigation. We find that some features of the inferred priors are remarkably consistent, such as the tendency for sparsity as a function of graph size. Other features are domain-specific, such as the propensity for triadic closure in social interactions. More broadly, our work demonstrates how nonclassical statistical analysis of indirect behavioral experiments can be used to efficiently model latent biases in the data. |
https://proceedings.mlr.press/v202/brechet23a.html | https://proceedings.mlr.press/v202/brechet23a/brechet23a.pdf | https://openreview.net/forum?id=S9kFcPHqHP | Critical Points and Convergence Analysis of Generative Deep Linear Networks Trained with Bures-Wasserstein Loss | https://proceedings.mlr.press/v202/brechet23a.html | Pierre Bréchet, Katerina Papagiannouli, Jing An, Guido Montufar | https://proceedings.mlr.press/v202/brechet23a.html | ICML 2023 | We consider a deep matrix factorization model of covariance matrices trained with the Bures-Wasserstein distance. While recent works have made advances in the study of the optimization problem for overparametrized low-rank matrix approximation, much emphasis has been placed on discriminative settings and the square loss. In contrast, our model considers another type of loss and connects with the generative setting. We characterize the critical points and minimizers of the Bures-Wasserstein distance over the space of rank-bounded matrices. The Hessian of this loss at low-rank matrices can theoretically blow up, which creates challenges to analyze convergence of gradient optimization methods. We establish convergence results for gradient flow using a smooth perturbative version of the loss as well as convergence results for finite step size gradient descent under certain assumptions on the initial weights. |
https://proceedings.mlr.press/v202/bricken23a.html | https://proceedings.mlr.press/v202/bricken23a/bricken23a.pdf | https://openreview.net/forum?id=cxYaBAXVKg | Emergence of Sparse Representations from Noise | https://proceedings.mlr.press/v202/bricken23a.html | Trenton Bricken, Rylan Schaeffer, Bruno Olshausen, Gabriel Kreiman | https://proceedings.mlr.press/v202/bricken23a.html | ICML 2023 | A hallmark of biological neural networks, which distinguishes them from their artificial counterparts, is the high degree of sparsity in their activations. This discrepancy raises three questions our work helps to answer: (i) Why are biological networks so sparse? (ii) What are the benefits of this sparsity? (iii) How can these benefits be utilized by deep learning models? Our answers to all of these questions center around training networks to handle random noise. Surprisingly, we discover that noisy training introduces three implicit loss terms that result in sparsely firing neurons specializing to high variance features of the dataset. When trained to reconstruct noisy-CIFAR10, neurons learn biological receptive fields. More broadly, noisy training presents a new approach to potentially increase model interpretability with additional benefits to robustness and computational efficiency. |
https://proceedings.mlr.press/v202/bu23a.html | https://proceedings.mlr.press/v202/bu23a/bu23a.pdf | https://openreview.net/forum?id=31CAQtoT3w | Differentially Private Optimization on Large Model at Small Cost | https://proceedings.mlr.press/v202/bu23a.html | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis | https://proceedings.mlr.press/v202/bu23a.html | ICML 2023 | Differentially private (DP) optimization is the standard paradigm to learn large neural networks that are accurate and privacy-preserving. The computational cost for DP deep learning, however, is notoriously heavy due to the per-sample gradient clipping. Existing DP implementations are 2$\sim$1000$\times$ more costly in time and space complexity than the standard (non-private) training. In this work, we develop a novel Book-Keeping (BK) technique that implements existing DP optimizers (thus achieving the same accuracy), with a substantial improvement on the computational cost. Specifically, BK enables DP training on large models and high dimensional data to be roughly as fast and memory-saving as the standard training, whereas previous DP algorithms can be inefficient or incapable of training due to memory error. The computational advantage of BK is supported by the complexity analysis as well as extensive experiments on vision and language tasks. Our implementation achieves state-of-the-art (SOTA) accuracy with very small extra cost: on GPT2 and at almost the same memory cost ($<$1% overhead), BK has 1.03$\times$ the time complexity of the standard training (0.83$\times$ training speed in practice), and 0.61$\times$ the time complexity of the most efficient DP implementation (1.36$\times$ training speed in practice). We open-source the codebase for the BK algorithm at https://github.com/awslabs/fast-differential-privacy. |
https://proceedings.mlr.press/v202/bukharin23a.html | https://proceedings.mlr.press/v202/bukharin23a/bukharin23a.pdf | https://openreview.net/forum?id=HODGKcJ3ul | Machine Learning Force Fields with Data Cost Aware Training | https://proceedings.mlr.press/v202/bukharin23a.html | Alexander Bukharin, Tianyi Liu, Shengjie Wang, Simiao Zuo, Weihao Gao, Wen Yan, Tuo Zhao | https://proceedings.mlr.press/v202/bukharin23a.html | ICML 2023 | Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation, which finds widespread applications in chemistry and biomedical research. Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels generated by expensive quantum mechanical algorithms, which may scale as $O(n^3)$ to $O(n^7)$, with $n$ proportional to the number of basis functions. To address this issue, we propose a multi-stage computational framework – ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data. The motivation behind ASTEROID is that inaccurate data, though incurring large bias, can help capture the sophisticated structures of the underlying force field. Therefore, we first train a MLFF model on a large amount of inaccurate training data, employing a bias-aware loss function to prevent the model from overfitting the potential bias of this data. We then fine-tune the obtained model using a small amount of accurate training data, which preserves the knowledge learned from the inaccurate training data while significantly improving the model’s accuracy. Moreover, we propose a variant of ASTEROID based on score matching for the setting where the inaccurate training data are unlabeled. Extensive experiments on MD datasets and downstream tasks validate the efficacy of ASTEROID. Our code and data are available at https://github.com/abukharin3/asteroid. |
https://proceedings.mlr.press/v202/busa-fekete23a.html | https://proceedings.mlr.press/v202/busa-fekete23a/busa-fekete23a.pdf | https://openreview.net/forum?id=K1sJiHvy02 | Label differential privacy and private training data release | https://proceedings.mlr.press/v202/busa-fekete23a.html | Robert Istvan Busa-Fekete, Andres Munoz Medina, Umar Syed, Sergei Vassilvitskii | https://proceedings.mlr.press/v202/busa-fekete23a.html | ICML 2023 | We study differentially private mechanisms for sharing training data in machine learning settings. Our goal is to enable learning of an accurate predictive model while protecting the privacy of each user’s label. Previous work established privacy guarantees that assumed the features are public and given exogenously, a setting known as label differential privacy. In some scenarios, this can be a strong assumption that removes the interplay between features and labels from the privacy analysis. We relax this approach and instead assume the features are drawn from a distribution that depends on the private labels. We first show that simply adding noise to the label, as in previous work, can lead to an arbitrarily weak privacy guarantee, and also present methods for estimating this privacy loss from data. We then present a new mechanism that replaces some training examples with synthetically generated data, and show that our mechanism has a much better privacy-utility tradeoff if the synthetic data is ‘realistic’, in a certain quantifiable sense. Finally, we empirically validate our theoretical analysis. |
https://proceedings.mlr.press/v202/cabannes23a.html | https://proceedings.mlr.press/v202/cabannes23a/cabannes23a.pdf | https://openreview.net/forum?id=d2aohFmZoB | The SSL Interplay: Augmentations, Inductive Bias, and Generalization | https://proceedings.mlr.press/v202/cabannes23a.html | Vivien Cabannes, Bobak Kiani, Randall Balestriero, Yann Lecun, Alberto Bietti | https://proceedings.mlr.press/v202/cabannes23a.html | ICML 2023 | Self-supervised learning (SSL) has emerged as a powerful framework to learn representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm, and its effect on the resulting performance in downstream tasks. We study such an interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in kernel regimes, and highlight several insights for SSL practitioners that arise from our theory. |
https://proceedings.mlr.press/v202/cacciamani23a.html | https://proceedings.mlr.press/v202/cacciamani23a/cacciamani23a.pdf | https://openreview.net/forum?id=egNzERK8s3 | Online Mechanism Design for Information Acquisition | https://proceedings.mlr.press/v202/cacciamani23a.html | Federico Cacciamani, Matteo Castiglioni, Nicola Gatti | https://proceedings.mlr.press/v202/cacciamani23a.html | ICML 2023 | We study the problem of designing mechanisms for information acquisition scenarios. This setting models strategic interactions between an uninformed receiver and a set of informed senders. In our model the senders receive information about the underlying state of nature and communicate their observation (either truthfully or not) to the receiver, which, based on this information, selects an action. Our goal is to design mechanisms maximizing the receiver’s utility while incentivizing the senders to report truthfully their information. First, we provide an algorithm that efficiently computes an optimal incentive compatible (IC) mechanism. Then, we focus on the online problem in which the receiver sequentially interacts in an unknown game, with the objective of minimizing the cumulative regret w.r.t. the optimal IC mechanism, and the cumulative violation of the incentive compatibility constraints. We investigate two different online scenarios, i.e., the full and bandit feedback settings. For the full feedback problem, we propose an algorithm that guarantees $\tilde{O}(\sqrt{T})$ regret and violation, while for the bandit feedback setting we present an algorithm that attains $\tilde{O}(T^{\alpha})$ regret and $\tilde{O}(T^{1-\alpha/2})$ violation for any $\alpha \in [1/2, 1]$. Finally, we complement our results providing a tight lower bound. |
https://proceedings.mlr.press/v202/caggiano23a.html | https://proceedings.mlr.press/v202/caggiano23a/caggiano23a.pdf | https://openreview.net/forum?id=iYBTiYzN0A | MyoDex: A Generalizable Prior for Dexterous Manipulation | https://proceedings.mlr.press/v202/caggiano23a.html | Vittorio Caggiano, Sudeep Dasari, Vikash Kumar | https://proceedings.mlr.press/v202/caggiano23a.html | ICML 2023 | Human dexterity is a hallmark of motor control behaviors. Our hands can rapidly synthesize new behaviors despite the complexity (multi-articular and multi-joints, with 23 joints controlled by more than 40 muscles) of musculoskeletal control. In this work, we take inspiration from how human dexterity builds on a diversity of prior experiences, instead of being acquired through a single task. Motivated by this observation, we set out to develop agents that can build upon previous experience to quickly acquire new (previously unattainable) behaviors. Specifically, our approach leverages multi-task learning to implicitly capture task-agnostic behavioral priors (MyoDex) for human-like dexterity, using a physiologically realistic human hand model – MyoHand. We demonstrate MyoDex’s effectiveness in few-shot generalization as well as positive transfer to a large repertoire of unseen dexterous manipulation tasks. MyoDex can solve approximately 3x more tasks and it can accelerate the achievement of solutions by about 4x in comparison to a distillation baseline. While prior work has synthesized single musculoskeletal control behaviors, MyoDex is the first generalizable manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors. |
https://proceedings.mlr.press/v202/cagnetta23a.html | https://proceedings.mlr.press/v202/cagnetta23a/cagnetta23a.pdf | https://openreview.net/forum?id=Wz7a5MbBQa | What Can Be Learnt With Wide Convolutional Neural Networks? | https://proceedings.mlr.press/v202/cagnetta23a.html | Francesco Cagnetta, Alessandro Favero, Matthieu Wyart | https://proceedings.mlr.press/v202/cagnetta23a.html | ICML 2023 | Understanding how convolutional neural networks (CNNs) can efficiently learn high-dimensional functions remains a fundamental challenge. A popular belief is that these models harness the local and hierarchical structure of natural data such as images. Yet, we lack a quantitative understanding of how such structure affects performance, e.g., the rate of decay of the generalisation error with the number of training samples. In this paper, we study infinitely-wide deep CNNs in the kernel regime. First, we show that the spectrum of the corresponding kernel inherits the hierarchical structure of the network, and we characterise its asymptotics. Then, we use this result together with generalisation bounds to prove that deep CNNs adapt to the spatial scale of the target function. In particular, we find that if the target function depends on low-dimensional subsets of adjacent input variables, then the decay of the error is controlled by the effective dimensionality of these subsets. Conversely, if the target function depends on the full set of input variables, then the error decay is controlled by the input dimension. We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN with randomly-initialised parameters. Interestingly, we find that, despite their hierarchical structure, the functions generated by infinitely-wide deep CNNs are too rich to be efficiently learnable in high dimension. |
https://proceedings.mlr.press/v202/cai23a.html | https://proceedings.mlr.press/v202/cai23a/cai23a.pdf | https://openreview.net/forum?id=O1Hn6YF5IF | Causal Discovery with Latent Confounders Based on Higher-Order Cumulants | https://proceedings.mlr.press/v202/cai23a.html | Ruichu Cai, Zhiyi Huang, Wei Chen, Zhifeng Hao, Kun Zhang | https://proceedings.mlr.press/v202/cai23a.html | ICML 2023 | Causal discovery with latent confounders is an important but challenging task in many scientific areas. Despite the success of some overcomplete independent component analysis (OICA) based methods in certain domains, they are computationally expensive and can easily get stuck into local optima. We notice that interestingly, by making use of higher-order cumulants, there exists a closed-form solution to OICA in specific cases, e.g., when the mixing procedure follows the One-Latent-Component structure. In light of the power of the closed-form solution to OICA corresponding to the One-Latent-Component structure, we formulate a way to estimate the mixing matrix using the higher-order cumulants, and further propose the testable One-Latent-Component condition to identify the latent variables and determine causal orders. By iteratively removing the identified shared latent components, we successfully extend the results on the One-Latent-Component structure to the Multi-Latent-Component structure and finally provide a practical and asymptotically correct algorithm to learn the causal structure with latent variables. Experimental results illustrate the asymptotic correctness and effectiveness of the proposed method. |
https://proceedings.mlr.press/v202/cai23b.html | https://proceedings.mlr.press/v202/cai23b/cai23b.pdf | https://openreview.net/forum?id=1EuHYKFPgA | On the Connection Between MPNN and Graph Transformer | https://proceedings.mlr.press/v202/cai23b.html | Chen Cai, Truong Son Hy, Rose Yu, Yusu Wang | https://proceedings.mlr.press/v202/cai23b.html | ICML 2023 | Graph Transformer (GT) recently has emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Network (MPNN) on multiple benchmarks. Previous work shows that with proper position embedding, GT can approximate MPNN arbitrarily well, implying that GT is at least as powerful as MPNN. In this paper, we study the inverse connection and show that MPNN with virtual node (VN), a commonly used heuristic with little theoretical understanding, is powerful enough to arbitrarily approximate the self-attention layer of GT. In particular, we first show that if we consider one type of linear transformer, the so-called Performer/Linear Transformer, then MPNN + VN with only $\mathcal{O}(1)$ depth and $\mathcal{O}(1)$ width can approximate a self-attention layer in Performer/Linear Transformer. Next, via a connection between MPNN + VN and DeepSets, we prove the MPNN + VN with $\mathcal{O}(n^d)$ width and $\mathcal{O}(1)$ depth can approximate the self-attention layer arbitrarily well, where $d$ is the input feature dimension. Lastly, under some assumptions, we provide an explicit construction of MPNN + VN with $\mathcal{O}(1)$ width and $\mathcal{O}(n)$ depth approximating the self-attention layer in GT arbitrarily well. On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly strong baseline, outperforming GT on the recently proposed Long Range Graph Benchmark (LRGB) dataset, 2) our MPNN + VN improves over early implementation on a wide range of OGB datasets and 3) MPNN + VN outperforms Linear Transformer and MPNN on the climate modeling task. |
https://proceedings.mlr.press/v202/cai23c.html | https://proceedings.mlr.press/v202/cai23c/cai23c.pdf | https://openreview.net/forum?id=SQtp4uUByd | Ske2Grid: Skeleton-to-Grid Representation Learning for Action Recognition | https://proceedings.mlr.press/v202/cai23c.html | Dongqi Cai, Yangyuxuan Kang, Anbang Yao, Yurong Chen | https://proceedings.mlr.press/v202/cai23c.html | ICML 2023 | This paper presents Ske2Grid, a new representation learning framework for improved skeleton-based action recognition. In Ske2Grid, we define a regular convolution operation upon a novel grid representation of human skeleton, which is a compact image-like grid patch constructed and learned through three novel designs. Specifically, we propose a graph-node index transform (GIT) to construct a regular grid patch through assigning the nodes in the skeleton graph one by one to the desired grid cells. To ensure that GIT is a bijection and enrich the expressiveness of the grid representation, an up-sampling transform (UPT) is learned to interpolate the skeleton graph nodes for filling the grid patch to the full. To resolve the problem when the one-step UPT is aggressive and further exploit the representation capability of the grid patch with increasing spatial size, a progressive learning strategy (PLS) is proposed which decouples the UPT into multiple steps and aligns them to multiple paired GITs through a compact cascaded design learned progressively. We construct networks upon prevailing graph convolution networks and conduct experiments on six mainstream skeleton-based action recognition datasets. Experiments show that our Ske2Grid significantly outperforms existing GCN-based solutions under different benchmark settings, without bells and whistles. Code and models are available at https://github.com/OSVAI/Ske2Grid. |
https://proceedings.mlr.press/v202/cai23d.html | https://proceedings.mlr.press/v202/cai23d/cai23d.pdf | https://openreview.net/forum?id=uI8l8AENlj | Extrapolated Random Tree for Regression | https://proceedings.mlr.press/v202/cai23d.html | Yuchao Cai, Yuheng Ma, Yiwei Dong, Hanfang Yang | https://proceedings.mlr.press/v202/cai23d.html | ICML 2023 | In this paper, we propose a novel tree-based algorithm named Extrapolated Random Tree for Regression (ERTR) that adapts to arbitrary smoothness of the regression function while maintaining the interpretability of the tree. We first put forward the homothetic random tree for regression (HRTR) that converges to the target function as the homothetic ratio approaches zero. Then ERTR uses a linear regression model to extrapolate HRTR estimations with different ratios to the ratio zero. From the theoretical perspective, we for the first time establish the optimal convergence rates for ERTR when the target function resides in the general Hölder space $C^{k,\alpha}$ for $k\in \mathbb{N}$, whereas the lower bound of the convergence rate of the random tree for regression (RTR) is strictly slower than ERTR in the space $C^{k,\alpha}$ for $k\geq 1$. This shows that ERTR outperforms RTR for the target function with high-order smoothness due to the extrapolation. In the experiments, we compare ERTR with state-of-the-art tree algorithms on real datasets to show the superior performance of our model. Moreover, promising improvements are brought by using the extrapolated trees as base learners in the extension of ERTR to ensemble methods. |
https://proceedings.mlr.press/v202/cai23e.html | https://proceedings.mlr.press/v202/cai23e/cai23e.pdf | https://openreview.net/forum?id=8hkpDHHP2O | Cyclic Block Coordinate Descent With Variance Reduction for Composite Nonconvex Optimization | https://proceedings.mlr.press/v202/cai23e.html | Xufeng Cai, Chaobing Song, Stephen Wright, Jelena Diakonikolas | https://proceedings.mlr.press/v202/cai23e.html | ICML 2023 | Nonconvex optimization is central in solving many machine learning problems, in which block-wise structure is commonly encountered. In this work, we propose cyclic block coordinate methods for nonconvex optimization problems with non-asymptotic gradient norm guarantees. Our convergence analysis is based on a gradient Lipschitz condition with respect to a Mahalanobis norm, inspired by a recent progress on cyclic block coordinate methods. In deterministic settings, our convergence guarantee matches the guarantee of (full-gradient) gradient descent, but with the gradient Lipschitz constant being defined w.r.t. a Mahalanobis norm. In stochastic settings, we use recursive variance reduction to decrease the per-iteration cost and match the arithmetic operation complexity of current optimal stochastic full-gradient methods, with a unified analysis for both finite-sum and infinite-sum cases. We prove a faster linear convergence result when a Polyak-Łojasiewicz (PŁ) condition holds. To our knowledge, this work is the first to provide non-asymptotic convergence guarantees — variance-reduced or not — for a cyclic block coordinate method in general composite (smooth + nonsmooth) nonconvex settings. Our experimental results demonstrate the efficacy of the proposed cyclic scheme in training deep neural nets. |
https://proceedings.mlr.press/v202/cai23f.html | https://proceedings.mlr.press/v202/cai23f/cai23f.pdf | https://openreview.net/forum?id=vkWwnJjcC6 | Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? | https://proceedings.mlr.press/v202/cai23f.html | Ruisi Cai, Zhenyu Zhang, Zhangyang Wang | https://proceedings.mlr.press/v202/cai23f.html | ICML 2023 | Given a robust model trained to be resilient to one or multiple types of distribution shifts (e.g., natural image corruptions), how is that "robustness" encoded in the model weights, and how easily can it be disentangled and/or "zero-shot" transferred to some other models? This paper empirically suggests a surprisingly simple answer: linearly - by straightforward model weight arithmetic! We start by drawing several key observations: (i) assuming that we train the same model architecture on both a clean dataset and its corrupted version, a comparison between the two resultant models shows their weights to mostly differ in shallow layers; (ii) the weight difference after projection, which we call "Robust Weight Signature" (RWS), appears to be discriminative and indicative of different corruption types; (iii) perhaps most strikingly, for the same corruption type, the RWSs obtained by one model architecture are highly consistent and transferable across different datasets. Based on those RWS observations, we propose a minimalistic model robustness "patching" framework that carries a model trained on clean data together with its pre-extracted RWSs. In this way, injecting certain robustness to the model is reduced to directly adding the corresponding RWS to its weight. We experimentally verify our proposed framework to be remarkably (1) lightweight: since RWSs concentrate on the shallowest few layers and we further show they can be painlessly quantized, storing an RWS is up to 13x more compact than storing the full weight copy; (2) in-situ adjustable: RWSs can be appended as needed and later taken off to restore the intact clean model. We further demonstrate one can linearly re-scale the RWS to control the patched robustness strength; (3) composable: multiple RWSs can be added simultaneously to patch more comprehensive robustness at once; and (4) transferable: even when the clean model backbone is continually adapted or updated, RWSs remain as effective patches due to their outstanding cross-dataset transferability. |
https://proceedings.mlr.press/v202/cai23g.html | https://proceedings.mlr.press/v202/cai23g/cai23g.pdf | https://openreview.net/forum?id=Gd6iouUq1g | Doubly Optimal No-Regret Learning in Monotone Games | https://proceedings.mlr.press/v202/cai23g.html | Yang Cai, Weiqiang Zheng | https://proceedings.mlr.press/v202/cai23g.html | ICML 2023 | We consider online learning in multi-player smooth monotone games. Existing algorithms have limitations such as (1) being only applicable to strongly monotone games; (2) lacking the no-regret guarantee; (3) having only asymptotic or slow $\mathcal{O}(\frac{1}{\sqrt{T}})$ last-iterate convergence rate to a Nash equilibrium. While the $\mathcal{O}(\frac{1}{\sqrt{T}})$ rate is tight for a large class of algorithms including the well-studied extragradient algorithm and optimistic gradient algorithm, it is not optimal for all gradient-based algorithms. We propose the accelerated optimistic gradient (AOG) algorithm, the first doubly optimal no-regret learning algorithm for smooth monotone games. Namely, our algorithm achieves both (i) the optimal $\mathcal{O}(\sqrt{T})$ regret in the adversarial setting under smooth and convex loss functions and (ii) the optimal $\mathcal{O}(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium in multi-player smooth monotone games. As a byproduct of the accelerated last-iterate convergence rate, we further show that each player suffers only an $\mathcal{O}(\log T)$ individual worst-case dynamic regret, providing an exponential improvement over the previous state-of-the-art $\mathcal{O}(\sqrt{T})$ bound. |
https://proceedings.mlr.press/v202/caliskan23a.html | https://proceedings.mlr.press/v202/caliskan23a/caliskan23a.pdf | https://openreview.net/forum?id=LVluQl5lAk | Multi-Agent Learning from Learners | https://proceedings.mlr.press/v202/caliskan23a.html | Mine Melodi Caliskan, Francesco Chini, Setareh Maghsudi | https://proceedings.mlr.press/v202/caliskan23a.html | ICML 2023 | A large body of the "Inverse Reinforcement Learning" (IRL) literature focuses on recovering the reward function from a set of demonstrations of an expert agent who acts optimally or noisily optimally. Nevertheless, some recent works move away from the optimality assumption to study the "Learning from a Learner (LfL)" problem, where the challenge is inferring the reward function of a learning agent from a sequence of demonstrations produced by progressively improving policies. In this work, we take one of the initial steps in addressing the multi-agent version of this problem and propose a new algorithm, MA-LfL (Multiagent Learning from a Learner). Unlike the state-of-the-art literature, which recovers the reward functions from trajectories produced by agents in some equilibrium, we study the problem of inferring the reward functions of interacting agents in a general sum stochastic game without assuming any equilibrium state. The MA-LfL algorithm is rigorously built on a theoretical result that ensures its validity in the case of agents learning according to a multi-agent soft policy iteration scheme. We empirically test MA-LfL and we observe high positive correlation between the recovered reward functions and the ground truth. |
https://proceedings.mlr.press/v202/cao23a.html | https://proceedings.mlr.press/v202/cao23a/cao23a.pdf | https://openreview.net/forum?id=2Mbo7IEtZW | Efficient Learning of Mesh-Based Physical Simulation with Bi-Stride Multi-Scale Graph Neural Network | https://proceedings.mlr.press/v202/cao23a.html | Yadi Cao, Menglei Chai, Minchen Li, Chenfanfu Jiang | https://proceedings.mlr.press/v202/cao23a.html | ICML 2023 | Learning the long-range interactions on large-scale mesh-based physical systems with flat Graph Neural Networks (GNNs) and stacking Message Passings (MPs) is challenging due to the scaling complexity w.r.t. the number of nodes and over-smoothing. Therefore, there has been growing interest in the community to introduce multi-scale structures to GNNs for physics simulation. However, current state-of-the-art methods are limited by their reliance on the labor-heavy drawing of coarser meshes or building coarser levels based on spatial proximity, which can introduce wrong edges across geometry boundaries. Inspired by the bipartite graph determination, we propose a novel pooling strategy, bi-stride, to tackle the aforementioned limitations. Bi-stride pools nodes on every other frontier of the Breadth-First-Search (BFS), without the need for the manual drawing of coarser meshes, and avoids wrong edges introduced by spatial proximity. Additionally, it enables a reduced number of MP times on each level and the non-parametrized pooling and unpooling by interpolations, similar to convolutional Neural Networks (CNNs), which significantly reduces computational requirements. Experiments show that the proposed framework, BSMS-GNN, significantly outperforms existing methods in terms of both accuracy and computational efficiency in representative physics-based simulation scenarios. |
https://proceedings.mlr.press/v202/cao23b.html | https://proceedings.mlr.press/v202/cao23b/cao23b.pdf | https://openreview.net/forum?id=cg8EDdcIte | Variational Sparse Inverse Cholesky Approximation for Latent Gaussian Processes via Double Kullback-Leibler Minimization | https://proceedings.mlr.press/v202/cao23b.html | Jian Cao, Myeongjong Kang, Felix Jimenez, Huiyan Sang, Florian Tobias Schaefer, Matthias Katzfuss | https://proceedings.mlr.press/v202/cao23b.html | ICML 2023 | To achieve scalable and accurate inference for latent Gaussian processes, we propose a variational approximation based on a family of Gaussian distributions whose covariance matrices have sparse inverse Cholesky (SIC) factors. We combine this variational approximation of the posterior with a similar and efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior. We then focus on a particular SIC ordering and nearest-neighbor-based sparsity pattern resulting in highly accurate prior and posterior approximations. For this setting, our variational approximation can be computed via stochastic gradient descent in polylogarithmic time per iteration. We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate for stationary kernels than alternative approaches such as inducing-point and mean-field approximations at similar computational complexity. |
https://proceedings.mlr.press/v202/cao23c.html | https://proceedings.mlr.press/v202/cao23c/cao23c.pdf | https://openreview.net/forum?id=euCN0Xz5e9 | Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation | https://proceedings.mlr.press/v202/cao23c.html | Shengcao Cao, Mengtian Li, James Hays, Deva Ramanan, Yu-Xiong Wang, Liangyan Gui | https://proceedings.mlr.press/v202/cao23c.html | ICML 2023 | Resource-constrained perception systems such as edge computing and vision-for-robotics require vision models to be both accurate and lightweight in computation and memory usage. While knowledge distillation is a proven strategy to enhance the performance of lightweight classification models, its application to structured outputs like object detection and instance segmentation remains a complicated task, due to the variability in outputs and complex internal network modules involved in the distillation process. In this paper, we propose a simple yet surprisingly effective sequential approach to knowledge distillation that progressively transfers the knowledge of a set of teacher detectors to a given lightweight student. To distill knowledge from a highly accurate but complex teacher model, we construct a sequence of teachers to help the student gradually adapt. Our progressive strategy can be easily combined with existing detection distillation mechanisms to consistently maximize student performance in various settings. To the best of our knowledge, we are the first to successfully distill knowledge from Transformer-based teacher detectors to convolution-based students, and unprecedentedly boost the performance of ResNet-50 based RetinaNet from 36.5% to 42.0% AP and Mask R-CNN from 38.2% to 42.5% AP on the MS COCO benchmark. Code available at https://github.com/Shengcao-Cao/MTPD. |
https://proceedings.mlr.press/v202/cao23d.html | https://proceedings.mlr.press/v202/cao23d/cao23d.pdf | https://openreview.net/forum?id=H01CJWHAmw | One-sided Matrix Completion from Two Observations Per Row | https://proceedings.mlr.press/v202/cao23d.html | Steven Cao, Percy Liang, Gregory Valiant | https://proceedings.mlr.press/v202/cao23d.html | ICML 2023 | Given only a few observed entries from a low-rank matrix $X$, matrix completion is the problem of imputing the missing entries, and it formalizes a wide range of real-world settings that involve estimating missing data. However, when there are too few observed entries to complete the matrix, what other aspects of the underlying matrix can be reliably recovered? We study one such problem setting, that of “one-sided” matrix completion, where our goal is to recover the right singular vectors of $X$, even in the regime where recovering the left singular vectors is impossible, which arises when there are more rows than columns and very few observations. We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$ and show that even with only two observations per row in $X$, we can provably recover $X^TX$ as long as we have at least $\Omega(r^2 d \log d)$ rows, where $r$ is the rank and $d$ is the number of columns. We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing. In these settings, our algorithm substantially outperforms standard matrix completion and a variety of direct factorization methods. |
https://proceedings.mlr.press/v202/cardoso23a.html | https://proceedings.mlr.press/v202/cardoso23a/cardoso23a.pdf | https://openreview.net/forum?id=XTHxTHtlFU | State and parameter learning with PARIS particle Gibbs | https://proceedings.mlr.press/v202/cardoso23a.html | Gabriel Cardoso, Yazid Janati El Idrissi, Sylvain Le Corff, Eric Moulines, Jimmy Olsson | https://proceedings.mlr.press/v202/cardoso23a.html | ICML 2023 | Non-linear state-space models, also known as general hidden Markov models (HMM), are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences. Learning in HMM, either via Maximum Likelihood Estimation (MLE) or Markov Score Climbing (MSC) requires the estimation of the smoothing expectation of some additive functionals. Controlling the bias and the variance of this estimation is crucial to establish the convergence of learning algorithms. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS (Olsson, Westerborn 2017) algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. We then establish, in the learning context, and under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao–Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims. |
https://proceedings.mlr.press/v202/carta23a.html | https://proceedings.mlr.press/v202/carta23a/carta23a.pdf | https://openreview.net/forum?id=feXm8GbxWU | Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning | https://proceedings.mlr.press/v202/carta23a.html | Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, Pierre-Yves Oudeyer | https://proceedings.mlr.press/v202/carta23a.html | ICML 2023 | Recent works successfully leveraged Large Language Models’ (LLM) abilities to capture abstract knowledge about the world’s physics to solve decision-making problems. Yet, the alignment between LLMs’ knowledge and the environment can be wrong and limit functional competence due to lack of grounding. In this paper, we study an approach (named GLAM) to achieve this alignment through functional grounding: we consider an agent using an LLM as a policy that is progressively updated as the agent interacts with the environment, leveraging online Reinforcement Learning to improve its performance to solve goals. Using an interactive textual environment designed to study higher-level forms of functional grounding, and a set of spatial and navigation tasks, we study several scientific questions: 1) Can LLMs boost sample efficiency for online learning of various RL tasks? 2) How can it boost different forms of generalization? 3) What is the impact of online learning? We study these questions by functionally grounding several variants (size, architecture) of FLAN-T5. |
https://proceedings.mlr.press/v202/castanet23a.html | https://proceedings.mlr.press/v202/castanet23a/castanet23a.pdf | https://openreview.net/forum?id=h2AnGFDclp | Stein Variational Goal Generation for adaptive Exploration in Multi-Goal Reinforcement Learning | https://proceedings.mlr.press/v202/castanet23a.html | Nicolas Castanet, Olivier Sigaud, Sylvain Lamprier | https://proceedings.mlr.press/v202/castanet23a.html | ICML 2023 | In multi-goal Reinforcement Learning, an agent can share experience between related training tasks, resulting in better generalization for new tasks at test time. However, when the goal space has discontinuities and the reward is sparse, a majority of goals are difficult to reach. In this context, a curriculum over goals helps agents learn by adapting training tasks to their current capabilities. In this work, we propose Stein Variational Goal Generation (SVGG), which samples goals of intermediate difficulty for the agent, by leveraging a learned predictive model of its goal reaching capabilities. The distribution of goals is modeled with particles that are attracted in areas of appropriate difficulty using Stein Variational Gradient Descent. We show that SVGG outperforms state-of-the-art multi-goal Reinforcement Learning methods in terms of success coverage in hard exploration problems, and demonstrate that it is endowed with a useful recovery property when the environment changes. |
https://proceedings.mlr.press/v202/castellini23a.html | https://proceedings.mlr.press/v202/castellini23a/castellini23a.pdf | https://openreview.net/forum?id=tevbBSzSfK | Scalable Safe Policy Improvement via Monte Carlo Tree Search | https://proceedings.mlr.press/v202/castellini23a.html | Alberto Castellini, Federico Bianchi, Edoardo Zorzi, Thiago D. Simão, Alessandro Farinelli, Matthijs T. J. Spaan | https://proceedings.mlr.press/v202/castellini23a.html | ICML 2023 | Algorithms for safely improving policies are important to deploy reinforcement learning approaches in real-world scenarios. In this work, we propose an algorithm, called MCTS-SPIBB, that computes safe policy improvement online using a Monte Carlo Tree Search based strategy. We theoretically prove that the policy generated by MCTS-SPIBB converges, as the number of simulations grows, to the optimal safely improved policy generated by Safe Policy Improvement with Baseline Bootstrapping (SPIBB), a popular algorithm based on policy iteration. Moreover, our empirical analysis performed on three standard benchmark domains shows that MCTS-SPIBB scales to significantly larger problems than SPIBB because it computes the policy online and locally, i.e., only in the states actually visited by the agent. |
https://proceedings.mlr.press/v202/castiglia23a.html | https://proceedings.mlr.press/v202/castiglia23a/castiglia23a.pdf | https://openreview.net/forum?id=L8iWCxzwl1 | LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning | https://proceedings.mlr.press/v202/castiglia23a.html | Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, Stacy Patterson | https://proceedings.mlr.press/v202/castiglia23a.html | ICML 2023 | We propose LESS-VFL, a communication-efficient feature selection method for distributed systems with vertically partitioned data. We consider a system of a server and several parties with local datasets that share a sample ID space but have different feature sets. The parties wish to collaboratively train a model for a prediction task. As part of the training, the parties wish to remove unimportant features in the system to improve generalization, efficiency, and explainability. In LESS-VFL, after a short pre-training period, the server optimizes its part of the global model to determine the relevant outputs from party models. This information is shared with the parties to then allow local feature selection without communication. We analytically prove that LESS-VFL removes spurious features from model training. We provide extensive empirical evidence that LESS-VFL can achieve high accuracy and remove spurious features at a fraction of the communication cost of other feature selection approaches. |
https://proceedings.mlr.press/v202/catellier23a.html | https://proceedings.mlr.press/v202/catellier23a/catellier23a.pdf | https://openreview.net/forum?id=T4XfQvi0Zo | On the Robustness of Text Vectorizers | https://proceedings.mlr.press/v202/catellier23a.html | Rémi Catellier, Samuel Vaiter, Damien Garreau | https://proceedings.mlr.press/v202/catellier23a.html | ICML 2023 | A fundamental issue in machine learning is the robustness of the model with respect to changes in the input. In natural language processing, models typically contain a first embedding layer, transforming a sequence of tokens into vector representations. While the robustness with respect to changes of continuous inputs is well-understood, the situation is less clear when considering discrete changes, for instance replacing a word by another in an input sentence. Our work formally proves that popular embedding schemes, such as concatenation, TF-IDF, and Paragraph Vector (a.k.a. doc2vec), exhibit robustness in the Hölder or Lipschitz sense with respect to the Hamming distance. We provide quantitative bounds for these schemes and demonstrate how the constants involved are affected by the length of the document. These findings are exemplified through a series of numerical examples. |
https://proceedings.mlr.press/v202/cervino23a.html | https://proceedings.mlr.press/v202/cervino23a/cervino23a.pdf | https://openreview.net/forum?id=4bNGE4WSfJ | Learning Globally Smooth Functions on Manifolds | https://proceedings.mlr.press/v202/cervino23a.html | Juan Cervino, Luiz F. O. Chamon, Benjamin David Haeffele, Rene Vidal, Alejandro Ribeiro | https://proceedings.mlr.press/v202/cervino23a.html | ICML 2023 | Smoothness and low dimensional structures play central roles in improving generalization and stability in learning and statistics. This work combines techniques from semi-infinite constrained learning and manifold regularization to learn representations that are globally smooth on a manifold. To do so, it shows that under typical conditions the problem of learning a Lipschitz continuous function on a manifold is equivalent to a dynamically weighted manifold regularization problem. This observation leads to a practical algorithm based on a weighted Laplacian penalty whose weights are adapted using stochastic gradient techniques. It is shown that under mild conditions, this method estimates the Lipschitz constant of the solution, learning a globally smooth solution as a byproduct. Experiments on real world data illustrate the advantages of the proposed method relative to existing alternatives. Our code is available at https://github.com/JuanCervino/smoothbench. |
https://proceedings.mlr.press/v202/cha23a.html | https://proceedings.mlr.press/v202/cha23a/cha23a.pdf | https://openreview.net/forum?id=3bkRh3ggAE | Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond | https://proceedings.mlr.press/v202/cha23a.html | Jaeyoung Cha, Jaewook Lee, Chulhee Yun | https://proceedings.mlr.press/v202/cha23a.html | ICML 2023 | We study convergence lower bounds of without-replacement stochastic gradient descent (SGD) for solving smooth (strongly-)convex finite-sum minimization problems. Unlike most existing results focusing on final iterate lower bounds in terms of the number of components $n$ and the number of epochs $K$, we seek bounds for arbitrary weighted average iterates that are tight in all factors including the condition number $\kappa$. For SGD with Random Reshuffling, we present lower bounds that have tighter $\kappa$ dependencies than existing bounds. Our results are the first to perfectly close the gap between lower and upper bounds for weighted average iterates in both strongly-convex and convex cases. We also prove weighted average iterate lower bounds for arbitrary permutation-based SGD, which apply to all variants that carefully choose the best permutation. Our bounds improve the existing bounds in factors of $n$ and $\kappa$ and thereby match the upper bounds shown for a recently proposed algorithm called GraB. |
https://proceedings.mlr.press/v202/cha23b.html | https://proceedings.mlr.press/v202/cha23b/cha23b.pdf | https://openreview.net/forum?id=RbKQdVX0Ht | Orthogonality-Enforced Latent Space in Autoencoders: An Approach to Learning Disentangled Representations | https://proceedings.mlr.press/v202/cha23b.html | Jaehoon Cha, Jeyan Thiyagalingam | https://proceedings.mlr.press/v202/cha23b.html | ICML 2023 | Noting the importance of factorizing (or disentangling) the latent space, we propose a novel, non-probabilistic disentangling framework for autoencoders, based on the principles of symmetry transformations that are independent of one another. To the best of our knowledge, this is the first deterministic model that is aiming to achieve disentanglement based on autoencoders using only a reconstruction loss without pairs of images or labels, by explicitly introducing inductive biases into a model architecture through Euler encoding. The proposed model is then compared with a number of state-of-the-art models, relevant to disentanglement, including symmetry-based models and generative models. Our evaluation using six different disentanglement metrics, including the unsupervised disentanglement metric we propose here in this paper, shows that the proposed model can offer better disentanglement, especially when variances of the features are different, where other methods may struggle. We believe that this model opens several opportunities for linear disentangled representation learning based on deterministic autoencoders. |
https://proceedings.mlr.press/v202/chakraborty23a.html | https://proceedings.mlr.press/v202/chakraborty23a/chakraborty23a.pdf | https://openreview.net/forum?id=wbCgv6PdzH | STEERING : Stein Information Directed Exploration for Model-Based Reinforcement Learning | https://proceedings.mlr.press/v202/chakraborty23a.html | Souradip Chakraborty, Amrit Bedi, Alec Koppel, Mengdi Wang, Furong Huang, Dinesh Manocha | https://proceedings.mlr.press/v202/chakraborty23a.html | ICML 2023 | Directed Exploration is a crucial challenge in reinforcement learning (RL), especially when rewards are sparse. Information-directed sampling (IDS), which optimizes the information ratio, seeks to do so by augmenting regret with information gain. However, estimating information gain is computationally intractable or relies on restrictive assumptions which prohibit its use in many practical instances. In this work, we posit an alternative exploration incentive in terms of the integral probability metric (IPM) between a current estimate of the transition model and the unknown optimal, which under suitable conditions, can be computed in closed form with the kernelized Stein discrepancy (KSD). Based on KSD, we develop a novel algorithm STEERING: STEin information dirEcted exploration for model-based Reinforcement LearnING. To enable its derivation, we develop fundamentally new variants of KSD for discrete conditional distributions. We further establish that STEERING achieves sublinear Bayesian regret, improving upon prior learning rates of information-augmented MBRL, IDS included. Experimentally, we show that the proposed algorithm is computationally affordable and outperforms several prior approaches. |
https://proceedings.mlr.press/v202/chakraborty23b.html | https://proceedings.mlr.press/v202/chakraborty23b/chakraborty23b.pdf | https://openreview.net/forum?id=ZI4vN6D9Kk | Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits | https://proceedings.mlr.press/v202/chakraborty23b.html | Sunrit Chakraborty, Saptarshi Roy, Ambuj Tewari | https://proceedings.mlr.press/v202/chakraborty23b.html | ICML 2023 | We consider the stochastic linear contextual bandit problem with high-dimensional features. We analyze the Thompson sampling algorithm using special classes of sparsity-inducing priors (e.g., spike-and-slab) to model the unknown parameter and provide a nearly optimal upper bound on the expected cumulative regret. To the best of our knowledge, this is the first work that provides theoretical guarantees of Thompson sampling in high-dimensional and sparse contextual bandits. For faster computation, we use variational inference instead of Markov Chain Monte Carlo (MCMC) to approximate the posterior distribution. Extensive simulations demonstrate the improved performance of our proposed algorithm over existing ones. |
https://proceedings.mlr.press/v202/chandak23a.html | https://proceedings.mlr.press/v202/chandak23a/chandak23a.pdf | https://openreview.net/forum?id=p9wFuLpp0O | Representations and Exploration for Deep Reinforcement Learning using Singular Value Decomposition | https://proceedings.mlr.press/v202/chandak23a.html | Yash Chandak, Shantanu Thakoor, Zhaohan Daniel Guo, Yunhao Tang, Remi Munos, Will Dabney, Diana L Borsa | https://proceedings.mlr.press/v202/chandak23a.html | ICML 2023 | Representation learning and exploration are among the key challenges for any deep reinforcement learning agent. In this work, we provide a singular value decomposition based method that can be used to obtain representations that preserve the underlying transition structure in the domain. Perhaps interestingly, we show that these representations also capture the relative frequency of state visitations, thereby providing an estimate for pseudo-counts for free. To scale this decomposition method to large-scale domains, we provide an algorithm that never requires building the transition matrix, can make use of deep networks, and also permits mini-batch training. Further, we draw inspiration from predictive state representations and extend our decomposition method to partially observable environments. With experiments on multi-task settings with partially observable domains, we show that the proposed method can not only learn useful representation on DM-Lab-30 environments (that have inputs involving language instructions, pixel images, rewards, among others) but it can also be effective at hard exploration tasks in DM-Hard-8 environments. |
https://proceedings.mlr.press/v202/chang23a.html | https://proceedings.mlr.press/v202/chang23a/chang23a.pdf | https://openreview.net/forum?id=UxQsrlM6mY | Memory-Based Dual Gaussian Processes for Sequential Learning | https://proceedings.mlr.press/v202/chang23a.html | Paul Edmund Chang, Prakhar Verma, S. T. John, Arno Solin, Mohammad Emtiyaz Khan | https://proceedings.mlr.press/v202/chang23a.html | ICML 2023 | Sequential learning with Gaussian processes (GPs) is challenging when access to past data is limited, for example, in continual and active learning. In such cases, errors can accumulate over time due to inaccuracies in the posterior, hyperparameters, and inducing points, making accurate learning challenging. Here, we present a method to keep all such errors in check using the recently proposed dual sparse variational GP. Our method enables accurate inference for generic likelihoods and improves learning by actively building and updating a memory of past data. We demonstrate its effectiveness in several applications involving Bayesian optimization, active learning, and continual learning. |
https://proceedings.mlr.press/v202/chang23b.html | https://proceedings.mlr.press/v202/chang23b/chang23b.pdf | https://openreview.net/forum?id=hi9UssZdHR | Muse: Text-To-Image Generation via Masked Generative Transformers | https://proceedings.mlr.press/v202/chang23b.html | Huiwen Chang, Han Zhang, Jarred Barber, Aaron Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Patrick Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan | https://proceedings.mlr.press/v202/chang23b.html | ICML 2023 | We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse learns to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requires fewer sampling iterations; compared to autoregressive models such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, which translates to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results and videos demonstrating editing are available at https://muse-icml.github.io/ |
https://proceedings.mlr.press/v202/chao23a.html | https://proceedings.mlr.press/v202/chao23a/chao23a.pdf | https://openreview.net/forum?id=I6eJvWRrFa | On Investigating the Conservative Property of Score-Based Generative Models | https://proceedings.mlr.press/v202/chao23a.html | Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Chun-Yi Lee | https://proceedings.mlr.press/v202/chao23a.html | ICML 2023 | Existing Score-Based Models (SBMs) can be categorized into constrained SBMs (CSBMs) or unconstrained SBMs (USBMs) according to their parameterization approaches. CSBMs model probability density functions as Boltzmann distributions, and assign their predictions as the negative gradients of some scalar-valued energy functions. On the other hand, USBMs employ flexible architectures capable of directly estimating scores without the need to explicitly model energy functions. In this paper, we demonstrate that the architectural constraints of CSBMs may limit their modeling ability. In addition, we show that USBMs’ inability to preserve the property of conservativeness may lead to degraded performance in practice. To address the above issues, we propose Quasi-Conservative Score-Based Models (QCSBMs) for keeping the advantages of both CSBMs and USBMs. Our theoretical derivations demonstrate that the training objective of QCSBMs can be efficiently integrated into the training processes by leveraging the Hutchinson’s trace estimator. In addition, our experimental results on the CIFAR-10, CIFAR-100, ImageNet, and SVHN datasets validate the effectiveness of QCSBMs. Finally, we justify the advantage of QCSBMs using an example of a one-layered autoencoder. |
https://proceedings.mlr.press/v202/charisopoulos23a.html | https://proceedings.mlr.press/v202/charisopoulos23a/charisopoulos23a.pdf | https://openreview.net/forum?id=r3M5cBtpYq | Robust and private stochastic linear bandits | https://proceedings.mlr.press/v202/charisopoulos23a.html | Vasileios Charisopoulos, Hossein Esfandiari, Vahab Mirrokni | https://proceedings.mlr.press/v202/charisopoulos23a.html | ICML 2023 | In this paper, we study the stochastic linear bandit problem under the additional requirements of differential privacy, robustness and batched observations. In particular, we assume an adversary randomly chooses a constant fraction of the observed rewards in each batch, replacing them with arbitrary numbers. We present differentially private and robust variants of the arm elimination algorithm using logarithmic batch queries under two privacy models and provide regret bounds in both settings. In the first model, every reward in each round is reported by a potentially different client, which reduces to standard local differential privacy (LDP). In the second model, every action is "owned" by a different client, who may aggregate the rewards over multiple queries and privatize the aggregate response instead. To the best of our knowledge, our algorithms are the first simultaneously providing differential privacy and adversarial robustness in the stochastic linear bandits problem. |
https://proceedings.mlr.press/v202/chaturvedi23a.html | https://proceedings.mlr.press/v202/chaturvedi23a/chaturvedi23a.pdf | https://openreview.net/forum?id=MZfoP1FimF | Streaming Submodular Maximization with Differential Privacy | https://proceedings.mlr.press/v202/chaturvedi23a.html | Anamay Chaturvedi, Huy Nguyen, Thy Dinh Nguyen | https://proceedings.mlr.press/v202/chaturvedi23a.html | ICML 2023 | In this work, we study the problem of privately maximizing a submodular function in the streaming setting. Extensive work has been done on privately maximizing submodular functions in the general case when the function depends upon the private data of individuals. However, when the size of the data stream drawn from the domain of the objective function is large or arrives very fast, one must privately optimize the objective within the constraints of the streaming setting. We establish fundamental differentially private baselines for this problem and then derive better trade-offs between privacy and utility for the special case of decomposable submodular functions. A submodular function is decomposable when it can be written as a sum of submodular functions; this structure arises naturally when each summand function models the utility of an individual and the goal is to study the total utility of the whole population as in the well-known Combinatorial Public Projects Problem. Finally, we complement our theoretical analysis with experimental corroboration. |
https://proceedings.mlr.press/v202/chaudhuri23a.html | https://proceedings.mlr.press/v202/chaudhuri23a/chaudhuri23a.pdf | https://openreview.net/forum?id=b2GYLlhH4a | Why does Throwing Away Data Improve Worst-Group Error? | https://proceedings.mlr.press/v202/chaudhuri23a.html | Kamalika Chaudhuri, Kartik Ahuja, Martin Arjovsky, David Lopez-Paz | https://proceedings.mlr.press/v202/chaudhuri23a.html | ICML 2023 | When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve best results. They throw away examples until the classes or groups are balanced in size, and then perform empirical risk minimization on the reduced training set. This opposes common wisdom in learning theory, where the expected error is supposed to decrease as the dataset grows in size. In this work, we leverage extreme value theory to address this apparent contradiction. Our results show that the tails of the data distribution play an important role in determining the worst-group-accuracy of linear classifiers. When learning on data with heavy tails, throwing away data restores the geometric symmetry of the resulting classifier, and therefore improves its worst-group generalization. |
https://proceedings.mlr.press/v202/chawla23a.html | https://proceedings.mlr.press/v202/chawla23a/chawla23a.pdf | https://openreview.net/forum?id=RUiWyj6fhN | Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits | https://proceedings.mlr.press/v202/chawla23a.html | Ronshee Chawla, Daniel Vial, Sanjay Shakkottai, R. Srikant | https://proceedings.mlr.press/v202/chawla23a.html | ICML 2023 | The study of collaborative multi-agent bandits has attracted significant attention recently. In light of this, we initiate the study of a new collaborative setting, consisting of $N$ agents such that each agent is learning one of $M$ stochastic multi-armed bandits to minimize their group cumulative regret. We develop decentralized algorithms which facilitate collaboration between the agents under two scenarios. We characterize the performance of these algorithms by deriving the per agent cumulative regret and group regret upper bounds. We also prove lower bounds for the group regret in this setting, which demonstrates the near-optimal behavior of the proposed algorithms. |
https://proceedings.mlr.press/v202/che23a.html | https://proceedings.mlr.press/v202/che23a/che23a.pdf | https://openreview.net/forum?id=GB0TdALWGw | Correcting discount-factor mismatch in on-policy policy gradient methods | https://proceedings.mlr.press/v202/che23a.html | Fengdi Che, Gautham Vasan, A. Rupam Mahmood | https://proceedings.mlr.press/v202/che23a.html | ICML 2023 | The policy gradient theorem gives a convenient form of the policy gradient in terms of three factors: an action value, a gradient of the action likelihood, and a state distribution involving discounting called the discounted stationary distribution. But commonly used on-policy methods based on the policy gradient theorem ignore the discount factor in the state distribution, which is technically incorrect and may even cause degenerate learning behavior in some environments. An existing solution corrects this discrepancy by using $\gamma^t$ as a factor in the gradient estimate. However, this solution is not widely adopted and does not work well in tasks where the later states are similar to earlier states. We introduce a novel distribution correction to account for the discounted stationary distribution that can be plugged into many existing gradient estimators. Our correction circumvents the performance degradation associated with the $\gamma^t$ correction with a lower variance. Importantly, compared to the uncorrected estimators, our algorithm provides improved state emphasis to evade suboptimal policies in certain environments and consistently matches or exceeds the original performance on several OpenAI gym and DeepMind suite benchmarks. |
https://proceedings.mlr.press/v202/che23b.html | https://proceedings.mlr.press/v202/che23b/che23b.pdf | https://openreview.net/forum?id=6wQKmKiDHw | Fast Federated Machine Unlearning with Nonlinear Functional Theory | https://proceedings.mlr.press/v202/che23b.html | Tianshi Che, Yang Zhou, Zijie Zhang, Lingjuan Lyu, Ji Liu, Da Yan, Dejing Dou, Jun Huan | https://proceedings.mlr.press/v202/che23b.html | ICML 2023 | Federated machine unlearning (FMU) aims to remove the influence of a specified subset of training data upon request from a trained federated learning model. Despite achieving remarkable performance, existing FMU techniques suffer from inefficiency due to two sequential operations of training and retraining/unlearning on large-scale datasets. Our prior study, PCMU, was proposed to improve the efficiency of centralized machine unlearning (CMU) with certified guarantees, by simultaneously executing the training and unlearning operations. This paper proposes a fast FMU algorithm, FFMU, for improving the FMU efficiency while maintaining the unlearning quality. The PCMU method is leveraged to train a local machine unlearning (MU) model on each edge device. We propose to employ nonlinear functional analysis techniques to refine the local MU models as output functions of a Nemytskii operator. We conduct theoretical analysis to derive that the Nemytskii operator has a global Lipschitz constant, which allows us to bound the difference between two MU models regarding the distance between their gradients. Based on the Nemytskii operator and average smooth local gradients, the global MU model on the server is guaranteed to achieve close performance to each local MU model with the certified guarantees. |
https://proceedings.mlr.press/v202/cheikhi23a.html | https://proceedings.mlr.press/v202/cheikhi23a/cheikhi23a.pdf | https://openreview.net/forum?id=mjYZd6SgZS | On the Statistical Benefits of Temporal Difference Learning | https://proceedings.mlr.press/v202/cheikhi23a.html | David Cheikhi, Daniel Russo | https://proceedings.mlr.press/v202/cheikhi23a.html | ICML 2023 | Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD’s errors are bounded in terms of a novel measure – the problem’s trajectory crossing time – which can be much smaller than the problem’s time horizon. |
https://proceedings.mlr.press/v202/chen23a.html | https://proceedings.mlr.press/v202/chen23a/chen23a.pdf | https://openreview.net/forum?id=ZMvv6laV5b | Multi-Layer Neural Networks as Trainable Ladders of Hilbert Spaces | https://proceedings.mlr.press/v202/chen23a.html | Zhengdao Chen | https://proceedings.mlr.press/v202/chen23a.html | ICML 2023 | To characterize the functions spaces explored by multi-layer neural networks (NNs), we introduce Neural Hilbert Ladders (NHLs), a collection of reproducing kernel Hilbert spaces (RKHSes) that are defined iteratively and adaptive to training. First, we prove a correspondence between functions expressed by L-layer NNs and those belonging to L-level NHLs. Second, we prove generalization guarantees for learning the NHL based on a new complexity measure. Third, corresponding to the training of multi-layer NNs in the infinite-width mean-field limit, we derive an evolution of the NHL characterized by the dynamics of multiple random fields. Finally, we examine linear and shallow NNs from the new perspective and complement the theory with numerical results. |
https://proceedings.mlr.press/v202/chen23b.html | https://proceedings.mlr.press/v202/chen23b/chen23b.pdf | https://openreview.net/forum?id=AvwlrX9AQr | Beyond the Edge of Stability via Two-step Gradient Updates | https://proceedings.mlr.press/v202/chen23b.html | Lei Chen, Joan Bruna | https://proceedings.mlr.press/v202/chen23b.html | ICML 2023 | Gradient Descent (GD) is a powerful workhorse of modern machine learning thanks to its scalability and efficiency in high-dimensional spaces. Its ability to find local minimisers is only guaranteed for losses with Lipschitz gradients, where it can be seen as a ‘bona-fide’ discretisation of an underlying gradient flow. Yet, many ML setups involving overparametrised models do not fall into this problem class, which has motivated research beyond the so-called “Edge of Stability” (EoS), where the step-size crosses the admissibility threshold inversely proportional to the Lipschitz constant above. Perhaps surprisingly, GD has been empirically observed to still converge regardless of local instability and oscillatory behavior. The incipient theoretical analysis of this phenomenon has mainly focused on the overparametrised regime, where the effect of choosing a large learning rate may be associated to a ‘Sharpness-Minimisation’ implicit regularisation within the manifold of minimisers, under appropriate asymptotic limits. In contrast, in this work we directly examine the conditions for such unstable convergence, focusing on simple, yet representative, learning problems, via analysis of two-step gradient updates. Specifically, we characterize a local condition involving third-order derivatives that guarantees existence and convergence to fixed points of the two-step updates, and leverage such property in a teacher-student setting, under population loss. Finally, starting from Matrix Factorization, we provide observations of period-2 orbit of GD in high-dimensional settings with intuition of its dynamics, along with exploration into more general settings. |
https://proceedings.mlr.press/v202/chen23c.html | https://proceedings.mlr.press/v202/chen23c/chen23c.pdf | https://openreview.net/forum?id=0yNmeyteuS | Trompt: Towards a Better Deep Neural Network for Tabular Data | https://proceedings.mlr.press/v202/chen23c.html | Kuan-Yu Chen, Ping-Han Chiang, Hsin-Rung Chou, Ting-Wei Chen, Tien-Hao Chang | https://proceedings.mlr.press/v202/chen23c.html | ICML 2023 | Tabular data is arguably one of the most commonly used data structures in various practical domains, including finance, healthcare and e-commerce. The inherent heterogeneity allows tabular data to store rich information. However, based on a recently published tabular benchmark, we can see deep neural networks still fall behind tree-based models on tabular datasets. In this paper, we propose Trompt–which stands for Tabular Prompt–a novel architecture inspired by prompt learning of language models. The essence of prompt learning is to adjust a large pre-trained model through a set of prompts outside the model without directly modifying the model. Based on this idea, Trompt separates the learning strategy of tabular data into two parts. The first part, analogous to pre-trained models, focuses on learning the intrinsic information of a table. The second part, analogous to prompts, focuses on learning the variations among samples. Trompt is evaluated with the benchmark mentioned above. The experimental results demonstrate that Trompt outperforms state-of-the-art deep neural networks and is comparable to tree-based models. |
https://proceedings.mlr.press/v202/chen23d.html | https://proceedings.mlr.press/v202/chen23d/chen23d.pdf | https://openreview.net/forum?id=MX4LDCq9iS | Differentially Private Stochastic Convex Optimization under a Quantile Loss Function | https://proceedings.mlr.press/v202/chen23d.html | Du Chen, Geoffrey A. Chua | https://proceedings.mlr.press/v202/chen23d.html | ICML 2023 | We study $(\varepsilon,\delta)$-differentially private (DP) stochastic convex optimization under an $r$-th quantile loss function taking the form $c(u) = ru^+ + (1-r)(-u)^+$. The function is non-smooth, and we propose to approximate it with a smooth function obtained by convolution smoothing, which enjoys both structure and bandwidth flexibility and can address outliers. This leads to a better approximation than those obtained from existing methods such as Moreau Envelope. We then design private algorithms based on DP stochastic gradient descent and objective perturbation, and show that both algorithms achieve (near) optimal excess generalization risk $O(\max\{\frac{1}{\sqrt{n}}, \frac{\sqrt{d\ln(1/\delta)}}{n\varepsilon}\})$. Through objective perturbation, we further derive an upper bound $O(\max\{\sqrt{\frac{d}{n}}, \sqrt{\frac{d\ln(1/\delta)}{n\varepsilon}}\})$ on the parameter estimation error under mild assumptions on data generating processes. Some applications in private quantile regression and private inventory control will be discussed. |
https://proceedings.mlr.press/v202/chen23e.html | https://proceedings.mlr.press/v202/chen23e/chen23e.pdf | https://openreview.net/forum?id=GOUgXuLahg | Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis For DDIM-type Samplers | https://proceedings.mlr.press/v202/chen23e.html | Sitan Chen, Giannis Daras, Alex Dimakis | https://proceedings.mlr.press/v202/chen23e.html | ICML 2023 | We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling. Several recent works have analyzed stochastic samplers using tools like Girsanov’s theorem and a chain rule variant of the interpolation argument. Unfortunately, these techniques give vacuous bounds when applied to deterministic samplers. We give a new operational interpretation for deterministic sampling by showing that one step along the probability flow ODE can be expressed as two steps: 1) a restoration step that runs gradient ascent on the conditional log-likelihood at some infinitesimally previous time, and 2) a degradation step that runs the forward process using noise pointing back towards the current iterate. This perspective allows us to extend denoising diffusion implicit models to general, non-linear forward processes. We then develop the first polynomial convergence bounds for these samplers under mild conditions on the data distribution. |
https://proceedings.mlr.press/v202/chen23f.html | https://proceedings.mlr.press/v202/chen23f/chen23f.pdf | https://openreview.net/forum?id=HRmSGZZ1FY | Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation | https://proceedings.mlr.press/v202/chen23f.html | Yu Chen, Wei Deng, Shikai Fang, Fengpei Li, Nicole Tianjiao Yang, Yikai Zhang, Kashif Rasul, Shandian Zhe, Anderson Schneider, Yuriy Nevmyvaka | https://proceedings.mlr.press/v202/chen23f.html | ICML 2023 | The Schrödinger bridge problem (SBP) is gaining increasing attention in generative modeling and showing promising potential even in comparison with the score-based generative models (SGMs). SBP can be interpreted as an entropy-regularized optimal transport problem, which conducts projections onto every other marginal alternatingly. However, in practice, only approximated projections are accessible and their convergence is not well understood. To fill this gap, we present a first convergence analysis of the Schrödinger bridge algorithm based on approximated projections. As for its practical applications, we apply SBP to probabilistic time series imputation by generating missing values conditioned on observed data. We show that optimizing the transport cost improves the performance and the proposed algorithm achieves the state-of-the-art result in healthcare and environmental data while exhibiting the advantage of exploring both temporal and feature patterns in probabilistic time series imputation. |
https://proceedings.mlr.press/v202/chen23g.html | https://proceedings.mlr.press/v202/chen23g/chen23g.pdf | https://openreview.net/forum?id=CZxFOb5azq | ED-Batch: Efficient Automatic Batching of Dynamic Neural Networks via Learned Finite State Machines | https://proceedings.mlr.press/v202/chen23g.html | Siyuan Chen, Pratik Pramod Fegade, Tianqi Chen, Phillip Gibbons, Todd Mowry | https://proceedings.mlr.press/v202/chen23g.html | ICML 2023 | Batching has a fundamental influence on the efficiency of deep neural network (DNN) execution. However, for dynamic DNNs, efficient batching is particularly challenging as the dataflow graph varies per input instance. As a result, state-of-the-art frameworks use heuristics that result in suboptimal batching decisions. Further, batching puts strict restrictions on memory adjacency and can lead to high data movement costs. In this paper, we provide an approach for batching dynamic DNNs based on finite state machines, which enables the automatic discovery of batching policies specialized for each DNN via reinforcement learning. Moreover, we find that memory planning that is aware of the batching policy can save significant data movement overheads, which is automated by a PQ tree-based algorithm we introduce. Experimental results show that our framework speeds up state-of-the-art frameworks by on average 1.15x, 1.39x, and 2.45x for chain-based, tree-based, and lattice-based DNNs across CPU and GPU. The framework is open-sourced at https://github.com/gulang2019/ED-Batch.git. |
https://proceedings.mlr.press/v202/chen23h.html | https://proceedings.mlr.press/v202/chen23h/chen23h.pdf | https://openreview.net/forum?id=jjzJ768iV1 | Is Learning Summary Statistics Necessary for Likelihood-free Inference? | https://proceedings.mlr.press/v202/chen23h.html | Yanzhi Chen, Michael U. Gutmann, Adrian Weller | https://proceedings.mlr.press/v202/chen23h.html | ICML 2023 | Likelihood-free inference (LFI) is a set of techniques for inference in implicit statistical models. A longstanding question in LFI has been how to design or learn good summary statistics of data, but this might now seem unnecessary due to the advent of recent end-to-end (i.e. neural network-based) LFI methods. In this work, we rethink this question with a new method for learning summary statistics. We show that learning sufficient statistics may be easier than direct posterior inference, as the former problem can be reduced to a set of low-dimensional, easy-to-solve learning problems. This suggests explicitly decoupling summary statistics learning from posterior inference in LFI. Experiments on diverse inference tasks with different data types validate our hypothesis. |
https://proceedings.mlr.press/v202/chen23i.html | https://proceedings.mlr.press/v202/chen23i/chen23i.pdf | https://openreview.net/forum?id=4YYgtY1APK | Subequivariant Graph Reinforcement Learning in 3D Environments | https://proceedings.mlr.press/v202/chen23i.html | Runfa Chen, Jiaqi Han, Fuchun Sun, Wenbing Huang | https://proceedings.mlr.press/v202/chen23i.html | ICML 2023 | Learning a shared policy that guides the locomotion of different agents is of core interest in Reinforcement Learning (RL), which leads to the study of morphology-agnostic RL. However, existing benchmarks are highly restrictive in the choice of starting point and target point, constraining the movement of the agents within 2D space. In this work, we propose a novel setup for morphology-agnostic RL, dubbed Subequivariant Graph RL in 3D environments (3D-SGRL). Specifically, we first introduce a new set of more practical yet challenging benchmarks in 3D space that allows the agent to have full degrees of freedom to explore in arbitrary directions starting from arbitrary configurations. Moreover, to optimize the policy over the enlarged state-action space, we propose to inject geometric symmetry, i.e., subequivariance, into the modeling of the policy and Q-function such that the policy can generalize to all directions, improving exploration efficiency. This goal is achieved by a novel SubEquivariant Transformer (SET) that permits expressive message exchange. Finally, we evaluate the proposed method on the proposed benchmarks, where our method consistently and significantly outperforms existing approaches on single-task, multi-task, and zero-shot generalization scenarios. Extensive ablations are also conducted to verify our design. |
https://proceedings.mlr.press/v202/chen23j.html | https://proceedings.mlr.press/v202/chen23j/chen23j.pdf | https://openreview.net/forum?id=iASUTBGw07 | GuardHFL: Privacy Guardian for Heterogeneous Federated Learning | https://proceedings.mlr.press/v202/chen23j.html | Hanxiao Chen, Meng Hao, Hongwei Li, Kangjie Chen, Guowen Xu, Tianwei Zhang, Xilin Zhang | https://proceedings.mlr.press/v202/chen23j.html | ICML 2023 | Heterogeneous federated learning (HFL) enables clients with different computation and communication capabilities to collaboratively train their own customized models via a query-response paradigm on auxiliary datasets. However, such a paradigm raises serious privacy concerns due to the leakage of highly sensitive query samples and response predictions. We put forth GuardHFL, the first-of-its-kind efficient and privacy-preserving HFL framework. GuardHFL is equipped with a novel HFL-friendly secure querying scheme built on lightweight secret sharing and symmetric-key techniques. The core of GuardHFL is two customized multiplication and comparison protocols, which substantially boost the execution efficiency. Extensive evaluations demonstrate that GuardHFL significantly outperforms the alternative instantiations based on existing state-of-the-art techniques in both runtime and communication cost. |
https://proceedings.mlr.press/v202/chen23k.html | https://proceedings.mlr.press/v202/chen23k/chen23k.pdf | https://openreview.net/forum?id=vn9O1N5ZOw | Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling | https://proceedings.mlr.press/v202/chen23k.html | Xiaohui Chen, Jiaxing He, Xu Han, Liping Liu | https://proceedings.mlr.press/v202/chen23k.html | ICML 2023 | Diffusion-based generative graph models have been proven effective in generating high-quality small graphs. However, they are not scalable enough to generate large graphs containing thousands of nodes with the desired graph statistics. In this work, we propose EDGE, a new diffusion-based generative graph model that addresses generative tasks with large graphs. To improve computation efficiency, we encourage graph sparsity by using a discrete diffusion process that randomly removes edges at each time step and finally obtains an empty graph. EDGE only focuses on a portion of nodes in the graph at each denoising step. It makes far fewer edge predictions than previous diffusion-based models. Moreover, EDGE admits explicitly modeling the node degrees of the graphs, further improving the model performance. The empirical study shows that EDGE is much more efficient than competing methods and can generate large graphs with thousands of nodes. It also outperforms baseline models in generation quality: graphs generated by our approach have graph statistics more similar to those of the training graphs. |
https://proceedings.mlr.press/v202/chen23l.html | https://proceedings.mlr.press/v202/chen23l/chen23l.pdf | https://openreview.net/forum?id=bC1OiuLO4N | Evolving Semantic Prototype Improves Generative Zero-Shot Learning | https://proceedings.mlr.press/v202/chen23l.html | Shiming Chen, Wenjin Hou, Ziming Hong, Xiaohan Ding, Yibing Song, Xinge You, Tongliang Liu, Kun Zhang | https://proceedings.mlr.press/v202/chen23l.html | ICML 2023 | In zero-shot learning (ZSL), generative methods synthesize class-related sample features based on predefined semantic prototypes. They advance the ZSL performance by synthesizing unseen class sample features for better training the classifier. We observe that each class’s predefined semantic prototype (also referred to as semantic embedding or condition) does not accurately match its real semantic prototype. So the synthesized visual sample features do not faithfully represent the real sample features, limiting the classifier training and existing ZSL performance. In this paper, we formulate this mismatch phenomenon as the visual-semantic domain shift problem. We propose a dynamic semantic prototype evolving (DSP) method to align the empirically predefined semantic prototypes and the real prototypes for class-related feature synthesis. The alignment is learned by refining sample features and semantic prototypes in a unified framework and making the synthesized visual sample features approach real sample features. After alignment, synthesized sample features from unseen classes are closer to the real sample features and enable DSP to improve existing generative ZSL methods by 8.5%, 8.0%, and 9.7% on the standard CUB, SUN, and AWA2 datasets. This significant performance improvement indicates that the evolving semantic prototype explores a virgin field in ZSL. |
https://proceedings.mlr.press/v202/chen23m.html | https://proceedings.mlr.press/v202/chen23m/chen23m.pdf | https://openreview.net/forum?id=IgpMs357b5 | Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization | https://proceedings.mlr.press/v202/chen23m.html | Yimeng Chen, Tianyang Hu, Fengwei Zhou, Zhenguo Li, Zhi-Ming Ma | https://proceedings.mlr.press/v202/chen23m.html | ICML 2023 | The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution generalization capabilities for downstream tasks has become a crucial area of research. Previous research has primarily focused on identifying the most powerful models within the model zoo, neglecting to fully leverage the diverse inductive biases contained within. This paper argues that the knowledge contained in weaker models is valuable and presents a method for leveraging the diversity within the model zoo to improve out-of-distribution generalization capabilities. Specifically, we investigate the behaviors of various pretrained models across different domains of downstream tasks by characterizing the variations in their encoded representations in terms of two dimensions: diversity shift and correlation shift. This characterization enables us to propose a new algorithm for integrating diverse pretrained models, not limited to the strongest models, in order to achieve enhanced out-of-distribution generalization performance. Our proposed method demonstrates state-of-the-art empirical results on a variety of datasets, thus validating the benefits of utilizing diverse knowledge. |
https://proceedings.mlr.press/v202/chen23n.html | https://proceedings.mlr.press/v202/chen23n/chen23n.pdf | https://openreview.net/forum?id=kwbi5BmlKd | Decentralized Stochastic Bilevel Optimization with Improved per-Iteration Complexity | https://proceedings.mlr.press/v202/chen23n.html | Xuxing Chen, Minhui Huang, Shiqian Ma, Krishna Balasubramanian | https://proceedings.mlr.press/v202/chen23n.html | ICML 2023 | Bilevel optimization has recently received tremendous attention due to its great success in solving important machine learning problems like meta learning, reinforcement learning, and hyperparameter optimization. Extending single-agent training on bilevel problems to the decentralized setting is a natural generalization, and there has been a flurry of work studying decentralized bilevel optimization algorithms. However, it remains unknown how to design a distributed algorithm with sample complexity and convergence rate comparable to SGD for stochastic optimization, and at the same time without directly computing the exact Hessian or Jacobian matrices. In this paper we propose such an algorithm. More specifically, we propose a novel decentralized stochastic bilevel optimization (DSBO) algorithm that only requires first order stochastic oracle, Hessian-vector product and Jacobian-vector product oracle. The sample complexity of our algorithm matches the currently best known results for DSBO, while our algorithm does not require estimating the full Hessian and Jacobian matrices, thereby achieving improved per-iteration complexity. |
https://proceedings.mlr.press/v202/chen23o.html | https://proceedings.mlr.press/v202/chen23o/chen23o.pdf | https://openreview.net/forum?id=KB4mLiuoEX | Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data | https://proceedings.mlr.press/v202/chen23o.html | Minshuo Chen, Kaixuan Huang, Tuo Zhao, Mengdi Wang | https://proceedings.mlr.press/v202/chen23o.html | ICML 2023 | Diffusion models achieve state-of-the-art performance in various generation tasks. However, their theoretical foundations fall far behind. This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace. Our result provides sample complexity bounds for distribution estimation using diffusion models. We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated. Further, the generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution. The convergence rate depends on subspace dimension, implying that diffusion models can circumvent the curse of data ambient dimensionality. |
https://proceedings.mlr.press/v202/chen23p.html | https://proceedings.mlr.press/v202/chen23p/chen23p.pdf | https://openreview.net/forum?id=lZUSxrYoOY | Sample Complexity of Probability Divergences under Group Symmetry | https://proceedings.mlr.press/v202/chen23p.html | Ziyu Chen, Markos Katsoulakis, Luc Rey-Bellet, Wei Zhu | https://proceedings.mlr.press/v202/chen23p.html | ICML 2023 | We rigorously quantify the improvement in the sample complexity of variational divergence estimations for group-invariant distributions. In the cases of the Wasserstein-1 metric and the Lipschitz-regularized $\alpha$-divergences, the reduction of sample complexity is proportional to an ambient-dimension-dependent power of the group size. For the maximum mean discrepancy (MMD), the improvement of sample complexity is more nuanced, as it depends on not only the group size but also the choice of kernel. Numerical simulations verify our theories. |
https://proceedings.mlr.press/v202/chen23q.html | https://proceedings.mlr.press/v202/chen23q/chen23q.pdf | https://openreview.net/forum?id=wi7T6VhNk2 | Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions | https://proceedings.mlr.press/v202/chen23q.html | Hongrui Chen, Holden Lee, Jianfeng Lu | https://proceedings.mlr.press/v202/chen23q.html | ICML 2023 | We give an improved theoretical analysis of score-based generative modeling. Under a score estimate with small $L^2$ error (averaged across timesteps), we provide efficient convergence guarantees for any data distribution with second-order moment, by either employing early stopping or assuming smoothness condition on the score function of the data distribution. Our result does not rely on any log-concavity or functional inequality assumption and has a logarithmic dependence on the smoothness. In particular, we show that under only a finite second moment condition, approximating the following in reverse KL divergence in $\epsilon$-accuracy can be done in $\tilde O\left(\frac{d \log (1/\delta)}{\epsilon}\right)$ steps: 1) the variance-$\delta$ Gaussian perturbation of any data distribution; 2) data distributions with $1/\delta$-smooth score functions. Our analysis also provides a quantitative comparison between different discrete approximations and may guide the choice of discretization points in practice. |
https://proceedings.mlr.press/v202/chen23r.html | https://proceedings.mlr.press/v202/chen23r/chen23r.pdf | https://openreview.net/forum?id=AoYswkQ0Xf | Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers | https://proceedings.mlr.press/v202/chen23r.html | Yineng Chen, Zuchao Li, Lefei Zhang, Bo Du, Hai Zhao | https://proceedings.mlr.press/v202/chen23r.html | ICML 2023 | The optimizer is an essential component for the success of deep learning, which guides the neural network to update the parameters according to the loss on the training set. SGD and Adam are two classical and effective optimizers, based on which researchers have proposed many variants, such as SGDM and RAdam. In this paper, we innovatively combine the backward-looking and forward-looking aspects of the optimizer algorithm and propose a novel Admeta (A Double exponential Moving averagE To Adaptive and non-adaptive momentum) optimizer framework. For the backward-looking part, we propose a DEMA variant scheme, which is motivated by a metric in the stock market, to replace the common exponential moving average scheme. For the forward-looking part, we present a dynamic lookahead strategy which asymptotically approaches a set value, maintaining its speed at the early stage and high convergence performance at the final stage. Based on this idea, we provide two optimizer implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM. Through extensive experiments on diverse tasks, we find that the proposed Admeta optimizer outperforms our base optimizers and shows advantages over recently proposed competitive optimizers. We also provide theoretical proof of these two algorithms, which verifies the convergence of our proposed Admeta. |
https://proceedings.mlr.press/v202/chen23s.html | https://proceedings.mlr.press/v202/chen23s/chen23s.pdf | https://openreview.net/forum?id=p9vOr0rdbs | HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation | https://proceedings.mlr.press/v202/chen23s.html | Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang | https://proceedings.mlr.press/v202/chen23s.html | ICML 2023 | The Shapley value is widely regarded as a trustworthy attribution metric. However, when people use Shapley values to explain the attribution of input variables of a deep neural network (DNN), it usually requires a very high computational cost to approximate relatively accurate Shapley values in real-world applications. Therefore, we propose a novel network architecture, the HarsanyiNet, which makes inferences on the input sample and simultaneously computes the exact Shapley values of the input variables in a single forward propagation. The HarsanyiNet is designed on the theoretical foundation that the Shapley value can be reformulated as the redistribution of Harsanyi interactions encoded by the network. |
https://proceedings.mlr.press/v202/chen23t.html | https://proceedings.mlr.press/v202/chen23t/chen23t.pdf | https://openreview.net/forum?id=ao0KFFEMwT | Generalized Implicit Follow-The-Regularized-Leader | https://proceedings.mlr.press/v202/chen23t.html | Keyi Chen, Francesco Orabona | https://proceedings.mlr.press/v202/chen23t.html | ICML 2023 | We propose a new class of online learning algorithms, generalized implicit Follow-The-Regularized-Leader (FTRL), that expands the scope of FTRL framework. Generalized implicit FTRL can recover known algorithms, such as FTRL with linearized losses and implicit FTRL, and it allows the design of new update rules, as extensions of aProx and Mirror-Prox to FTRL. Our theory is constructive in the sense that it provides a simple unifying framework to design updates that directly improve the worst-case upper bound on the regret. The key idea is substituting the linearization of the losses with a Fenchel-Young inequality. We show the flexibility of the framework by proving that some known algorithms, like the Mirror-Prox updates, are instantiations of the generalized implicit FTRL. Finally, the new framework allows us to recover the temporal variation bound of implicit OMD, with the same computational complexity. |
https://proceedings.mlr.press/v202/chen23u.html | https://proceedings.mlr.press/v202/chen23u/chen23u.pdf | https://openreview.net/forum?id=VLs7ZmctMf | Fisher Information Embedding for Node and Graph Learning | https://proceedings.mlr.press/v202/chen23u.html | Dexiong Chen, Paolo Pellizzoni, Karsten Borgwardt | https://proceedings.mlr.press/v202/chen23u.html | ICML 2023 | Attention-based graph neural networks (GNNs), such as graph attention networks (GATs), have become popular neural architectures for processing graph-structured data and learning node embeddings. Despite their empirical success, these models rely on labeled data and the theoretical properties of these models have yet to be fully understood. In this work, we propose a novel attention-based node embedding framework for graphs. Our framework builds upon a hierarchical kernel for multisets of subgraphs around nodes (e.g. neighborhoods) and each kernel leverages the geometry of a smooth statistical manifold to compare pairs of multisets, by “projecting” the multisets onto the manifold. By explicitly computing node embeddings with a manifold of Gaussian mixtures, our method leads to a new attention mechanism for neighborhood aggregation. We provide theoretical insights into generalizability and expressivity of our embeddings, contributing to a deeper understanding of attention-based GNNs. We propose both efficient unsupervised and supervised methods for learning the embeddings. Through experiments on several node classification benchmarks, we demonstrate that our proposed method outperforms existing attention-based graph models like GATs. Our code is available at https://github.com/BorgwardtLab/fisher_information_embedding. |
https://proceedings.mlr.press/v202/chen23v.html | https://proceedings.mlr.press/v202/chen23v/chen23v.pdf | https://openreview.net/forum?id=l3sdNQdmQh | Rethinking Visual Reconstruction: Experience-Based Content Completion Guided by Visual Cues | https://proceedings.mlr.press/v202/chen23v.html | Jiaxuan Chen, Yu Qi, Gang Pan | https://proceedings.mlr.press/v202/chen23v.html | ICML 2023 | Decoding seen images from brain activities has been an absorbing field. However, the reconstructed images still suffer from low quality with existing studies. This can be because our visual system is not like a camera that ”remembers” every pixel. Instead, only part of the information can be perceived with our selective attention, and the brain ”guesses” the rest to form what we think we see. Most existing approaches ignored the brain completion mechanism. In this work, we propose to reconstruct seen images with both the visual perception and the brain completion process, and design a simple, yet effective visual decoding framework to achieve this goal. Specifically, we first construct a shared discrete representation space for both brain signals and images. Then, a novel self-supervised token-to-token inpainting network is designed to implement visual content completion by building context and prior knowledge about the visual objects from the discrete latent space. Our approach improved the quality of visual reconstruction significantly and achieved state-of-the-art. |
https://proceedings.mlr.press/v202/chen23w.html | https://proceedings.mlr.press/v202/chen23w/chen23w.pdf | https://openreview.net/forum?id=LZt1HIEoAf | Stratified Adversarial Robustness with Rejection | https://proceedings.mlr.press/v202/chen23w.html | Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha | https://proceedings.mlr.press/v202/chen23w.html | ICML 2023 | Recently, there is an emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) for boosting adversarial robustness. While rejection can incur a cost in many applications, existing studies typically associate zero cost with rejecting perturbed inputs, which can result in the rejection of numerous slightly-perturbed inputs that could be correctly classified. In this work, we study adversarially-robust classification with rejection in the stratified rejection setting, where the rejection cost is modeled by rejection loss functions monotonically non-increasing in the perturbation magnitude. We theoretically analyze the stratified rejection setting and propose a novel defense method – Adversarial Training with Consistent Prediction-based Rejection (CPR) – for building a robust selective classifier. Experiments on image datasets demonstrate that the proposed method significantly outperforms existing methods under strong adaptive attacks. For instance, on CIFAR-10, CPR reduces the total robust loss (for different rejection losses) by at least 7.3% under both seen and unseen attacks. |
https://proceedings.mlr.press/v202/chen23x.html | https://proceedings.mlr.press/v202/chen23x/chen23x.pdf | https://openreview.net/forum?id=RBikc9cIZh | Multi-task Hierarchical Adversarial Inverse Reinforcement Learning | https://proceedings.mlr.press/v202/chen23x.html | Jiayu Chen, Dipesh Tamboli, Tian Lan, Vaneet Aggarwal | https://proceedings.mlr.press/v202/chen23x.html | ICML 2023 | Multi-task Imitation Learning (MIL) aims to train a policy capable of performing a distribution of tasks based on multi-task expert demonstrations, which is essential for general-purpose robots. Existing MIL algorithms suffer from low data efficiency and poor performance on complex long-horizon tasks. We develop Multi-task Hierarchical Adversarial Inverse Reinforcement Learning (MH-AIRL) to learn hierarchically-structured multi-task policies, which is more beneficial for compositional tasks with long horizons and has higher expert data efficiency through identifying and transferring reusable basic skills across tasks. To realize this, MH-AIRL effectively synthesizes context-based multi-task learning, AIRL (an IL approach), and hierarchical policy learning. Further, MH-AIRL can be applied to demonstrations without task or skill annotations (i.e., state-action pairs only), which are more accessible in practice. Theoretical justifications are provided for each module of MH-AIRL, and evaluations on challenging multi-task settings demonstrate superior performance and transferability of the multi-task policies learned with MH-AIRL as compared to SOTA MIL baselines. |
https://proceedings.mlr.press/v202/chen23y.html | https://proceedings.mlr.press/v202/chen23y/chen23y.pdf | https://openreview.net/forum?id=IGfmSM7siu | Model Transferability with Responsive Decision Subjects | https://proceedings.mlr.press/v202/chen23y.html | Yatong Chen, Zeyu Tang, Kun Zhang, Yang Liu | https://proceedings.mlr.press/v202/chen23y.html | ICML 2023 | Given an algorithmic predictor that is accurate on some source population consisting of strategic human decision subjects, will it remain accurate if the population respond to it? In our setting, an agent or a user corresponds to a sample $(X,Y)$ drawn from a distribution $\cal{D}$ and will face a model $h$ and its classification result $h(X)$. Agents can modify $X$ to adapt to $h$, which will incur a distribution shift on $(X,Y)$. Our formulation is motivated by applications where the deployed machine learning models are subjected to human agents, and will ultimately face responsive and interactive data distributions. We formalize the discussions of the transferability of a model by studying how the performance of the model trained on the available source distribution (data) would translate to the performance on its induced domain. We provide both upper bounds for the performance gap due to the induced domain shift, as well as lower bounds for the trade-offs that a classifier has to suffer on either the source training distribution or the induced target distribution. We provide further instantiated analysis for two popular domain adaptation settings, including covariate shift and target shift. |
https://proceedings.mlr.press/v202/chen23z.html | https://proceedings.mlr.press/v202/chen23z/chen23z.pdf | https://openreview.net/forum?id=ivXO8yB9JE | Layered State Discovery for Incremental Autonomous Exploration | https://proceedings.mlr.press/v202/chen23z.html | Liyu Chen, Andrea Tirinzoni, Alessandro Lazaric, Matteo Pirotta | https://proceedings.mlr.press/v202/chen23z.html | ICML 2023 | We study the autonomous exploration (AX) problem proposed by Lim & Auer (2012). In this setting, the objective is to discover a set of $\epsilon$-optimal policies reaching a set $\mathcal{S}_L^{\rightarrow}$ of incrementally $L$-controllable states. We introduce a novel layered decomposition of the set of incrementally $L$-controllable states that is based on the iterative application of a state-expansion operator. We leverage these results to design Layered Autonomous Exploration (LAE), a novel algorithm for AX that attains a sample complexity of $\tilde{\mathcal{O}}(LS^{\rightarrow}_{L(1+\epsilon)}\Gamma_{L(1+\epsilon)} A \ln^{12}(S^{\rightarrow}_{L(1+\epsilon)})/\epsilon^2)$, where $S^{\rightarrow}_{L(1+\epsilon)}$ is the number of states that are incrementally $L(1+\epsilon)$-controllable, $A$ is the number of actions, and $\Gamma_{L(1+\epsilon)}$ is the branching factor of the transitions over such states. LAE improves over the algorithm of Tarbouriech et al. (2020a) by a factor of $L^2$ and it is the first algorithm for AX that works in a countably-infinite state space. Moreover, we show that, under a certain identifiability assumption, LAE achieves minimax-optimal sample complexity of $\tilde{\mathcal{O}}(LS^{\rightarrow}_{L}A\ln^{12}(S^{\rightarrow}_{L})/\epsilon^2)$, outperforming existing algorithms and matching for the first time the lower bound proved by Cai et al. (2022) up to logarithmic factors. |
https://proceedings.mlr.press/v202/chen23aa.html | https://proceedings.mlr.press/v202/chen23aa/chen23aa.pdf | https://openreview.net/forum?id=xJp7rnXt1I | Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization | https://proceedings.mlr.press/v202/chen23aa.html | Sijia Chen, Wei-Wei Tu, Peng Zhao, Lijun Zhang | https://proceedings.mlr.press/v202/chen23aa.html | ICML 2023 | The Stochastically Extended Adversarial (SEA) model was introduced by Sachs et al. (2022) as an interpolation between stochastic and adversarial online convex optimization. Under the smoothness condition, they demonstrate that the expected regret of optimistic follow-the-regularized-leader (FTRL) depends on the cumulative stochastic variance $\sigma_{1:T}^2$ and the cumulative adversarial variation $\Sigma_{1:T}^2$ for convex functions. They also provide a slightly weaker bound based on the maximal stochastic variance $\sigma_{\max}^2$ and the maximal adversarial variation $\Sigma_{\max}^2$ for strongly convex functions. Inspired by their work, we investigate the theoretical guarantees of optimistic online mirror descent (OMD) for the SEA model. For convex and smooth functions, we obtain the same $\mathcal{O}(\sqrt{\sigma_{1:T}^2}+\sqrt{\Sigma_{1:T}^2})$ regret bound, without the convexity requirement of individual functions. For strongly convex and smooth functions, we establish an $\mathcal{O}(\min\{\log (\sigma_{1:T}^2+\Sigma_{1:T}^2), (\sigma_{\max}^2 + \Sigma_{\max}^2) \log T\})$ bound, better than their $\mathcal{O}((\sigma_{\max}^2 + \Sigma_{\max}^2) \log T)$ result. For exp-concave and smooth functions, we achieve a new $\mathcal{O}(d\log(\sigma_{1:T}^2+\Sigma_{1:T}^2))$ bound. Owing to the OMD framework, we further establish dynamic regret for convex and smooth functions, which is more favorable in non-stationary online scenarios. |
https://proceedings.mlr.press/v202/chen23ab.html | https://proceedings.mlr.press/v202/chen23ab/chen23ab.pdf | https://openreview.net/forum?id=cfUDirIjOd | Learning to Optimize Differentiable Games | https://proceedings.mlr.press/v202/chen23ab.html | Xuxi Chen, Nelson Vadori, Tianlong Chen, Zhangyang Wang | https://proceedings.mlr.press/v202/chen23ab.html | ICML 2023 | Many machine learning problems can be abstracted in solving game theory formulations and boil down to optimizing nested objectives, such as generative adversarial networks (GANs) and multi-agent reinforcement learning. Solving these games requires finding their stable fixed points or Nash equilibrium. However, existing algorithms for solving games suffer from empirical instability, hence demanding heavy ad-hoc tuning in practice. To tackle these challenges, we resort to the emerging scheme of Learning to Optimize (L2O), which discovers problem-specific efficient optimization algorithms through data-driven training. Our customized L2O framework for differentiable game theory problems, dubbed “Learning to Play Games" (L2PG), seeks a stable fixed point solution, by predicting the fast update direction from the past trajectory, with a novel gradient stability-aware, sign-based loss function. We further incorporate curriculum learning and self-learning to strengthen the empirical training stability and generalization of L2PG. On test problems including quadratic games and GANs, L2PG can substantially accelerate the convergence, and demonstrates a remarkably more stable trajectory. Codes are available at https://github.com/VITA-Group/L2PG. |
https://proceedings.mlr.press/v202/chen23ac.html | https://proceedings.mlr.press/v202/chen23ac/chen23ac.pdf | https://openreview.net/forum?id=P78iqH28ni | Coordinated Dynamic Bidding in Repeated Second-Price Auctions with Budgets | https://proceedings.mlr.press/v202/chen23ac.html | Yurong Chen, Qian Wang, Zhijian Duan, Haoran Sun, Zhaohua Chen, Xiang Yan, Xiaotie Deng | https://proceedings.mlr.press/v202/chen23ac.html | ICML 2023 | In online ad markets, a rising number of advertisers are employing bidding agencies to participate in ad auctions. These agencies are specialized in designing online algorithms and bidding on behalf of their clients. Typically, an agency has information on multiple advertisers, so she can potentially coordinate bids to help her clients achieve higher utilities than those under independent bidding. In this paper, we study coordinated online bidding algorithms in repeated second-price auctions with budgets. We propose algorithms that guarantee every client a higher utility than the best she can get under independent bidding. We show that these algorithms achieve maximal social welfare and discuss bidders’ incentives to misreport their budgets, in symmetric cases. Our proofs combine the techniques of online learning and equilibrium analysis, overcoming the difficulty of competing with a multi-dimensional benchmark. The performance of our algorithms is further evaluated by experiments on both synthetic and real data. To the best of our knowledge, we are the first to consider bidder coordination in online repeated auctions with constraints. |
https://proceedings.mlr.press/v202/chen23ad.html | https://proceedings.mlr.press/v202/chen23ad/chen23ad.pdf | https://openreview.net/forum?id=fscQU9Wufk | Semi-Offline Reinforcement Learning for Optimized Text Generation | https://proceedings.mlr.press/v202/chen23ad.html | Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan | https://proceedings.mlr.press/v202/chen23ad.html | ICML 2023 | Existing reinforcement learning (RL) methods mainly utilize online or offline settings. The online methods explore the environment at an expensive time cost, and the offline methods efficiently obtain reward signals by sacrificing the exploration capability. We propose semi-offline RL, a novel paradigm that can smoothly transition from the offline setting to the online setting, balances exploration capability and training cost, and provides a theoretical foundation for comparing different RL settings. Based on the semi-offline MDP formulation, we present the RL setting that is optimal in terms of optimization cost, asymptotic error, and overfitting error bound. Extensive experiments show that our semi-offline RL approach is effective in various text generation tasks and datasets, and yields performance comparable to or usually better than state-of-the-art methods. |