Dataset Viewer (auto-converted to Parquet)
Columns: title (string), authors (string), abstract (string), url (string), detail_url (string), abs (string), OpenReview (string, single value), Download PDF (string), tags (string, single value)
A New Representation of Successor Features for Transfer across Dissimilar Environments
Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
Transfer in reinforcement learning is usually achieved through generalisation across tasks. Whilst many studies have investigated transferring knowledge when the reward function changes, they have assumed that the dynamics of the environments remain consistent. Many real-world RL problems require transfer among environments with different dynamics. To address this problem, we propose an approach based on successor features in which we model successor feature functions with Gaussian Processes permitting the source successor features to be treated as noisy measurements of the target successor feature function. Our theoretical analysis proves the convergence of this approach as well as the bounded error on modelling successor feature functions with Gaussian Processes in environments with both different dynamics and rewards. We demonstrate our method on benchmark datasets and show that it outperforms current baselines.
https://proceedings.mlr.press/v139/abdolshah21a.html
https://proceedings.mlr.press/v139/abdolshah21a.html
https://proceedings.mlr.press/v139/abdolshah21a.html
http://proceedings.mlr.press/v139/abdolshah21a/abdolshah21a.pdf
ICML 2021
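A minimal Python sketch of the modelling idea described in the abstract above: successor-feature values obtained in a source environment are treated as noisy observations of the target successor-feature function and smoothed with a Gaussian process. The toy state space, kernel choice, and data below are illustrative assumptions, not the paper's construction.

```python
# Hypothetical sketch: source successor features as noisy measurements of the
# target successor-feature function, modelled with a Gaussian process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# States sampled from a toy 1-D state space.
states = rng.uniform(-1.0, 1.0, size=(40, 1))

# Pretend these are successor-feature values computed in a source environment;
# they act as noisy measurements of the target function.
source_sf = np.sin(2.0 * states[:, 0]) + 0.1 * rng.normal(size=40)

# GP with an explicit noise term so the source measurements are treated as noisy.
kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(states, source_sf)

# Predict the target successor feature (with uncertainty) at new states.
query = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```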
Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling
Kuruge Darshana Abeyrathna, Bimal Bhattarai, Morten Goodwin, Saeed Rahimi Gorji, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, Rohan K. Yadav
Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the evaluation of clauses is fast, being based on binary operators, the voting makes it necessary to synchronize the clause evaluation, impeding parallelization. In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. In brief, every clause runs in its own thread for massive native parallelism. For each training example, we keep track of the class votes obtained from the clauses in local voting tallies. The local voting tallies allow us to detach the processing of each clause from the rest of the clauses, supporting decentralized learning. This means that the TM will, most of the time, operate on outdated voting tallies. We evaluated the proposed parallelization across diverse learning tasks and it turns out that our decentralized TM learning algorithm copes well with working on outdated data, resulting in no significant loss in learning accuracy. Furthermore, we show that the approach provides up to 50 times faster learning. Finally, learning time is almost constant for reasonable clause counts (employing from 20 to 7,000 clauses on a Tesla V100 GPU). For sufficiently large clause numbers, computation time increases approximately proportionally. Our parallel and asynchronous architecture thus allows processing of more massive datasets and operating with more clauses for higher accuracy.
https://proceedings.mlr.press/v139/abeyrathna21a.html
https://proceedings.mlr.press/v139/abeyrathna21a.html
https://proceedings.mlr.press/v139/abeyrathna21a.html
http://proceedings.mlr.press/v139/abeyrathna21a/abeyrathna21a.pdf
ICML 2021
Debiasing Model Updates for Improving Personalized Federated Training
Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas, Matthew Mattina, Paul Whatmough, Venkatesh Saligrama
We propose a novel method for federated learning that is customized specifically to the objective of a given edge device. In our proposed method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then personalized locally by each device to meet its specific objective. Different from the conventional federated learning setting, training customized models for each device is hindered by both the inherent data biases of the various devices, as well as the requirements imposed by the federated architecture. We propose gradient correction methods leveraging prior works, and explicitly de-bias the meta-model in the distributed heterogeneous data setting to learn personalized device models. We present convergence guarantees of our method for strongly convex, convex and nonconvex meta objectives. We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings.
https://proceedings.mlr.press/v139/acar21a.html
https://proceedings.mlr.press/v139/acar21a.html
https://proceedings.mlr.press/v139/acar21a.html
http://proceedings.mlr.press/v139/acar21a/acar21a.pdf
ICML 2021
Memory Efficient Online Meta Learning
Durmus Alp Emre Acar, Ruizhao Zhu, Venkatesh Saligrama
We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision. A fundamental concern arising in online meta-learning is the scalability of memory as more tasks are viewed over time. Heretofore, prior works have allowed for perfect recall leading to linear increase in memory with time. Different from prior works, in our method, prior task instances are allowed to be deleted. We propose to leverage prior task instances by means of a fixed-size state-vector, which is updated sequentially. Our theoretical analysis demonstrates that our proposed memory efficient online learning (MOML) method suffers sub-linear regret with convex loss functions and sub-linear local regret for nonconvex losses. On benchmark datasets we show that our method can outperform prior works even though they allow for perfect recall.
https://proceedings.mlr.press/v139/acar21b.html
https://proceedings.mlr.press/v139/acar21b.html
https://proceedings.mlr.press/v139/acar21b.html
http://proceedings.mlr.press/v139/acar21b/acar21b.pdf
ICML 2021
Robust Testing and Estimation under Manipulation Attacks
Jayadev Acharya, Ziteng Sun, Huanyu Zhang
We study robust testing and estimation of discrete distributions in the strong contamination model. Our results cover both centralized setting and distributed setting with general local information constraints including communication and LDP constraints. Our technique relates the strength of manipulation attacks to the earth-mover distance using Hamming distance as the metric between messages (samples) from the users. In the centralized setting, we provide optimal error bounds for both learning and testing. Our lower bounds under local information constraints build on the recent lower bound methods in distributed inference. In the communication constrained setting, we develop novel algorithms based on random hashing and an L1-L1 isometry.
https://proceedings.mlr.press/v139/acharya21a.html
https://proceedings.mlr.press/v139/acharya21a.html
https://proceedings.mlr.press/v139/acharya21a.html
http://proceedings.mlr.press/v139/acharya21a/acharya21a.pdf
ICML 2021
GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning
Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
Gaussian processes (GPs) are non-parametric, flexible models that work well in many tasks. Combining GPs with deep learning methods via deep kernel learning (DKL) is especially compelling due to the strong representational power induced by the network. However, inference in GPs, whether with or without DKL, can be computationally challenging on large datasets. Here, we propose GP-Tree, a novel method for multi-class classification with Gaussian processes and DKL. We develop a tree-based hierarchical model in which each internal node of the tree fits a GP to the data using the Pólya-Gamma augmentation scheme. As a result, our method scales well with both the number of classes and data size. We demonstrate the effectiveness of our method against other Gaussian process training baselines, and we show how our general GP approach achieves improved accuracy on standard incremental few-shot learning benchmarks.
https://proceedings.mlr.press/v139/achituve21a.html
https://proceedings.mlr.press/v139/achituve21a.html
https://proceedings.mlr.press/v139/achituve21a.html
http://proceedings.mlr.press/v139/achituve21a/achituve21a.pdf
ICML 2021
f-Domain Adversarial Learning: Theory and Algorithms
David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain, and a related labeled dataset. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences. It recovers the theoretical results from Ben-David et al. (2010a) as a special case and supports divergences used in practice. Based on this bound, we derive a new algorithmic framework that introduces a key correction in the original adversarial training method of Ganin et al. (2016). We show that many regularizers and ad-hoc objectives introduced over the last years in this framework are then not required to achieve performance comparable to (if not better than) state-of-the-art domain-adversarial methods. Experimental analysis conducted on real-world natural language and computer vision datasets show that our framework outperforms existing baselines, and obtains the best results for f-divergences that were not considered previously in domain-adversarial learning.
https://proceedings.mlr.press/v139/acuna21a.html
https://proceedings.mlr.press/v139/acuna21a.html
https://proceedings.mlr.press/v139/acuna21a.html
http://proceedings.mlr.press/v139/acuna21a/acuna21a.pdf
ICML 2021
Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Vincent Guigue, Romain Hennequin
Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale of a prediction. Task-dependent by nature, precise definitions of "relevance" encountered in the literature are however not always consistent. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution and from a more general debate on what good interpretations are. In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to verify the proposed properties and provide wrong solutions.
https://proceedings.mlr.press/v139/afchar21a.html
https://proceedings.mlr.press/v139/afchar21a.html
https://proceedings.mlr.press/v139/afchar21a.html
http://proceedings.mlr.press/v139/afchar21a/afchar21a.pdf
ICML 2021
Acceleration via Fractal Learning Rate Schedules
Naman Agarwal, Surbhi Goel, Cyril Zhang
In practical applications of iterative first-order optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from the numerical analysis literature as what we call the Chebyshev learning rate schedule for accelerating vanilla gradient descent, and show that the problem of mitigating instability leads to a fractal ordering of step sizes. We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective.
https://proceedings.mlr.press/v139/agarwal21a.html
https://proceedings.mlr.press/v139/agarwal21a.html
https://proceedings.mlr.press/v139/agarwal21a.html
http://proceedings.mlr.press/v139/agarwal21a/agarwal21a.pdf
ICML 2021
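A sketch of the Chebyshev learning-rate schedule for gradient descent on a convex quadratic, assuming the extreme curvatures mu and L are known. The step sizes below are the inverses of the Chebyshev-polynomial roots on [mu, L]; the paper's contribution is a stable fractal ordering of these steps, which this sketch does not implement (it applies them in their natural order, which can become numerically unstable for large T and condition numbers).

```python
# Minimal sketch of Chebyshev step sizes for gradient descent on a quadratic.
import numpy as np

def chebyshev_steps(mu, L, T):
    """Inverses of the Chebyshev-polynomial roots on [mu, L], used as step sizes."""
    t = np.arange(1, T + 1)
    roots = 0.5 * (L + mu) + 0.5 * (L - mu) * np.cos((2 * t - 1) * np.pi / (2 * T))
    return 1.0 / roots

rng = np.random.default_rng(0)
A = np.diag(rng.uniform(1.0, 10.0, size=20))   # quadratic f(x) = 0.5 x^T A x
mu, L = A.diagonal().min(), A.diagonal().max()

x = rng.normal(size=20)
for step in chebyshev_steps(mu, L, T=16):
    x = x - step * (A @ x)                     # gradient of 0.5 x^T A x is A x

print("final objective:", 0.5 * x @ A @ x)
```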
A Regret Minimization Approach to Iterative Learning Control
Naman Agarwal, Elad Hazan, Anirudha Majumdar, Karan Singh
We consider the setting of iterative learning control, or model-based policy learning in the presence of uncertain, time-varying dynamics. In this setting, we propose a new performance metric, planning regret, which replaces the standard stochastic uncertainty assumptions with worst case regret. Based on recent advances in non-stochastic control, we design a new iterative algorithm for minimizing planning regret that is more robust to model mismatch and uncertainty. We provide theoretical and empirical evidence that the proposed algorithm outperforms existing methods on several benchmarks.
https://proceedings.mlr.press/v139/agarwal21b.html
https://proceedings.mlr.press/v139/agarwal21b.html
https://proceedings.mlr.press/v139/agarwal21b.html
http://proceedings.mlr.press/v139/agarwal21b/agarwal21b.pdf
ICML 2021
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad which is a gradient based method, and a variant of LIME which is a perturbation based method. More specifically, we derive explicit closed form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite sample complexity bounds for the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory using extensive experimentation on both synthetic and real-world datasets.
https://proceedings.mlr.press/v139/agarwal21c.html
https://proceedings.mlr.press/v139/agarwal21c.html
https://proceedings.mlr.press/v139/agarwal21c.html
http://proceedings.mlr.press/v139/agarwal21c/agarwal21c.pdf
ICML 2021
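A small sketch of SmoothGrad, one of the two post hoc explanation methods analyzed in the abstract above: average the gradient of a model's output over Gaussian perturbations of the input. The toy model and its analytic gradient are illustrative, not from the paper.

```python
import numpy as np

def model_grad(x):
    # analytic gradient of the toy model f(x) = tanh(x[0]) + 0.5 * x[1] ** 2
    return np.array([1.0 - np.tanh(x[0]) ** 2, x[1]])

def smoothgrad(x, sigma=0.1, n_samples=200, seed=0):
    # average the gradient over Gaussian perturbations of the input
    rng = np.random.default_rng(seed)
    grads = [model_grad(x + sigma * rng.normal(size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([0.3, -1.2])
print("plain gradient:", model_grad(x))
print("SmoothGrad    :", smoothgrad(x))
```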
Label Inference Attacks from Log-loss Scores
Abhinav Aggarwal, Shiva Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
The log-loss (also known as cross-entropy loss) metric is ubiquitously used across machine learning applications to assess the performance of classification algorithms. In this paper, we investigate the problem of inferring the labels of a dataset from single (or multiple) log-loss score(s), without any other access to the dataset. Surprisingly, we show that for any finite number of label classes, it is possible to accurately infer the labels of the dataset from the reported log-loss score of a single carefully constructed prediction vector if we allow arbitrary precision arithmetic. Additionally, we present label inference algorithms (attacks) that succeed even under addition of noise to the log-loss scores and under limited precision arithmetic. All our algorithms rely on ideas from number theory and combinatorics and require no model training. We run experimental simulations on some real datasets to demonstrate the ease of running these attacks in practice.
https://proceedings.mlr.press/v139/aggarwal21a.html
https://proceedings.mlr.press/v139/aggarwal21a.html
https://proceedings.mlr.press/v139/aggarwal21a.html
http://proceedings.mlr.press/v139/aggarwal21a/aggarwal21a.pdf
ICML 2021
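The paper's attack uses a number-theoretic construction with a single carefully chosen prediction vector; the brute-force toy below only illustrates the threat model for tiny n: given one reported log-loss value and the prediction vector, the labeling can be recovered by search.

```python
# Naive brute-force illustration of label recovery from a single log-loss score.
import itertools
import numpy as np

def log_loss(y, p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
n = 12
true_y = rng.integers(0, 2, size=n)
preds = rng.uniform(0.05, 0.95, size=n)   # a generic prediction vector
reported = log_loss(true_y, preds)        # the only value the attacker sees

# Attacker: the labeling whose log-loss matches the reported score best.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda c: abs(log_loss(np.array(c), preds) - reported))
print("recovered:", np.array(best))
print("correct  :", true_y)
```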
Deep kernel processes
Laurence Aitchison, Adam Yang, Sebastian W. Ober
We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions. Remarkably, we find that deep Gaussian processes (DGPs), Bayesian neural networks (BNNs), infinite BNNs, and infinite BNNs with bottlenecks can all be written as deep kernel processes. For DGPs the equivalence arises because the Gram matrix formed by the inner product of features is Wishart distributed, and as we show, standard isotropic kernels can be written entirely in terms of this Gram matrix — we do not need knowledge of the underlying features. We define a tractable deep kernel process, the deep inverse Wishart process, and give a doubly-stochastic inducing-point variational inference scheme that operates on the Gram matrices, not on the features, as in DGPs. We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on fully-connected baselines.
https://proceedings.mlr.press/v139/aitchison21a.html
https://proceedings.mlr.press/v139/aitchison21a.html
https://proceedings.mlr.press/v139/aitchison21a.html
http://proceedings.mlr.press/v139/aitchison21a/aitchison21a.pdf
ICML 2021
How Does Loss Function Affect Generalization Performance of Deep Learning? Application to Human Age Estimation
Ali Akbari, Muhammad Awais, Manijeh Bashar, Josef Kittler
Good generalization performance across a wide variety of domains, in the presence of many external and internal sources of variation, is the fundamental goal of any machine learning algorithm. This paper theoretically proves that the choice of loss function matters for improving the generalization performance of deep learning-based systems. By deriving the generalization error bound for deep neural models trained by stochastic gradient descent, we pinpoint the characteristics of the loss function that are linked to the generalization error and can therefore be used for guiding the loss function selection process. In summary, our main statement in this paper is: choose a stable loss function, generalize better. Focusing on human age estimation from the face, which is a challenging topic in computer vision, we then propose a novel loss function for this learning problem. We theoretically prove that the proposed loss function achieves stronger stability, and consequently a tighter generalization error bound, compared to the other common loss functions for this problem. We have supported our findings theoretically, and demonstrated the merits of the guidance process experimentally, achieving significant improvements.
https://proceedings.mlr.press/v139/akbari21a.html
https://proceedings.mlr.press/v139/akbari21a.html
https://proceedings.mlr.press/v139/akbari21a.html
http://proceedings.mlr.press/v139/akbari21a/akbari21a.pdf
ICML 2021
On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
Shunta Akiyama, Taiji Suzuki
Deep learning empirically achieves high performance in many applications, but its training dynamics have not been fully understood theoretically. In this paper, we provide a theoretical analysis of training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. We show that with a specific regularization and sufficient over-parameterization, the student network can identify the parameters of the teacher network with high probability via gradient descent with a norm-dependent stepsize, even though the objective function is highly non-convex. The key theoretical tool is the measure representation of the neural networks and a novel application of a dual certificate argument for sparse estimation on a measure space. We analyze the global minima and global convergence property in the measure space.
https://proceedings.mlr.press/v139/akiyama21a.html
https://proceedings.mlr.press/v139/akiyama21a.html
https://proceedings.mlr.press/v139/akiyama21a.html
http://proceedings.mlr.press/v139/akiyama21a/akiyama21a.pdf
ICML 2021
Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks
Maxwell M Aladago, Lorenzo Torresani
In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated. By selecting a weight among a fixed set of random values for each individual connection, our method uncovers combinations of random weights that match the performance of traditionally-trained networks of the same capacity. We refer to our networks as "slot machines" where each reel (connection) contains a fixed set of symbols (random values). Our backpropagation algorithm "spins" the reels to seek "winning" combinations, i.e., selections of random weight values that minimize the given loss. Quite surprisingly, we find that allocating just a few random values to each connection (e.g., 8 values per connection) yields highly competitive combinations despite being dramatically more constrained compared to traditionally learned weights. Moreover, finetuning these combinations often improves performance over the trained baselines. A randomly initialized VGG-19 with 8 values per connection contains a combination that achieves 91% test accuracy on CIFAR-10. Our method also achieves an impressive performance of 98.2% on MNIST for neural networks containing only random weights.
https://proceedings.mlr.press/v139/aladago21a.html
https://proceedings.mlr.press/v139/aladago21a.html
https://proceedings.mlr.press/v139/aladago21a.html
http://proceedings.mlr.press/v139/aladago21a/aladago21a.pdf
ICML 2021
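A conceptual numpy sketch of the selection mechanism described above: each connection owns K fixed random candidate weights plus a score, and the effective weight is the candidate with the highest score. In the paper only the scores would be trained (the abstract's "spinning" of the reels); this sketch shows just the selection and forward pass, with illustrative layer sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim, K = 8, 4, 8

candidates = rng.normal(size=(out_dim, in_dim, K)) * 0.5   # fixed random values
scores = rng.normal(size=(out_dim, in_dim, K))             # the learned quantity

def effective_weights(candidates, scores):
    # pick, per connection, the candidate whose score is largest
    idx = scores.argmax(axis=-1)
    return np.take_along_axis(candidates, idx[..., None], axis=-1).squeeze(-1)

def forward(x):
    W = effective_weights(candidates, scores)
    return np.maximum(W @ x, 0.0)                          # ReLU layer

x = rng.normal(size=in_dim)
print(forward(x))
```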
A large-scale benchmark for few-shot program induction and synthesis
Ferran Alet, Javier Lopez-Contreras, James Koppel, Maxwell Nye, Armando Solar-Lezama, Tomas Lozano-Perez, Leslie Kaelbling, Joshua Tenenbaum
A landmark challenge for AI is to learn flexible, powerful representations from small numbers of examples. On an important class of tasks, hypotheses in the form of programs provide extreme generalization capabilities from surprisingly few examples. However, whereas large natural few-shot learning image benchmarks have spurred progress in meta-learning for deep networks, there is no comparably big, natural program-synthesis dataset that can play a similar role. This is because, whereas images are relatively easy to label from internet meta-data or annotated by non-experts, generating meaningful input-output examples for program induction has proven hard to scale. In this work, we propose a new way of leveraging unit tests and natural inputs for small programs as meaningful input-output examples for each sub-program of the overall program. This allows us to create a large-scale naturalistic few-shot program-induction benchmark and propose new challenges in this domain. The evaluation of multiple program induction and synthesis algorithms points to shortcomings of current methods and suggests multiple avenues for future work.
https://proceedings.mlr.press/v139/alet21a.html
https://proceedings.mlr.press/v139/alet21a.html
https://proceedings.mlr.press/v139/alet21a.html
http://proceedings.mlr.press/v139/alet21a/alet21a.pdf
ICML 2021
Robust Pure Exploration in Linear Bandits with Limited Budget
Ayya Alieva, Ashok Cutkosky, Abhimanyu Das
We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust to unknown levels of observation noise as well as to moderate levels of misspecification in the linear model. Our technique combines prior approaches to pure exploration in the multi-armed bandit problem with optimal experimental design algorithms to obtain both problem dependent and problem independent bounds. Our success probability is never worse than that of an algorithm that ignores the linear structure, but seamlessly takes advantage of such structure when possible. Furthermore, we only need the number of samples to scale with the dimension of the problem rather than the number of arms. We complement our theoretical results with empirical validation.
https://proceedings.mlr.press/v139/alieva21a.html
https://proceedings.mlr.press/v139/alieva21a.html
https://proceedings.mlr.press/v139/alieva21a.html
http://proceedings.mlr.press/v139/alieva21a/alieva21a.pdf
ICML 2021
Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh
We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits. Most previous communication-efficient approaches for this problem are limited to first-order optimization, and therefore have \emph{linear} dependence on the condition number in their communication complexity. We show that this dependence is not inherent: communication-efficient methods can in fact have sublinear dependence on the condition number. For this, we design and analyze the first communication-efficient distributed variants of preconditioned gradient descent for Generalized Linear Models, and for Newton’s method. Our results rely on a new technique for quantizing both the preconditioner and the descent direction at each step of the algorithms, while controlling their convergence rate. We also validate our findings experimentally, showing faster convergence and reduced communication relative to previous methods.
https://proceedings.mlr.press/v139/alimisis21a.html
https://proceedings.mlr.press/v139/alimisis21a.html
https://proceedings.mlr.press/v139/alimisis21a.html
http://proceedings.mlr.press/v139/alimisis21a/alimisis21a.pdf
ICML 2021
Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions
Pierre Alquier
We tackle the problem of online optimization with a general, possibly unbounded, loss function. It is well known that when the loss is bounded, the exponentially weighted aggregation strategy (EWA) leads to a regret of order $\sqrt{T}$ after $T$ steps. In this paper, we study a generalized aggregation strategy, where the weights no longer depend exponentially on the losses. Our strategy is based on Follow The Regularized Leader (FTRL): we minimize the expected losses plus a regularizer, which here is a $\phi$-divergence. When the regularizer is the Kullback-Leibler divergence, we obtain EWA as a special case. Using alternative divergences enables unbounded losses, at the cost of a worse regret bound in some cases.
https://proceedings.mlr.press/v139/alquier21a.html
https://proceedings.mlr.press/v139/alquier21a.html
https://proceedings.mlr.press/v139/alquier21a.html
http://proceedings.mlr.press/v139/alquier21a/alquier21a.pdf
ICML 2021
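For reference, a sketch of the exponentially weighted aggregation (EWA) baseline that the abstract above recovers as the Kullback-Leibler special case of its phi-divergence FTRL strategy. The experts and losses are synthetic.

```python
# EWA: weights proportional to exp(-eta * cumulative loss).
import numpy as np

def ewa_weights(cumulative_losses, eta):
    logits = -eta * cumulative_losses
    logits -= logits.max()                      # numerical stability
    w = np.exp(logits)
    return w / w.sum()

rng = np.random.default_rng(0)
n_experts, T, eta = 5, 200, 0.5
cum = np.zeros(n_experts)
total_loss = 0.0

for t in range(T):
    w = ewa_weights(cum, eta)
    losses = rng.uniform(size=n_experts)        # bounded losses in [0, 1]
    losses[0] *= 0.3                            # expert 0 is slightly better
    total_loss += w @ losses
    cum += losses

print("learner:", round(total_loss, 1), " best expert:", round(cum.min(), 1))
```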
Dataset Dynamics via Gradient Flows in Probability Space
David Alvarez-Melis, Nicolò Fusi
Various machine learning tasks, from generative modeling to domain adaptation, revolve around the concept of dataset transformation and manipulation. While various methods exist for transforming unlabeled datasets, principled methods to do so for labeled (e.g., classification) datasets are missing. In this work, we propose a novel framework for dataset transformation, which we cast as optimization over data-generating joint probability distributions. We approach this class of problems through Wasserstein gradient flows in probability space, and derive practical and efficient particle-based methods for a flexible but well-behaved class of objective functions. Through various experiments, we show that this framework can be used to impose constraints on classification datasets, adapt them for transfer learning, or to re-purpose fixed or black-box models to classify previously unseen datasets with high accuracy.
https://proceedings.mlr.press/v139/alvarez-melis21a.html
https://proceedings.mlr.press/v139/alvarez-melis21a.html
https://proceedings.mlr.press/v139/alvarez-melis21a.html
http://proceedings.mlr.press/v139/alvarez-melis21a/alvarez-melis21a.pdf
ICML 2021
Submodular Maximization subject to a Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity
Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi, Alberto Marchetti-Spaccamela, Rebecca Reiffenhäuser
The growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the adaptive complexity, capturing the number of sequential rounds of parallel computation needed. In this work we obtain the first constant-factor approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: one needs to account for the total number of function evaluations (or value queries) as well. Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2n)$. Besides the above improvement in adaptivity, this is also the first combinatorial approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives. Finally, we showcase our algorithms’ applicability on real-world datasets.
https://proceedings.mlr.press/v139/amanatidis21a.html
https://proceedings.mlr.press/v139/amanatidis21a.html
https://proceedings.mlr.press/v139/amanatidis21a.html
http://proceedings.mlr.press/v139/amanatidis21a/amanatidis21a.pdf
ICML 2021
Safe Reinforcement Learning with Linear Function Approximation
Sanae Amani, Christos Thrampoulidis, Lin Yang
Safety in reinforcement learning has become increasingly important in recent years. Yet, existing solutions either fail to strictly avoid choosing unsafe actions, which may lead to catastrophic results in safety-critical systems, or fail to provide regret guarantees for settings where safety constraints need to be learned. In this paper, we address both problems by first modeling safety as an unknown linear cost function of states and actions, which must always fall below a certain threshold. We then present algorithms, termed SLUCB-QVI and RSLUCB-QVI, for episodic Markov decision processes (MDPs) with linear function approximation. We show that SLUCB-QVI and RSLUCB-QVI, while with no safety violation, achieve a $\tilde{\mathcal{O}}\left(\kappa\sqrt{d^3H^3T}\right)$ regret, nearly matching that of state-of-the-art unsafe algorithms, where $H$ is the duration of each episode, $d$ is the dimension of the feature mapping, $\kappa$ is a constant characterizing the safety constraints, and $T$ is the total number of action plays. We further present numerical simulations that corroborate our theoretical findings.
https://proceedings.mlr.press/v139/amani21a.html
https://proceedings.mlr.press/v139/amani21a.html
https://proceedings.mlr.press/v139/amani21a.html
http://proceedings.mlr.press/v139/amani21a/amani21a.pdf
ICML 2021
Automatic variational inference with cascading flows
Luca Ambrogioni, Gianluigi Silvestri, Marcel van Gerven
The automation of probabilistic reasoning is one of the primary aims of machine learning. Recently, the confluence of variational inference and deep learning has led to powerful and flexible automatic inference methods that can be trained by stochastic gradient descent. In particular, normalizing flows are highly parameterized deep models that can fit arbitrarily complex posterior densities. However, normalizing flows struggle in highly structured probabilistic programs as they need to relearn the forward-pass of the program. Automatic structured variational inference (ASVI) remedies this problem by constructing variational programs that embed the forward-pass. Here, we combine the flexibility of normalizing flows and the prior-embedding property of ASVI in a new family of variational programs, which we name cascading flows. A cascading flows program interposes a newly designed highway flow architecture in between the conditional distributions of the prior program so as to steer it toward the observed data. These programs can be constructed automatically from an input probabilistic program and can also be amortized automatically. We evaluate the performance of the new variational programs in a series of structured inference problems. We find that cascading flows have much higher performance than both normalizing flows and ASVI in a large set of structured inference problems.
https://proceedings.mlr.press/v139/ambrogioni21a.html
https://proceedings.mlr.press/v139/ambrogioni21a.html
https://proceedings.mlr.press/v139/ambrogioni21a.html
http://proceedings.mlr.press/v139/ambrogioni21a/ambrogioni21a.pdf
ICML 2021
Sparse Bayesian Learning via Stepwise Regression
Sebastian E. Ament, Carla P. Gomes
Sparse Bayesian Learning (SBL) is a powerful framework for attaining sparsity in probabilistic models. Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression. Further, we derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP. Our guarantees for Forward Regression improve on deterministic and probabilistic results for Orthogonal Matching Pursuit with noise. Our analysis of Backward Regression culminates in a bound on the residual of the optimal solution to the subset selection problem that, if satisfied, guarantees the optimality of the result. To our knowledge, this bound is the first that can be computed in polynomial time and depends chiefly on the smallest singular value of the matrix. We report numerical experiments using a variety of feature selection algorithms. Notably, RMP and its limiting variant are both efficient and maintain strong performance with correlated features.
https://proceedings.mlr.press/v139/ament21a.html
https://proceedings.mlr.press/v139/ament21a.html
https://proceedings.mlr.press/v139/ament21a.html
http://proceedings.mlr.press/v139/ament21a/ament21a.pdf
ICML 2021
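A minimal sketch of classical forward (stepwise) regression, the procedure the abstract above connects to Relevance Matching Pursuit: greedily add the feature most correlated with the current residual, then refit by least squares. The data and the fixed sparsity level k are illustrative.

```python
import numpy as np

def forward_regression(X, y, k):
    n, d = X.shape
    support, residual = [], y.copy()
    available = list(range(d))
    for _ in range(k):
        # pick the unused column most correlated with the current residual
        scores = np.abs(X[:, available].T @ residual)
        support.append(available.pop(int(scores.argmax())))
        # refit least squares on the selected support
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    return support, coef

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_coef = np.zeros(20)
true_coef[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ true_coef + 0.05 * rng.normal(size=100)

support, coef = forward_regression(X, y, k=3)
print(sorted(support), np.round(coef, 2))
```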
Locally Persistent Exploration in Continuous Control Tasks with Sparse Rewards
Susan Amin, Maziar Gomrokchi, Hossein Aboutalebi, Harsh Satija, Doina Precup
A major challenge in reinforcement learning is the design of exploration strategies, especially for environments with sparse reward structures and continuous state and action spaces. Intuitively, if the reinforcement signal is very scarce, the agent should rely on some form of short-term memory in order to cover its environment efficiently. We propose a new exploration method, based on two intuitions: (1) the choice of the next exploratory action should depend not only on the (Markovian) state of the environment, but also on the agent’s trajectory so far, and (2) the agent should utilize a measure of spread in the state space to avoid getting stuck in a small region. Our method leverages concepts often used in statistical physics to provide explanations for the behavior of simplified (polymer) chains in order to generate persistent (locally self-avoiding) trajectories in state space. We discuss the theoretical properties of locally self-avoiding walks and their ability to provide a kind of short-term memory through a decaying temporal correlation within the trajectory. We provide empirical evaluations of our approach in a simulated 2D navigation task, as well as higher-dimensional MuJoCo continuous control locomotion tasks with sparse rewards.
https://proceedings.mlr.press/v139/amin21a.html
https://proceedings.mlr.press/v139/amin21a.html
https://proceedings.mlr.press/v139/amin21a.html
http://proceedings.mlr.press/v139/amin21a/amin21a.pdf
ICML 2021
Preferential Temporal Difference Learning
Nishanth Anand, Doina Precup
Temporal-Difference (TD) learning is a general and very useful tool for estimating the value function of a given policy, which in turn is required to find good policies. Generally speaking, TD learning updates states whenever they are visited. When the agent lands in a state, its value can be used to compute the TD-error, which is then propagated to other states. However, it may be interesting, when computing updates, to take into account other information than whether a state is visited or not. For example, some states might be more important than others (such as states which are frequently seen in a successful trajectory). Or, some states might have unreliable value estimates (for example, due to partial observability or lack of data), making their values less desirable as targets. We propose an approach to re-weighting states used in TD updates, both when they are the input and when they provide the target for the update. We prove that our approach converges with linear function approximation and illustrate its desirable empirical behaviour compared to other TD-style methods.
https://proceedings.mlr.press/v139/anand21a.html
https://proceedings.mlr.press/v139/anand21a.html
https://proceedings.mlr.press/v139/anand21a.html
http://proceedings.mlr.press/v139/anand21a/anand21a.pdf
ICML 2021
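A loose toy illustration (not the paper's preferential TD algorithm) of re-weighting TD updates by a per-state preference: a TD(0) update on a small chain whose step is scaled by beta[s] when state s is the input of the update. The paper additionally reweights states when they serve as targets, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, alpha = 6, 0.9, 0.1
beta = np.array([1.0, 1.0, 0.2, 0.2, 1.0, 1.0])   # low preference for two states
V = np.zeros(n_states)

for episode in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # zero bootstrap target at the terminal state
        td_error = r + gamma * V[s_next] * (s_next < n_states - 1) - V[s]
        V[s] += alpha * beta[s] * td_error        # preference-weighted update
        s = s_next

print(np.round(V, 2))
```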
Unitary Branching Programs: Learnability and Lower Bounds
Fidel Ernesto Diaz Andino, Maria Kokkou, Mateus De Oliveira Oliveira, Farhad Vadiee
Bounded width branching programs are a formalism that can be used to capture the notion of non-uniform constant-space computation. In this work, we study a generalized version of bounded width branching programs where instructions are defined by unitary matrices of bounded dimension. We introduce a new learning framework for these branching programs that leverages a combination of local search techniques with gradient descent over Riemannian manifolds. We also show that gapped, read-once branching programs of bounded dimension can be learned with a polynomial number of queries in the presence of a teacher. Finally, we provide explicit near-quadratic size lower-bounds for bounded-dimension unitary branching programs, and exponential size lower-bounds for bounded-dimension read-once gapped unitary branching programs. The first lower bound is proven using a combination of Neciporuk’s lower bound technique with classic results from algebraic geometry. The second lower bound is proven within the framework of communication complexity theory.
https://proceedings.mlr.press/v139/andino21a.html
https://proceedings.mlr.press/v139/andino21a.html
https://proceedings.mlr.press/v139/andino21a.html
http://proceedings.mlr.press/v139/andino21a/andino21a.pdf
ICML 2021
The Logical Options Framework
Brandon Araki, Xiao Li, Kiran Vodrahalli, Jonathan Decastro, Micah Fry, Daniela Rus
Learning composable policies for environments with complex rules and tasks is a challenging problem. We introduce a hierarchical reinforcement learning framework called the Logical Options Framework (LOF) that learns policies that are satisfying, optimal, and composable. LOF efficiently learns policies that satisfy tasks by representing the task as an automaton and integrating it into learning and planning. We provide and prove conditions under which LOF will learn satisfying, optimal policies. And lastly, we show how LOF’s learned policies can be composed to satisfy unseen tasks with only 10-50 retraining steps on our benchmarks. We evaluate LOF on four tasks in discrete and continuous domains, including a 3D pick-and-place environment.
https://proceedings.mlr.press/v139/araki21a.html
https://proceedings.mlr.press/v139/araki21a.html
https://proceedings.mlr.press/v139/araki21a.html
http://proceedings.mlr.press/v139/araki21a/araki21a.pdf
ICML 2021
Annealed Flow Transport Monte Carlo
Michael Arbel, Alex Matthews, Arnaud Doucet
Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions. We propose here a novel Monte Carlo algorithm, Annealed Flow Transport (AFT), that builds upon AIS and SMC and combines them with normalizing flows (NFs) for improved performance. This method transports a set of particles using not only importance sampling (IS), Markov chain Monte Carlo (MCMC) and resampling steps, as in SMC, but also NFs, which are learned sequentially to push particles towards the successive annealed targets. We provide limit theorems for the resulting Monte Carlo estimates of the normalizing constant and expectations with respect to the target distribution. Additionally, we show that a continuous-time scaling limit of the population version of AFT is given by a Feynman–Kac measure which simplifies to the law of a controlled diffusion for expressive NFs. We demonstrate experimentally the benefits and limitations of our methodology on a variety of applications.
https://proceedings.mlr.press/v139/arbel21a.html
https://proceedings.mlr.press/v139/arbel21a.html
https://proceedings.mlr.press/v139/arbel21a.html
http://proceedings.mlr.press/v139/arbel21a/arbel21a.pdf
ICML 2021
Permutation Weighting
David Arbour, Drew Dimmery, Arjun Sondhi
A commonly applied approach for estimating causal effects from observational data is to apply weights which render treatments independent of observed pre-treatment covariates. Recently emphasis has been placed on deriving balancing weights which explicitly target this independence condition. In this work we introduce permutation weighting, a method for estimating balancing weights using a standard binary classifier (regardless of cardinality of treatment). A large class of probabilistic classifiers may be used in this method; the choice of loss for the classifier implies the particular definition of balance. We bound bias and variance in terms of the excess risk of the classifier, show that these disappear asymptotically, and demonstrate that our classification problem directly minimizes imbalance. Additionally, hyper-parameter tuning and model selection can be performed with standard cross-validation methods. Empirical evaluations indicate that permutation weighting provides favorable performance in comparison to existing methods.
https://proceedings.mlr.press/v139/arbour21a.html
https://proceedings.mlr.press/v139/arbour21a.html
https://proceedings.mlr.press/v139/arbour21a.html
http://proceedings.mlr.press/v139/arbour21a/arbour21a.pdf
ICML 2021
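A sketch of the permutation-weighting recipe summarized above: stack the observed (treatment, covariates) pairs with a copy whose treatments are randomly permuted, train a binary classifier to tell them apart, and use the implied odds as balancing weights. The data-generating process and the logistic-regression classifier are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))
propensity = 1.0 / (1.0 + np.exp(-x[:, 0]))          # treatment depends on x
t = rng.binomial(1, propensity)

# Class 1: permuted pairs (treatment independent of x); class 0: observed pairs.
t_perm = rng.permutation(t)
t_all = np.concatenate([t, t_perm])
x_all = np.vstack([x, x])
features = np.column_stack([t_all, x_all, t_all[:, None] * x_all])  # with interactions
labels = np.concatenate([np.zeros(n), np.ones(n)])

clf = LogisticRegression(max_iter=1000).fit(features, labels)

# Weight for each observed unit: odds that its (t, x) pair looks "permuted".
p = clf.predict_proba(features[:n])[:, 1]
weights = p / (1.0 - p)
print("weight summary:", np.round([weights.min(), weights.mean(), weights.max()], 2))
```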
Analyzing the tree-layer structure of Deep Forests
Ludovic Arnould, Claire Boyer, Erwan Scornet
Random forests on the one hand, and neural networks on the other hand, have met great success in the machine learning community for their predictive performance. Combinations of both have been proposed in the literature, notably leading to the so-called deep forests (DF) (Zhou & Feng, 2019). In this paper, our aim is not to benchmark DF performances but to investigate instead their underlying mechanisms. Additionally, the DF architecture can generally be simplified into simpler and more computationally efficient shallow forest networks. Despite some instability, the latter may outperform standard predictive tree-based methods. We exhibit a theoretical framework in which a shallow tree network is shown to enhance the performance of classical decision trees. In such a setting, we provide tight theoretical lower and upper bounds on its excess risk. These theoretical results show the interest of tree-network architectures for well-structured data provided that the first layer, acting as a data encoder, is rich enough.
https://proceedings.mlr.press/v139/arnould21a.html
https://proceedings.mlr.press/v139/arnould21a.html
https://proceedings.mlr.press/v139/arnould21a.html
http://proceedings.mlr.press/v139/arnould21a/arnould21a.pdf
ICML 2021
Dropout: Explicit Forms and Capacity Control
Raman Arora, Peter Bartlett, Poorya Mianjy, Nathan Srebro
We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix completion, where it induces a distribution-dependent regularizer that equals the weighted trace-norm of the product of the factors. In deep learning, we show that the distribution-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks. These developments enable us to give concrete generalization error bounds for the dropout algorithm in both matrix completion as well as training deep neural networks.
https://proceedings.mlr.press/v139/arora21a.html
https://proceedings.mlr.press/v139/arora21a.html
https://proceedings.mlr.press/v139/arora21a.html
http://proceedings.mlr.press/v139/arora21a/arora21a.pdf
ICML 2021
Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients
Artem Artemev, David R. Burt, Mark van der Wilk
We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without matrix factorisation of the full kernel matrix. We show that approximate maximum likelihood learning of model parameters by maximising our lower bound retains many benefits of the sparse variational approach while reducing the bias introduced into hyperparameter learning. The basis of our bound is a more careful analysis of the log-determinant term appearing in the log marginal likelihood, as well as using the method of conjugate gradients to derive tight lower bounds on the term involving a quadratic form. Our approach is a step forward in unifying methods relying on lower bound maximisation (e.g. variational methods) and iterative approaches based on conjugate gradients for training Gaussian processes. In experiments, we show improved predictive performance with our model for a comparable amount of training time compared to other conjugate gradient based approaches.
https://proceedings.mlr.press/v139/artemev21a.html
https://proceedings.mlr.press/v139/artemev21a.html
https://proceedings.mlr.press/v139/artemev21a.html
http://proceedings.mlr.press/v139/artemev21a/artemev21a.pdf
ICML 2021
Deciding What to Learn: A Rate-Distortion Approach
Dilip Arumugam, Benjamin Van Roy
Agents that learn to select optimal actions represent a prominent focus of the sequential decision-making literature. In the face of a complex environment or constraints on time and resources, however, aiming to synthesize such an optimal policy can become infeasible. These scenarios give rise to an important trade-off between the information an agent must acquire to learn and the sub-optimality of the resulting policy. While an agent designer has a preference for how this trade-off is resolved, existing approaches further require that the designer translate these preferences into a fixed learning target for the agent. In this work, leveraging rate-distortion theory, we automate this process such that the designer need only express their preferences via a single hyperparameter and the agent is endowed with the ability to compute its own learning targets that best achieve the desired trade-off. We establish a general bound on expected discounted regret for an agent that decides what to learn in this manner along with computational experiments that illustrate the expressiveness of designer preferences and even show improvements over Thompson sampling in identifying an optimal policy.
https://proceedings.mlr.press/v139/arumugam21a.html
https://proceedings.mlr.press/v139/arumugam21a.html
https://proceedings.mlr.press/v139/arumugam21a.html
http://proceedings.mlr.press/v139/arumugam21a/arumugam21a.pdf
ICML 2021
Private Adaptive Gradient Methods for Convex Optimization
Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar
We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm. We provide upper bounds on the regret of both algorithms and show that the bounds are (worst-case) optimal. As a consequence of our development, we show that our private versions of AdaGrad outperform adaptive SGD, which in turn outperforms traditional SGD in scenarios with non-isotropic gradients where (non-private) Adagrad provably outperforms SGD. The major challenge is that the isotropic noise typically added for privacy dominates the signal in gradient geometry for high-dimensional problems; approaches to this that effectively optimize over lower-dimensional subspaces simply ignore the actual problems that varying gradient geometries introduce. In contrast, we study non-isotropic clipping and noise addition, developing a principled theoretical approach; the consequent procedures also enjoy significantly stronger empirical performance than prior approaches.
https://proceedings.mlr.press/v139/asi21a.html
https://proceedings.mlr.press/v139/asi21a.html
https://proceedings.mlr.press/v139/asi21a.html
http://proceedings.mlr.press/v139/asi21a/asi21a.pdf
ICML 2021
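A generic sketch, not the paper's algorithm, of the kind of private adaptive step the abstract above discusses: clip each per-example gradient, add Gaussian noise calibrated to the clipping norm, and scale coordinates AdaGrad-style by accumulated squared gradients. All constants (clip norm, noise multiplier, learning rate) are arbitrary illustrative values, and no formal privacy accounting is done here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr, clip, noise_mult = 10, 500, 0.1, 1.0, 1.0

X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w = np.zeros(d)
accum = np.zeros(d)                                   # AdaGrad accumulator
for step in range(200):
    idx = rng.integers(n, size=32)
    # per-example gradients of 0.5 * (x.w - y)^2
    per_example = X[idx] * (X[idx] @ w - y[idx])[:, None]
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy = clipped.sum(axis=0) + noise_mult * clip * rng.normal(size=d)
    grad = noisy / len(idx)
    accum += grad ** 2
    w -= lr * grad / (np.sqrt(accum) + 1e-8)          # AdaGrad-style scaling

ls = np.linalg.lstsq(X, y, rcond=None)[0]
print("parameter error vs least squares:", round(np.linalg.norm(w - ls), 3))
```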
Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry
Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors the optimal excess population loss of any $(\epsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/\epsilon n.$ The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020) with a new analysis of private regularized mirror descent. It applies to $\ell_p$ bounded domains for $p\in [1,2]$ and queries at most $n^{3/2}$ gradients improving over the best previously known algorithm for the $\ell_2$ case which needs $n^2$ gradients. Further, we show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/\epsilon n)^{2/3}.$ This bound is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data. We also show that the lower bound in this case is the minimum of the two rates mentioned above.
https://proceedings.mlr.press/v139/asi21b.html
https://proceedings.mlr.press/v139/asi21b.html
https://proceedings.mlr.press/v139/asi21b.html
http://proceedings.mlr.press/v139/asi21b/asi21b.pdf
ICML 2021
Combinatorial Blocking Bandits with Stochastic Delays
Alexia Atsidakou, Orestis Papadigenopoulos, Soumya Basu, Constantine Caramanis, Sanjay Shakkottai
Recent work has considered natural variations of the multi-armed bandit problem, where the reward distribution of each arm is a special function of the time passed since its last pulling. In this direction, a simple (yet widely applicable) model is that of blocking bandits, where an arm becomes unavailable for a deterministic number of rounds after each play. In this work, we extend the above model in two directions: (i) We consider the general combinatorial setting where more than one arm can be played at each round, subject to feasibility constraints. (ii) We allow the blocking time of each arm to be stochastic. We first study the computational/unconditional hardness of the above setting and identify the necessary conditions for the problem to become tractable (even in an approximate sense). Based on these conditions, we provide a tight analysis of the approximation guarantee of a natural greedy heuristic that always plays the maximum expected reward feasible subset among the available (non-blocked) arms. When the arms’ expected rewards are unknown, we adapt the above heuristic into a bandit algorithm, based on UCB, for which we provide sublinear (approximate) regret guarantees, matching the theoretical lower bounds in the limiting case of absence of delays.
https://proceedings.mlr.press/v139/atsidakou21a.html
https://proceedings.mlr.press/v139/atsidakou21a.html
https://proceedings.mlr.press/v139/atsidakou21a.html
http://proceedings.mlr.press/v139/atsidakou21a/atsidakou21a.pdf
ICML 2021
Dichotomous Optimistic Search to Quantify Human Perception
Julien Audiffren
In this paper we address a variant of the continuous multi-armed bandits problem, called the threshold estimation problem, which is at the heart of many psychometric experiments. Here, the objective is to estimate the sensitivity threshold for an unknown psychometric function Psi, which is assumed to be non-decreasing and continuous. Our algorithm, Dichotomous Optimistic Search (DOS), efficiently solves this task by taking inspiration from hierarchical multi-armed bandits and black-box optimization. Compared to previous approaches, DOS is model free and only makes minimal assumptions on the smoothness of Psi, while having strong theoretical guarantees that compare favorably to recent methods from both psychophysics and global optimization. We also empirically evaluate DOS and show that it significantly outperforms these methods, both in experiments that mimic the conduct of a psychometric experiment, and in tests with large pull budgets that illustrate its faster convergence rate.
https://proceedings.mlr.press/v139/audiffren21a.html
https://proceedings.mlr.press/v139/audiffren21a.html
https://proceedings.mlr.press/v139/audiffren21a.html
http://proceedings.mlr.press/v139/audiffren21a/audiffren21a.pdf
ICML 2021
Federated Learning under Arbitrary Communication Patterns
Dmitrii Avdiukhin, Shiva Kasiviswanathan
Federated Learning is a distributed learning setting where the goal is to train a centralized model with training data distributed over a large number of heterogeneous clients, each with unreliable and relatively slow network connections. A common optimization approach used in federated learning is based on the idea of local SGD: each client runs some number of SGD steps locally and then the updated local models are averaged to form the updated global model on the coordinating server. In this paper, we investigate the performance of an asynchronous version of local SGD wherein the clients can communicate with the server at arbitrary time intervals. Our main result shows that for smooth strongly convex and smooth nonconvex functions we achieve convergence rates that match the synchronous version that requires all clients to communicate simultaneously.
https://proceedings.mlr.press/v139/avdiukhin21a.html
https://proceedings.mlr.press/v139/avdiukhin21a.html
https://proceedings.mlr.press/v139/avdiukhin21a.html
http://proceedings.mlr.press/v139/avdiukhin21a/avdiukhin21a.pdf
ICML 2021
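A toy sketch of the local-SGD template the abstract above builds on: each client runs a few local SGD steps on its own data, then the server averages the resulting models. The paper analyzes an asynchronous variant with arbitrary communication patterns; this sketch is the plain synchronous loop with synthetic heterogeneous least-squares clients.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, rounds, local_steps, lr = 5, 8, 50, 5, 0.05
w_true = rng.normal(size=d)

# Heterogeneous client data for a least-squares objective.
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(100, d)) + rng.normal(scale=0.5, size=d)  # shifted features
    y = X @ w_true + 0.1 * rng.normal(size=100)
    clients.append((X, y))

w_global = np.zeros(d)
for _ in range(rounds):
    local_models = []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(local_steps):
            i = rng.integers(len(y), size=10)              # mini-batch
            grad = X[i].T @ (X[i] @ w - y[i]) / len(i)
            w -= lr * grad
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)               # server averaging

print("distance to w_true:", round(np.linalg.norm(w_global - w_true), 4))
```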
Asynchronous Distributed Learning : Adapting to Gradient Delays without Prior Knowledge
Rotem Zamir Aviv, Ido Hakimi, Assaf Schuster, Kfir Yehuda Levy
We consider stochastic convex optimization problems, where several machines act asynchronously in parallel while sharing a common memory. We propose a robust training method for the constrained setting and derive non-asymptotic convergence guarantees that do not depend on prior knowledge of update delays, objective smoothness, and gradient variance. Conversely, existing methods for this setting crucially rely on this prior knowledge, which renders them unsuitable for essentially all shared-resources computational environments, such as clouds and data centers. Concretely, existing approaches are unable to accommodate changes in the delays which result from dynamic allocation of the machines, while our method implicitly adapts to such changes.
https://proceedings.mlr.press/v139/aviv21a.html
https://proceedings.mlr.press/v139/aviv21a.html
https://proceedings.mlr.press/v139/aviv21a.html
http://proceedings.mlr.press/v139/aviv21a/aviv21a.pdf
ICML 2021
Decomposable Submodular Function Minimization via Maximum Flow
Kyriakos Axiotis, Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, Adrian Vladu
This paper bridges discrete and continuous optimization approaches for decomposable submodular function minimization, in both the standard and parametric settings. We provide improved running times for this problem by reducing it to a number of calls to a maximum flow oracle. When each function in the decomposition acts on O(1) elements of the ground set V and is polynomially bounded, our running time is up to polylogarithmic factors equal to that of solving maximum flow in a sparse graph with O(|V|) vertices and polynomial integral capacities. We achieve this by providing a simple iterative method which can optimize to high precision any convex function defined on the submodular base polytope, provided we can efficiently minimize it on the base polytope corresponding to the cut function of a certain graph that we construct. We solve this minimization problem by lifting the solutions of a parametric cut problem, which we obtain via a new efficient combinatorial reduction to maximum flow. This reduction is of independent interest and implies some previously unknown bounds for the parametric minimum s,t-cut problem in multiple settings.
https://proceedings.mlr.press/v139/axiotis21a.html
https://proceedings.mlr.press/v139/axiotis21a.html
https://proceedings.mlr.press/v139/axiotis21a.html
http://proceedings.mlr.press/v139/axiotis21a/axiotis21a.pdf
ICML 2021
Differentially Private Query Release Through Adaptive Projection
Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit A. Siva
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like k-way marginals, subject to differential privacy. Our algorithm makes adaptive use of a continuous relaxation of the Projection Mechanism, which answers queries on the private dataset using simple perturbation, and then attempts to find the synthetic dataset that most closely matches the noisy answers. We use a continuous relaxation of the synthetic dataset domain which makes the projection loss differentiable, and allows us to use efficient ML optimization techniques and tooling. Rather than answering all queries up front, we make judicious use of our privacy budget by iteratively finding queries for which our (relaxed) synthetic data has high error, and then repeating the projection. Randomized rounding allows us to obtain synthetic data in the original schema. We perform experimental evaluations across a range of parameters and datasets, and find that our method outperforms existing algorithms on large query classes.
https://proceedings.mlr.press/v139/aydore21a.html
https://proceedings.mlr.press/v139/aydore21a.html
https://proceedings.mlr.press/v139/aydore21a.html
http://proceedings.mlr.press/v139/aydore21a/aydore21a.pdf
ICML 2021
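The relax-and-project loop described in the abstract above can be illustrated compactly: perturb query answers with Gaussian noise, gradient-descend a continuous (relaxed) synthetic dataset so that its answers match the noisy ones, then apply randomized rounding. The sketch below is a much-simplified illustration under these assumptions; it omits the adaptive query selection and any actual privacy accounting or noise calibration, and is not the paper's algorithm.

```python
import torch

torch.manual_seed(0)
n, d, m, sigma = 500, 8, 64, 0.05
data = (torch.rand(n, d) < 0.3).float()          # stand-in "private" binary dataset

pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]
true_answers = torch.stack([(data[:, i] * data[:, j]).mean() for i, j in pairs])
noisy_answers = true_answers + sigma * torch.randn(len(pairs))  # simple perturbation (noise scale illustrative)

synth = torch.full((m, d), 0.5, requires_grad=True)             # relaxed synthetic dataset in [0, 1]
opt = torch.optim.Adam([synth], lr=0.05)
for _ in range(500):
    s = synth.clamp(0, 1)
    answers = torch.stack([(s[:, i] * s[:, j]).mean() for i, j in pairs])
    loss = ((answers - noisy_answers) ** 2).sum()               # differentiable projection loss
    opt.zero_grad()
    loss.backward()
    opt.step()

rounded = (torch.rand(m, d) < synth.detach().clamp(0, 1)).float()   # back to the original schema
final = torch.stack([(rounded[:, i] * rounded[:, j]).mean() for i, j in pairs])
print("max marginal error after rounding:", (final - true_answers).abs().max().item())
```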
On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry
Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel regime solution, whereas small initialization leads to so-called “rich regimes”. However, the initialization structure is richer than the overall scale alone and involves relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.
https://proceedings.mlr.press/v139/azulay21a.html
https://proceedings.mlr.press/v139/azulay21a.html
https://proceedings.mlr.press/v139/azulay21a.html
http://proceedings.mlr.press/v139/azulay21a/azulay21a.pdf
ICML 2021
On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification
Zahra Babaiee, Ramin Hasani, Mathias Lechner, Daniela Rus, Radu Grosu
Robustness to variations in lighting conditions is a key objective for any deep vision system. To this end, our paper extends the receptive field of convolutional neural networks with two residual components, ubiquitous in the visual processing system of vertebrates: On-center and off-center pathways, with an excitatory center and inhibitory surround; OOCS for short. The On-center pathway is excited by the presence of a light stimulus in its center, but not in its surround, whereas the Off-center pathway is excited by the absence of a light stimulus in its center, but not in its surround. We design OOCS pathways via a difference of Gaussians, with their variance computed analytically from the size of the receptive fields. OOCS pathways complement each other in their response to light stimuli, thereby ensuring a strong edge-detection capability and, as a result, accurate and robust inference under challenging lighting conditions. We provide extensive empirical evidence showing that networks supplied with OOCS pathways gain accuracy and illumination-robustness from the novel edge representation, compared to other baselines.
https://proceedings.mlr.press/v139/babaiee21a.html
https://proceedings.mlr.press/v139/babaiee21a.html
https://proceedings.mlr.press/v139/babaiee21a.html
http://proceedings.mlr.press/v139/babaiee21a/babaiee21a.pdf
ICML 2021
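As a rough illustration of the on-/off-center pathways described above, the snippet below builds difference-of-Gaussians kernels and convolves them with an image. The kernel size and the center/surround standard deviations are arbitrary illustrative choices, not the analytically derived values used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def dog_kernels(size=7, sigma_center=1.0, sigma_surround=2.0):
    center = gaussian_kernel(size, sigma_center)
    surround = gaussian_kernel(size, sigma_surround)
    on_center = center - surround          # responds to light in the center, dark surround
    off_center = surround - center         # responds to a dark center, light surround
    return on_center, off_center

rng = np.random.default_rng(0)
image = rng.random((64, 64))               # placeholder image
on_k, off_k = dog_kernels()
on_response = convolve2d(image, on_k, mode="same", boundary="symm")
off_response = convolve2d(image, off_k, mode="same", boundary="symm")
print(on_response.shape, off_response.shape)
```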
Uniform Convergence, Adversarial Spheres and a Simple Remedy
Gregor Bachmann, Seyed-Mohsen Moosavi-Dezfooli, Thomas Hofmann
Previous work has cast doubt on the general framework of uniform convergence and its ability to explain generalization in neural networks. By considering a specific dataset, it was observed that a neural network completely misclassifies a projection of the training data (adversarial set), rendering any existing generalization bound based on uniform convergence vacuous. We provide an extensive theoretical investigation of the previously studied data setting through the lens of infinitely-wide models. We prove that the Neural Tangent Kernel (NTK) also suffers from the same phenomenon and we uncover its origin. We highlight the important role of the output bias and show theoretically as well as empirically how a sensible choice completely mitigates the problem. We identify sharp phase transitions in the accuracy on the adversarial set and study its dependency on the training sample size. As a result, we are able to characterize critical sample sizes beyond which the effect disappears. Moreover, we study decompositions of a neural network into a clean and noisy part by considering its canonical decomposition into its different eigenfunctions and show empirically that for too small bias the adversarial phenomenon still persists.
https://proceedings.mlr.press/v139/bachmann21a.html
https://proceedings.mlr.press/v139/bachmann21a.html
https://proceedings.mlr.press/v139/bachmann21a.html
http://proceedings.mlr.press/v139/bachmann21a/bachmann21a.pdf
ICML 2021
Faster Kernel Matrix Algebra via Density Estimation
Arturs Backurs, Piotr Indyk, Cameron Musco, Tal Wagner
We study fast algorithms for computing basic properties of an n x n positive semidefinite kernel matrix K corresponding to n points x_1,...,x_n in R^d. In particular, we consider estimating the sum of kernel matrix entries, along with its top eigenvalue and eigenvector. These are some of the most basic problems defined over kernel matrices. We show that the sum of matrix entries can be estimated up to a multiplicative factor of 1+\epsilon in time sublinear in n and linear in d for many popular kernel functions, including the Gaussian, exponential, and rational quadratic kernels. For these kernels, we also show that the top eigenvalue (and a witnessing approximate eigenvector) can be approximated to a multiplicative factor of 1+\epsilon in time sub-quadratic in n and linear in d. Our algorithms represent significant advances in the best known runtimes for these problems. They leverage the positive definiteness of the kernel matrix, along with a recent line of work on efficient kernel density estimation.
https://proceedings.mlr.press/v139/backurs21a.html
https://proceedings.mlr.press/v139/backurs21a.html
https://proceedings.mlr.press/v139/backurs21a.html
http://proceedings.mlr.press/v139/backurs21a/backurs21a.pdf
ICML 2021
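For context, the quantity whose fast estimation the paper studies, the sum of all kernel matrix entries, can be approximated naively by sampling random pairs, as in the sketch below. This Monte Carlo baseline needs many samples and is not the sublinear, kernel-density-estimation-based algorithm of the paper; it only makes the estimation target concrete, with an illustrative Gaussian kernel and bandwidth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))

def estimate_kernel_sum(X, n_samples=50000, bandwidth=1.0):
    """Naive Monte Carlo estimate of sum_{i,j} exp(-||x_i - x_j||^2 / (2 h^2))."""
    n = len(X)
    i = rng.integers(n, size=n_samples)
    j = rng.integers(n, size=n_samples)
    vals = np.exp(-np.sum((X[i] - X[j]) ** 2, axis=1) / (2 * bandwidth**2))
    return n * n * vals.mean()

# Exact value for comparison, via pairwise squared distances.
sq = (X ** 2).sum(1)
D = sq[:, None] + sq[None, :] - 2 * X @ X.T
exact = np.exp(-np.clip(D, 0, None) / 2).sum()
print("estimate:", estimate_kernel_sum(X), "exact:", exact)
```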
Robust Reinforcement Learning using Least Squares Policy Iteration with Provable Performance Guarantees
Kishan Panaganti Badrinath, Dileep Kalathil
This paper addresses the problem of model-free reinforcement learning for Robust Markov Decision Process (RMDP) with large state spaces. The goal of the RMDPs framework is to find a policy that is robust against the parameter uncertainties due to the mismatch between the simulator model and real-world settings. We first propose the Robust Least Squares Policy Evaluation algorithm, which is a multi-step online model-free learning algorithm for policy evaluation. We prove the convergence of this algorithm using stochastic approximation techniques. We then propose Robust Least Squares Policy Iteration (RLSPI) algorithm for learning the optimal robust policy. We also give a general weighted Euclidean norm bound on the error (closeness to optimality) of the resulting policy. Finally, we demonstrate the performance of our RLSPI algorithm on some benchmark problems from OpenAI Gym.
https://proceedings.mlr.press/v139/badrinath21a.html
https://proceedings.mlr.press/v139/badrinath21a.html
https://proceedings.mlr.press/v139/badrinath21a.html
http://proceedings.mlr.press/v139/badrinath21a/badrinath21a.pdf
ICML 2021
Skill Discovery for Exploration and Planning using Deep Skill Graphs
Akhil Bagaria, Jason K Senthil, George Konidaris
We introduce a new skill-discovery algorithm that builds a discrete graph representation of large continuous MDPs, where nodes correspond to skill subgoals and the edges to skill policies. The agent constructs this graph during an unsupervised training phase where it interleaves discovering skills and planning using them to gain coverage over ever-increasing portions of the state-space. Given a novel goal at test time, the agent plans with the acquired skill graph to reach a nearby state, then switches to learning to reach the goal. We show that the resulting algorithm, Deep Skill Graphs, outperforms both flat and existing hierarchical reinforcement learning methods on four difficult continuous control tasks.
https://proceedings.mlr.press/v139/bagaria21a.html
https://proceedings.mlr.press/v139/bagaria21a.html
https://proceedings.mlr.press/v139/bagaria21a.html
http://proceedings.mlr.press/v139/bagaria21a/bagaria21a.pdf
ICML 2021
Locally Adaptive Label Smoothing Improves Predictive Churn
Dara Bahri, Heinrich Jiang
Training modern neural networks is an inherently noisy process that can lead to high \emph{prediction churn}– disagreements between re-trainings of the same model due to factors such as randomization in the parameter initialization and mini-batches– even when the trained models all attain similar accuracies. Such prediction churn can be very undesirable in practice. In this paper, we present several baselines for reducing churn and show that training on soft labels obtained by adaptively smoothing each example’s label based on the example’s neighboring labels often outperforms the baselines on churn while improving accuracy on a variety of benchmark classification tasks and model architectures.
https://proceedings.mlr.press/v139/bahri21a.html
https://proceedings.mlr.press/v139/bahri21a.html
https://proceedings.mlr.press/v139/bahri21a.html
http://proceedings.mlr.press/v139/bahri21a/bahri21a.pdf
ICML 2021
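A minimal sketch of the kind of locally adaptive smoothing described above: each example's one-hot label is mixed with the empirical label distribution of its nearest neighbors, and the resulting soft labels would then be used as training targets with a standard cross-entropy loss. The mixing rule, neighborhood size and dataset here are illustrative assumptions; the paper's exact smoothing scheme may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
n_classes, k, alpha = 3, 10, 0.3

one_hot = np.eye(n_classes)[y]
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                          # idx[:, 0] is the point itself
neighbor_dist = one_hot[idx[:, 1:]].mean(axis=1)   # empirical label distribution of the k neighbors
soft_labels = (1 - alpha) * one_hot + alpha * neighbor_dist   # locally adaptive smoothing
print(soft_labels[:3])
```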
How Important is the Train-Validation Split in Meta-Learning?
Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason Lee, Sham Kakade, Huan Wang, Caiming Xiong
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from multiple existing tasks. A common practice in meta-learning is to perform a train-validation split (\emph{train-val method}) where the prior adapts to the task on one split of the data, and the resulting predictor is evaluated on another split. Despite its prevalence, the importance of the train-validation split is not well understood either in theory or in practice, particularly in comparison to the more direct \emph{train-train method}, which uses all the per-task data for both training and evaluation. We provide a detailed theoretical study on whether and when the train-validation split is helpful in the linear centroid meta-learning problem. In the agnostic case, we show that the expected loss of the train-val method is minimized at the optimal prior for meta testing, and this is not the case for the train-train method in general without structural assumptions on the data. In contrast, in the realizable case where the data are generated from linear models, we show that both the train-val and train-train losses are minimized at the optimal prior in expectation. Further, perhaps surprisingly, our main result shows that the train-train method achieves a \emph{strictly better} excess loss in this realizable case, even when the regularization parameter and split ratio are optimally tuned for both methods. Our results highlight that sample splitting may not always be preferable, especially when the data is realizable by the model. We validate our theories by experimentally showing that the train-train method can indeed outperform the train-val method, on both simulations and real meta-learning tasks.
https://proceedings.mlr.press/v139/bai21a.html
https://proceedings.mlr.press/v139/bai21a.html
https://proceedings.mlr.press/v139/bai21a.html
http://proceedings.mlr.press/v139/bai21a/bai21a.pdf
ICML 2021
Stabilizing Equilibrium Models by Jacobian Regularization
Shaojie Bai, Vladlen Koltun, Zico Kolter
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single non-linear layer. These models have been shown to achieve performance competitive with the state-of-the-art deep networks while using significantly less memory. Yet they are also slower, brittle to architectural choices, and introduce potential instability to the model. In this paper, we propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models. We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains (e.g., WikiText-103 language modeling and ImageNet classification). Using this method, we demonstrate, for the first time, an implicit-depth model that runs with approximately the same speed and level of performance as popular conventional deep networks such as ResNet-101, while still maintaining the constant memory footprint and architectural simplicity of DEQs. Code is available at https://github.com/locuslab/deq.
https://proceedings.mlr.press/v139/bai21b.html
https://proceedings.mlr.press/v139/bai21b.html
https://proceedings.mlr.press/v139/bai21b.html
http://proceedings.mlr.press/v139/bai21b/bai21b.pdf
ICML 2021
Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification
Yu Bai, Song Mei, Huan Wang, Caiming Xiong
Modern machine learning models with high accuracy are often miscalibrated—the predicted top probability does not reflect the actual accuracy, and tends to be \emph{over-confident}. It is commonly believed that such over-confidence is mainly due to \emph{over-parametrization}, in particular when the model is large enough to memorize the training data and maximize the confidence. In this paper, we show theoretically that over-parametrization is not the only reason for over-confidence. We prove that \emph{logistic regression is inherently over-confident}, in the realizable, under-parametrized setting where the data is generated from the logistic model, and the sample size is much larger than the number of parameters. Further, this over-confidence happens for general well-specified binary classification problems as long as the activation is symmetric and concave on the positive part. Perhaps surprisingly, we also show that over-confidence is not always the case—there exists another activation function (and a suitable loss function) under which the learned classifier is \emph{under-confident} at some probability values. Overall, our theory provides a precise characterization of calibration in realizable binary classification, which we verify on simulations and real data experiments.
https://proceedings.mlr.press/v139/bai21c.html
https://proceedings.mlr.press/v139/bai21c.html
https://proceedings.mlr.press/v139/bai21c.html
http://proceedings.mlr.press/v139/bai21c/bai21c.pdf
ICML 2021
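The setting analyzed above is easy to probe empirically: draw data from a well-specified logistic model with far more samples than parameters, fit logistic regression, and compare the average predicted top probability with test accuracy. The snippet below is only such a sanity check under illustrative sizes; it does not reproduce the paper's theoretical characterization, and the measured gap is purely indicative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 20000, 5                          # heavily under-parametrized, well-specified setting
w = rng.normal(size=d)

def sample(n):
    X = rng.normal(size=(n, d))
    p = 1 / (1 + np.exp(-X @ w))
    y = (rng.random(n) < p).astype(int)  # labels drawn from the logistic model
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(n)

clf = LogisticRegression(C=1e6, max_iter=1000).fit(X_train, y_train)  # (nearly) unregularized
probs = clf.predict_proba(X_test)
confidence = probs.max(axis=1)                       # predicted top probability
accuracy = (probs.argmax(axis=1) == y_test).mean()
print(f"mean confidence {confidence.mean():.3f} vs accuracy {accuracy:.3f}")
```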
Principled Exploration via Optimistic Bootstrapping and Backward Induction
Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang
One principled approach for provably efficient exploration is incorporating the upper confidence bound (UCB) into the value function as a bonus. However, UCB is tailored to linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and the LSVI-UCB in the linear setting. We propagate future uncertainty in a time-consistent manner through episodic backward update, which exploits the theoretical advantage and empirically improves the sample-efficiency. Our experiments in the MNIST maze and the Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.
https://proceedings.mlr.press/v139/bai21d.html
https://proceedings.mlr.press/v139/bai21d.html
https://proceedings.mlr.press/v139/bai21d.html
http://proceedings.mlr.press/v139/bai21d/bai21d.pdf
ICML 2021
GLSearch: Maximum Common Subgraph Detection via Learning to Search
Yunsheng Bai, Derek Xu, Yizhou Sun, Wei Wang
Detecting the Maximum Common Subgraph (MCS) between two input graphs is fundamental for applications in drug synthesis, malware detection, cloud computing, etc. However, MCS computation is NP-hard, and state-of-the-art MCS solvers rely on heuristic search algorithms which in practice cannot find good solutions for large graph pairs given a limited computation budget. We propose GLSearch, a Graph Neural Network (GNN)-based learning-to-search model. Our model is built upon the branch-and-bound algorithm, which selects one pair of nodes from the two input graphs to expand at a time. We propose a novel GNN-based Deep Q-Network (DQN) to select the node pair, making the search process much faster. Experiments on synthetic and real-world graph pairs demonstrate that our model learns a search strategy that is able to detect significantly larger common subgraphs than existing MCS solvers given the same computation budget. GLSearch can be potentially extended to solve many other combinatorial problems with constraints on graphs.
https://proceedings.mlr.press/v139/bai21e.html
https://proceedings.mlr.press/v139/bai21e.html
https://proceedings.mlr.press/v139/bai21e.html
http://proceedings.mlr.press/v139/bai21e/bai21e.pdf
ICML 2021
Breaking the Limits of Message Passing Graph Neural Networks
Muhammet Balcilar, Pierre Heroux, Benoit Gauzere, Pascal Vasseur, Sebastien Adam, Paul Honeine
Since Message Passing (Graph) Neural Networks (MPNNs) have a linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still raise a lot of interest even though their theoretical expressive power is limited to the first-order Weisfeiler-Lehman test (1-WL). In this paper, we show that if the graph convolution supports are designed in the spectral domain by a non-linear custom function of eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test and experimentally as powerful as existing 3-WL models, while remaining spatially localized. Moreover, by designing custom filter functions, outputs can have various frequency components that allow the convolution process to learn different relationships between a given input graph signal and its associated properties. So far, the best 3-WL-equivalent graph neural networks have a computational complexity of $\mathcal{O}(n^3)$ with memory usage of $\mathcal{O}(n^2)$, rely on non-local update mechanisms, and do not provide the spectral richness of the output profile. The proposed method overcomes all these aforementioned problems and reaches state-of-the-art results in many downstream tasks.
https://proceedings.mlr.press/v139/balcilar21a.html
https://proceedings.mlr.press/v139/balcilar21a.html
https://proceedings.mlr.press/v139/balcilar21a.html
http://proceedings.mlr.press/v139/balcilar21a/balcilar21a.pdf
ICML 2021
Instance Specific Approximations for Submodular Maximization
Eric Balkanski, Sharon Qian, Yaron Singer
The predominant measure for the performance of an algorithm is its worst-case approximation guarantee. While worst-case approximations give desirable robustness guarantees, they can differ significantly from the performance of an algorithm in practice. For the problem of monotone submodular maximization under a cardinality constraint, the greedy algorithm is known to obtain a 1-1/e approximation guarantee, which is optimal for a polynomial-time algorithm. However, very little is known about the approximation achieved by greedy and other submodular maximization algorithms on real instances. We develop an algorithm that gives an instance-specific approximation for any solution of an instance of monotone submodular maximization under a cardinality constraint. This algorithm uses a novel dual approach to submodular maximization. In particular, it relies on the construction of a lower bound to the dual objective that can also be exactly minimized. We use this algorithm to show that on a wide variety of real-world datasets and objectives, greedy and other algorithms find solutions that approximate the optimal solution significantly better than the 1-1/e ≈ 0.63 worst-case approximation guarantee, often exceeding 0.9.
https://proceedings.mlr.press/v139/balkanski21a.html
https://proceedings.mlr.press/v139/balkanski21a.html
https://proceedings.mlr.press/v139/balkanski21a.html
http://proceedings.mlr.press/v139/balkanski21a/balkanski21a.pdf
ICML 2021
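For reference, the greedy algorithm whose instance-specific quality the paper certifies looks as follows on a toy coverage (monotone submodular) objective under a cardinality constraint. The instance and objective are illustrative; the paper's dual-based certificate, which bounds how close such a solution is to optimal on a given instance, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets, universe, k = 50, 200, 10
sets = [set(rng.choice(universe, size=rng.integers(5, 30), replace=False))
        for _ in range(n_sets)]

def coverage(chosen):
    """Number of covered elements: a monotone submodular objective."""
    return len(set().union(*[sets[i] for i in chosen])) if chosen else 0

chosen = []
for _ in range(k):                          # classic greedy under a cardinality constraint
    gains = [(coverage(chosen + [i]) - coverage(chosen), i)
             for i in range(n_sets) if i not in chosen]
    best_gain, best_i = max(gains)          # pick the element with the largest marginal gain
    chosen.append(best_i)

print("greedy coverage:", coverage(chosen))
```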
Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment
Philip J Ball, Cong Lu, Jack Parker-Holder, Stephen Roberts
Reinforcement learning from large-scale offline datasets provides us with the ability to learn policies without potentially unsafe or impractical exploration. Significant progress has been made in the past few years in dealing with the challenge of correcting for differing behavior between the data collection and learned policies. However, little attention has been paid to potentially changing dynamics when transferring a policy to the online setting, where the performance of existing methods can drop by up to 90%. In this paper we address this problem with Augmented World Models (AugWM). We augment a learned dynamics model with simple transformations that seek to capture potential changes in physical properties of the robot, leading to more robust policies. We not only train our policy in this new setting, but also provide it with the sampled augmentation as a context, allowing it to adapt to changes in the environment. At test time we learn the context in a self-supervised fashion by approximating the augmentation which corresponds to the new environment. We rigorously evaluate our approach on over 100 different changed dynamics settings, and show that this simple approach can significantly improve the zero-shot generalization of a recent state-of-the-art baseline, often achieving successful policies where the baseline fails.
https://proceedings.mlr.press/v139/ball21a.html
https://proceedings.mlr.press/v139/ball21a.html
https://proceedings.mlr.press/v139/ball21a.html
http://proceedings.mlr.press/v139/ball21a/ball21a.pdf
ICML 2021
Regularized Online Allocation Problems: Fairness and Beyond
Santiago Balseiro, Haihao Lu, Vahab Mirrokni
Online allocation problems with resource constraints have a rich history in computer science and operations research. In this paper, we introduce the regularized online allocation problem, a variant that includes a non-linear regularizer acting on the total resource consumption. In this problem, requests repeatedly arrive over time and, for each request, a decision maker needs to take an action that generates a reward and consumes resources. The objective is to simultaneously maximize total rewards and the value of the regularizer subject to the resource constraints. Our primary motivation is the online allocation of internet advertisements wherein firms seek to maximize additive objectives such as the revenue or efficiency of the allocation. By introducing a regularizer, firms can account for the fairness of the allocation or, alternatively, punish under-delivery of advertisements—two common desiderata in internet advertising markets. We design an algorithm when arrivals are drawn independently from a distribution that is unknown to the decision maker. Our algorithm is simple, fast, and attains the optimal order of sub-linear regret compared to the optimal allocation with the benefit of hindsight. Numerical experiments confirm the effectiveness of the proposed algorithm and of the regularizers in an internet advertising application.
https://proceedings.mlr.press/v139/balseiro21a.html
https://proceedings.mlr.press/v139/balseiro21a.html
https://proceedings.mlr.press/v139/balseiro21a.html
http://proceedings.mlr.press/v139/balseiro21a/balseiro21a.pdf
ICML 2021
Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers
Yujia Bao, Shiyu Chang, Regina Barzilay
We propose Predict then Interpolate (PI), a simple algorithm for learning correlations that are stable across environments. The algorithm follows from the intuition that when using a classifier trained on one environment to make predictions on examples from another environment, its mistakes are informative as to which correlations are unstable. In this work, we prove that by interpolating the distributions of the correct predictions and the wrong predictions, we can uncover an oracle distribution where the unstable correlation vanishes. Since the oracle interpolation coefficients are not accessible, we use group distributionally robust optimization to minimize the worst-case risk across all such interpolations. We evaluate our method on both text classification and image classification. Empirical results demonstrate that our algorithm is able to learn robust classifiers (outperforms IRM by 23.85% on synthetic environments and 12.41% on natural environments). Our code and data are available at https://github.com/YujiaBao/Predict-then-Interpolate.
https://proceedings.mlr.press/v139/bao21a.html
https://proceedings.mlr.press/v139/bao21a.html
https://proceedings.mlr.press/v139/bao21a.html
http://proceedings.mlr.press/v139/bao21a/bao21a.pdf
ICML 2021
Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
Fan Bao, Kun Xu, Chongxuan Li, Lanqing Hong, Jun Zhu, Bo Zhang
This paper presents new estimates of the score function and its gradient with respect to the model parameters in a general energy-based latent variable model (EBLVM). The score function and its gradient can be expressed as combinations of expectation and covariance terms over the (generally intractable) posterior of the latent variables. New estimates are obtained by introducing a variational posterior to approximate the true posterior in these terms. The variational posterior is trained to minimize a certain divergence (e.g., the KL divergence) between itself and the true posterior. Theoretically, the divergence characterizes upper bounds of the bias of the estimates. In principle, our estimates can be applied to a wide range of objectives, including kernelized Stein discrepancy (KSD), score matching (SM)-based methods and exact Fisher divergence with a minimal model assumption. In particular, these estimates applied to SM-based methods outperform existing methods in learning EBLVMs on several image datasets.
https://proceedings.mlr.press/v139/bao21b.html
https://proceedings.mlr.press/v139/bao21b.html
https://proceedings.mlr.press/v139/bao21b.html
http://proceedings.mlr.press/v139/bao21b/bao21b.pdf
ICML 2021
Compositional Video Synthesis with Action Graphs
Amir Bar, Roei Herzig, Xiaolong Wang, Anna Rohrbach, Gal Chechik, Trevor Darrell, Amir Globerson
Videos of actions are complex signals containing rich compositional structure in space and time. Current video generation methods lack the ability to condition the generation on multiple coordinated and potentially simultaneous timed actions. To address this challenge, we propose to represent the actions in a graph structure called Action Graph and present the new "Action Graph To Video" synthesis task. Our generative model for this task (AG2Vid) disentangles motion and appearance features, and by incorporating a scheduling mechanism for actions facilitates a timely and coordinated video generation. We train and evaluate AG2Vid on CATER and Something-Something V2 datasets, which results in videos that have better visual quality and semantic consistency compared to baselines. Finally, our model demonstrates zero-shot abilities by synthesizing novel compositions of the learned actions.
https://proceedings.mlr.press/v139/bar21a.html
https://proceedings.mlr.press/v139/bar21a.html
https://proceedings.mlr.press/v139/bar21a.html
http://proceedings.mlr.press/v139/bar21a/bar21a.pdf
ICML 2021
Approximating a Distribution Using Weight Queries
Nadav Barak, Sivan Sabato
We consider a novel challenge: approximating a distribution without the ability to randomly sample from that distribution. We study how such an approximation can be obtained using *weight queries*. Given some data set of examples, a weight query presents one of the examples to an oracle, which returns the probability, according to the target distribution, of observing examples similar to the presented example. This oracle can represent, for instance, counting queries to a database of the target population, or an interface to a search engine which returns the number of results that match a given search. We propose an interactive algorithm that iteratively selects data set examples and performs corresponding weight queries. The algorithm finds a reweighting of the data set that approximates the weights according to the target distribution, using a limited number of weight queries. We derive an approximation bound on the total variation distance between the reweighting found by the algorithm and the best achievable reweighting. Our algorithm takes inspiration from the UCB approach common in multi-armed bandits problems, and combines it with a new discrepancy estimator and a greedy iterative procedure. In addition to our theoretical guarantees, we demonstrate in experiments the advantages of the proposed algorithm over several baselines. A python implementation of the proposed algorithm and of all the experiments can be found at https://github.com/Nadav-Barak/AWP.
https://proceedings.mlr.press/v139/barak21a.html
https://proceedings.mlr.press/v139/barak21a.html
https://proceedings.mlr.press/v139/barak21a.html
http://proceedings.mlr.press/v139/barak21a/barak21a.pdf
ICML 2021
Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization
Aseem Baranwal, Kimon Fountoulakis, Aukosh Jagannath
Recently there has been increased interest in semi-supervised classification in the presence of graphical information. A new class of learning models has emerged that relies, at its most basic level, on classifying the data after first applying a graph convolution. To understand the merits of this approach, we study the classification of a mixture of Gaussians, where the data corresponds to the node attributes of a stochastic block model. We show that graph convolution extends the regime in which the data is linearly separable by a factor of roughly $1/\sqrt{D}$, where $D$ is the expected degree of a node, as compared to the mixture model data on its own. Furthermore, we find that the linear classifier obtained by minimizing the cross-entropy loss after the graph convolution generalizes to out-of-distribution data where the unseen data can have different intra- and inter-class edge probabilities from the training data.
https://proceedings.mlr.press/v139/baranwal21a.html
https://proceedings.mlr.press/v139/baranwal21a.html
https://proceedings.mlr.press/v139/baranwal21a.html
http://proceedings.mlr.press/v139/baranwal21a/baranwal21a.pdf
ICML 2021
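The effect described above can be mimicked on synthetic data: sample node features from a two-class Gaussian mixture, attach a stochastic block model graph, and compare a linear classifier before and after one step of degree-normalized graph convolution. All parameters below are illustrative, and training accuracy is used only as a crude proxy for linear separability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, p_in, p_out = 400, 10, 0.05, 0.01
labels = np.repeat([0, 1], n // 2)
mu = np.zeros(d); mu[0] = 1.0
X = rng.normal(size=(n, d)) + np.where(labels[:, None] == 1, mu, -mu)   # Gaussian mixture features

# Stochastic block model adjacency with self-loops.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T + np.eye(n)
deg = A.sum(1)
X_conv = (A / deg[:, None]) @ X                  # one step of (row-normalized) graph convolution

for name, feats in [("raw features", X), ("after graph convolution", X_conv)]:
    acc = LogisticRegression(max_iter=1000).fit(feats, labels).score(feats, labels)
    print(name, "training accuracy:", round(acc, 3))
```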
Training Quantized Neural Networks to Global Optimality via Semidefinite Programming
Burak Bartan, Mert Pilanci
Neural networks (NNs) have been extremely successful across many tasks in machine learning. Quantization of NN weights has become an important topic due to its impact on their energy efficiency, inference time and deployment on hardware. Although post-training quantization is well-studied, training optimal quantized NNs involves combinatorial non-convex optimization problems which appear intractable. In this work, we introduce a convex optimization strategy to train quantized NNs with polynomial activations. Our method leverages hidden convexity in two-layer neural networks from the recent literature, semidefinite lifting, and Grothendieck’s identity. Surprisingly, we show that certain quantized NN problems can be solved to global optimality provably in polynomial time in all relevant parameters via tight semidefinite relaxations. We present numerical examples to illustrate the effectiveness of our method.
https://proceedings.mlr.press/v139/bartan21a.html
https://proceedings.mlr.press/v139/bartan21a.html
https://proceedings.mlr.press/v139/bartan21a.html
http://proceedings.mlr.press/v139/bartan21a/bartan21a.pdf
ICML 2021
Beyond $\log^2(T)$ regret for decentralized bandits in matching markets
Soumya Basu, Karthik Abinav Sankararaman, Abishek Sankararaman
We design decentralized algorithms for regret minimization in the two sided matching market with one-sided bandit feedback that significantly improves upon the prior works (Liu et al.\,2020a, Sankararaman et al.\,2020, Liu et al.\,2020b). First, for general markets, for any $\varepsilon > 0$, we design an algorithm that achieves a $O(\log^{1+\varepsilon}(T))$ regret to the agent-optimal stable matching, with unknown time horizon $T$, improving upon the $O(\log^{2}(T))$ regret achieved in (Liu et al.\,2020b). Second, we provide the optimal $\Theta(\log(T))$ agent-optimal regret for markets satisfying {\em uniqueness consistency} – markets where leaving participants don’t alter the original stable matching. Previously, $\Theta(\log(T))$ regret was achievable (Sankararaman et al.\,2020, Liu et al.\,2020b) in the much restricted {\em serial dictatorship} setting, when all arms have the same preference over the agents. We propose a phase based algorithm, where in each phase, besides deleting the globally communicated dominated arms the agents locally delete arms with which they collide often. This \emph{local deletion} is pivotal in breaking deadlocks arising from rank heterogeneity of agents across arms. We further demonstrate superiority of our algorithm over existing works through simulations.
https://proceedings.mlr.press/v139/basu21a.html
https://proceedings.mlr.press/v139/basu21a.html
https://proceedings.mlr.press/v139/basu21a.html
http://proceedings.mlr.press/v139/basu21a/basu21a.pdf
ICML 2021
Optimal Thompson Sampling strategies for support-aware CVaR bandits
Dorian Baudry, Romain Gautron, Emilie Kaufmann, Odalric Maillard
In this paper we study a multi-armed bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level alpha of the reward distribution. While existing works in this setting mainly focus on Upper Confidence Bound algorithms, we introduce a new Thompson Sampling approach for CVaR bandits on bounded rewards that is flexible enough to solve a variety of problems grounded on physical resources. Building on a recent work by Riou & Honda (2020), we introduce B-CVTS for continuous bounded rewards and M-CVTS for multinomial distributions. On the theoretical side, we provide a non-trivial extension of their analysis that enables us to theoretically bound their CVaR regret minimization performance. Strikingly, our results show that these strategies are the first to provably achieve asymptotic optimality in CVaR bandits, matching the corresponding asymptotic lower bounds for this setting. Further, we illustrate empirically the benefit of Thompson Sampling approaches both in a realistic environment simulating a use-case in agriculture and on various synthetic examples.
https://proceedings.mlr.press/v139/baudry21a.html
https://proceedings.mlr.press/v139/baudry21a.html
https://proceedings.mlr.press/v139/baudry21a.html
http://proceedings.mlr.press/v139/baudry21a/baudry21a.pdf
ICML 2021
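As a reminder of the criterion these CVaR bandit strategies optimize, the snippet below computes an empirical lower-tail CVaR at level alpha and compares two bounded arms with equal means but different tails. The Thompson Sampling machinery itself is not reproduced; only the arm-quality measure is illustrated, with arbitrary example distributions.

```python
import numpy as np

def empirical_cvar(rewards, alpha=0.2):
    """Average of the worst alpha-fraction of rewards (lower-tail CVaR estimate)."""
    rewards = np.sort(np.asarray(rewards))
    k = max(1, int(np.ceil(alpha * len(rewards))))
    return rewards[:k].mean()

rng = np.random.default_rng(0)
# Two bounded arms with the same mean (0.5) but different lower tails.
arm_a = rng.beta(2, 2, size=5000)   # more spread, heavier lower tail
arm_b = rng.beta(5, 5, size=5000)   # more concentrated
print("CVaR_0.2 arm A:", empirical_cvar(arm_a), "arm B:", empirical_cvar(arm_b))
```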
On Limited-Memory Subsampling Strategies for Bandits
Dorian Baudry, Yoan Russac, Olivier Cappé
There has been a recent surge of interest in non-parametric bandit algorithms based on subsampling. One drawback however of these approaches is the additional complexity required by random subsampling and the storage of the full history of rewards. Our first contribution is to show that a simple deterministic subsampling rule, proposed in the recent work of \citet{baudry2020sub} under the name of “last-block subsampling”, is asymptotically optimal in one-parameter exponential families. In addition, we prove that these guarantees also hold when limiting the algorithm memory to a polylogarithmic function of the time horizon. These findings open up new perspectives, in particular for non-stationary scenarios in which the arm distributions evolve over time. We propose a variant of the algorithm in which only the most recent observations are used for subsampling, achieving optimal regret guarantees under the assumption of a known number of abrupt changes. Extensive numerical simulations highlight the merits of this approach, particularly when the changes are not only affecting the means of the rewards.
https://proceedings.mlr.press/v139/baudry21b.html
https://proceedings.mlr.press/v139/baudry21b.html
https://proceedings.mlr.press/v139/baudry21b.html
http://proceedings.mlr.press/v139/baudry21b/baudry21b.pdf
ICML 2021
Generalized Doubly Reparameterized Gradient Estimators
Matthias Bauer, Andriy Mnih
Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders. Doubly-reparameterized gradients (DReGs) improve on the RT for multi-sample variational bounds by applying reparameterization a second time for an additional reduction in variance. Here, we develop two generalizations of the DReGs estimator and show that they can be used to train conditional and hierarchical VAEs on image modelling tasks more effectively. We first extend the estimator to hierarchical models with several stochastic layers by showing how to treat additional score function terms due to the hierarchical variational posterior. We then generalize DReGs to score functions of arbitrary distributions instead of just those of the sampling distribution, which makes the estimator applicable to the parameters of the prior in addition to those of the posterior.
https://proceedings.mlr.press/v139/bauer21a.html
https://proceedings.mlr.press/v139/bauer21a.html
https://proceedings.mlr.press/v139/bauer21a.html
http://proceedings.mlr.press/v139/bauer21a/bauer21a.pdf
ICML 2021
Directional Graph Networks
Dominique Beaini, Saro Passaro, Vincent Létourneau, Will Hamilton, Gabriele Corso, Pietro Lió
The lack of anisotropic kernels in graph neural networks (GNNs) strongly limits their expressiveness, contributing to well-known issues such as over-smoothing. To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according to topologically derived directional flows. First, by defining a vector field in the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then, we propose the use of the Laplacian eigenvectors as such a vector field. We show that the method generalizes CNNs on an $n$-dimensional grid and is provably more discriminative than standard GNNs regarding the Weisfeiler-Lehman 1-WL test. We evaluate our method on different standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and 11% to 32% on the molecular ZINC dataset, and a relative increase in precision of 1.6% on the MolPCBA dataset. An important outcome of this work is that it enables graph networks to embed directions in an unsupervised way, thus allowing a better representation of the anisotropic features in different physical or biological problems.
https://proceedings.mlr.press/v139/beaini21a.html
https://proceedings.mlr.press/v139/beaini21a.html
https://proceedings.mlr.press/v139/beaini21a.html
http://proceedings.mlr.press/v139/beaini21a/beaini21a.pdf
ICML 2021
Policy Analysis using Synthetic Controls in Continuous-Time
Alexis Bellot, Mihaela van der Schaar
Counterfactual estimation using synthetic controls is one of the most successful recent methodological developments in causal inference. Despite its popularity, the current description only considers time series aligned across units and synthetic controls expressed as linear combinations of observed control units. We propose a continuous-time alternative that models the latent counterfactual path explicitly using the formalism of controlled differential equations. This model is directly applicable to the general setting of irregularly-aligned multivariate time series and may be optimized in rich function spaces – thereby improving on some limitations of existing approaches.
https://proceedings.mlr.press/v139/bellot21a.html
https://proceedings.mlr.press/v139/bellot21a.html
https://proceedings.mlr.press/v139/bellot21a.html
http://proceedings.mlr.press/v139/bellot21a/bellot21a.pdf
ICML 2021
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory Benton, Wesley Maddox, Sanae Lotfi, Andrew Gordon Gordon Wilson
With a better understanding of the loss surfaces for multilayer networks, we can build more robust and accurate training procedures. Recently it was discovered that independently trained SGD solutions can be connected along one-dimensional paths of near-constant training loss. In this paper, we in fact demonstrate the existence of mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models. Building on this discovery, we show how to efficiently construct simplicial complexes for fast ensembling, outperforming independently trained deep ensembles in accuracy, calibration, and robustness to dataset shift. Notably, our approach is easy to apply and only requires a few training epochs to discover a low-loss simplex.
https://proceedings.mlr.press/v139/benton21a.html
https://proceedings.mlr.press/v139/benton21a.html
https://proceedings.mlr.press/v139/benton21a.html
http://proceedings.mlr.press/v139/benton21a/benton21a.pdf
ICML 2021
TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
Berkay Berabi, Jingxuan He, Veselin Raychev, Martin Vechev
The problem of fixing errors in programs has attracted substantial interest over the years. The key challenge for building an effective code fixing tool is to capture a wide range of errors and meanwhile maintain high accuracy. In this paper, we address this challenge and present a new learning-based system, called TFix. TFix works directly on program text and phrases the problem of code fixing as a text-to-text task. In turn, this enables it to leverage a powerful Transformer-based model pre-trained on natural language and fine-tuned to generate code fixes (via a large, high-quality dataset obtained from GitHub commits). TFix is not specific to a particular programming language or class of defects and, in fact, improved its precision by simultaneously fine-tuning on 52 different error types reported by a popular static analyzer. Our evaluation on a massive dataset of JavaScript programs shows that TFix is practically effective: it is able to synthesize code that fixes the error in about 67 percent of cases and significantly outperforms existing learning-based approaches.
https://proceedings.mlr.press/v139/berabi21a.html
https://proceedings.mlr.press/v139/berabi21a.html
https://proceedings.mlr.press/v139/berabi21a.html
http://proceedings.mlr.press/v139/berabi21a/berabi21a.pdf
ICML 2021
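A minimal sketch of the text-to-text framing described above, using a generic pretrained T5 checkpoint from HuggingFace. The prompt format, the checkpoint name ("t5-small"), and the idea of prefixing the error type are illustrative assumptions; this is not the released TFix model, its fine-tuning data, or its exact input encoding.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"                        # placeholder; TFix fine-tunes a larger T5
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical source text: error type plus the buggy line, as plain text.
buggy = "fix no-undef: console.log(resul);"
inputs = tokenizer(buggy, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```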
Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis
Jeroen Berrevoets, Ahmed Alaa, Zhaozhi Qian, James Jordon, Alexander E. S. Gimson, Mihaela van der Schaar
Organ transplantation is often the last resort for treating end-stage illnesses, but managing transplant wait-lists is challenging because of organ scarcity and the complexity of assessing donor-recipient compatibility. In this paper, we develop a data-driven model for (real-time) organ allocation using observational data for transplant outcomes. Our model integrates a queuing-theoretic framework with unsupervised learning to cluster the organs into “organ types”, and then constructs priority queues (associated with each organ type) wherein incoming patients are assigned. To reason about organ allocations, the model uses synthetic controls to infer a patient’s survival outcomes under counterfactual allocations to the different organ types – the model is trained end-to-end to optimise the trade-off between patient waiting time and expected survival time. The use of synthetic controls enables patient-level interpretations of allocation decisions that can be presented and understood by clinicians. We test our model on multiple data sets, and show that it outperforms other organ-allocation policies in terms of added life-years and death count. Furthermore, we introduce a novel organ-allocation simulator to accurately test new policies.
https://proceedings.mlr.press/v139/berrevoets21a.html
https://proceedings.mlr.press/v139/berrevoets21a.html
https://proceedings.mlr.press/v139/berrevoets21a.html
http://proceedings.mlr.press/v139/berrevoets21a/berrevoets21a.pdf
ICML 2021
Learning from Biased Data: A Semi-Parametric Approach
Patrice Bertail, Stephan Clémençon, Yannick Guyonvarch, Nathan Noiry
We consider risk minimization problems where the (source) distribution $P_S$ of the training observations $Z_1, \ldots, Z_n$ differs from the (target) distribution $P_T$ involved in the risk that one seeks to minimize. Under the natural assumption that $P_S$ dominates $P_T$, \textit{i.e.} $P_T \ll P_S$ […]
https://proceedings.mlr.press/v139/bertail21a.html
https://proceedings.mlr.press/v139/bertail21a.html
https://proceedings.mlr.press/v139/bertail21a.html
http://proceedings.mlr.press/v139/bertail21a/bertail21a.pdf
ICML 2021
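A standard way to make the source/target mismatch above concrete is importance weighting: when the likelihood ratio dP_T/dP_S is available, examples from the source distribution can be reweighted before empirical risk minimization. The sketch below uses a ratio that is known by construction between two Gaussians, purely for illustration; the paper's contribution, estimating such weights semi-parametrically from biased data, is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Source P_S: x ~ N(0, 1); target P_T: x ~ N(1, 1); labels depend on x in the same way.
x_s = rng.normal(0.0, 1.0, size=n)
y_s = (rng.random(n) < 1 / (1 + np.exp(-2 * x_s))).astype(int)

def likelihood_ratio(x):
    """dP_T/dP_S for the two unit-variance Gaussians above."""
    return np.exp(-0.5 * ((x - 1.0) ** 2 - x ** 2))

weights = likelihood_ratio(x_s)
clf = LogisticRegression().fit(x_s[:, None], y_s, sample_weight=weights)   # weighted ERM

x_t = rng.normal(1.0, 1.0, size=n)            # evaluate on the target distribution
y_t = (rng.random(n) < 1 / (1 + np.exp(-2 * x_t))).astype(int)
print("target accuracy:", clf.score(x_t[:, None], y_t))
```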
Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named “TimeSformer,” adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that “divided attention,” where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.
https://proceedings.mlr.press/v139/bertasius21a.html
https://proceedings.mlr.press/v139/bertasius21a.html
https://proceedings.mlr.press/v139/bertasius21a.html
http://proceedings.mlr.press/v139/bertasius21a/bertasius21a.pdf
ICML 2021
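The "divided attention" pattern described above, temporal attention within each patch location followed by spatial attention within each frame, can be sketched in a few lines of PyTorch. Layer norms, MLP blocks, positional embeddings and the classification head are omitted, and all dimensions are illustrative; this is a structural sketch, not the TimeSformer implementation linked above.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                  # x: (batch, frames, patches, dim)
        b, t, n, d = x.shape
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)    # attend over time, per patch location
        xt, _ = self.temporal(xt, xt, xt)
        x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3) + x # residual connection

        xs = x.reshape(b * t, n, d)                        # attend over space, per frame
        xs, _ = self.spatial(xs, xs, xs)
        return xs.reshape(b, t, n, d) + x

block = DividedSpaceTimeAttention()
video_tokens = torch.randn(2, 8, 49, 64)   # 2 clips, 8 frames, 7x7 patches, 64-dim tokens
print(block(video_tokens).shape)           # torch.Size([2, 8, 49, 64])
```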
Confidence Scores Make Instance-dependent Label-noise Learning Possible
Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
In learning with noisy labels, for every instance, its label can randomly walk to other classes following a transition distribution which is named a noise model. Well-studied noise models are all instance-independent, namely, the transition depends only on the original label but not the instance itself, and thus they are less practical in the wild. Fortunately, methods based on instance-dependent noise have been studied, but most of them have to rely on strong assumptions on the noise models. To alleviate this issue, we introduce confidence-scored instance-dependent noise (CSIDN), where each instance-label pair is equipped with a confidence score. We find that with the help of confidence scores, the transition distribution of each instance can be approximately estimated. Similarly to the powerful forward correction for instance-independent noise, we propose a novel instance-level forward correction for CSIDN. We demonstrate the utility and effectiveness of our method through multiple experiments on datasets with synthetic label noise and real-world unknown noise.
https://proceedings.mlr.press/v139/berthon21a.html
https://proceedings.mlr.press/v139/berthon21a.html
https://proceedings.mlr.press/v139/berthon21a.html
http://proceedings.mlr.press/v139/berthon21a/berthon21a.pdf
ICML 2021
Size-Invariant Graph Representations for Graph Classification Extrapolations
Beatrice Bevilacqua, Yangze Zhou, Bruno Ribeiro
In general, graph representation learning methods assume that the train and test data come from the same distribution. In this work we consider an underexplored area of an otherwise rapidly developing field of graph representation learning: The task of out-of-distribution (OOD) graph classification, where train and test data have different distributions, with test data unavailable during training. Our work shows it is possible to use a causal model to learn approximately invariant representations that better extrapolate between train and test data. Finally, we conclude with synthetic and real-world dataset experiments showcasing the benefits of representations that are invariant to train/test distribution shifts.
https://proceedings.mlr.press/v139/bevilacqua21a.html
https://proceedings.mlr.press/v139/bevilacqua21a.html
https://proceedings.mlr.press/v139/bevilacqua21a.html
http://proceedings.mlr.press/v139/bevilacqua21a/bevilacqua21a.pdf
ICML 2021
Principal Bit Analysis: Autoencoding with Schur-Concave Loss
Sourbh Bhadane, Aaron B Wagner, Jayadev Acharya
We consider a linear autoencoder in which the latent variables are quantized, or corrupted by noise, and the constraint is Schur-concave in the set of latent variances. Although finding the optimal encoder/decoder pair for this setup is a nonconvex optimization problem, we show that decomposing the source into its principal components is optimal. If the constraint is strictly Schur-concave and the empirical covariance matrix has only simple eigenvalues, then any optimal encoder/decoder must decompose the source in this way. As one application, we consider a strictly Schur-concave constraint that estimates the number of bits needed to represent the latent variables under fixed-rate encoding, a setup that we call \emph{Principal Bit Analysis (PBA)}. This yields a practical, general-purpose, fixed-rate compressor that outperforms existing algorithms. As a second application, we show that a prototypical autoencoder-based variable-rate compressor is guaranteed to decompose the source into its principal components.
https://proceedings.mlr.press/v139/bhadane21a.html
https://proceedings.mlr.press/v139/bhadane21a.html
https://proceedings.mlr.press/v139/bhadane21a.html
http://proceedings.mlr.press/v139/bhadane21a/bhadane21a.pdf
ICML 2021
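The optimality result summarized above concerns pipelines of the following shape: decompose the source into its principal components, then quantize the latent coefficients at a fixed rate. The snippet below is only that generic "decompose then quantize" skeleton with a uniform per-component quantizer and arbitrary sizes; it does not implement the paper's Schur-concave constraint or its PBA bit-allocation rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, levels = 1000, 16, 4, 16
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated synthetic data
X = X - X.mean(0)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:k].T                                        # coefficients on the top-k principal components

lo, hi = Z.min(0), Z.max(0)
step = (hi - lo) / (levels - 1)
Z_quant = np.round((Z - lo) / step) * step + lo         # uniform per-component quantizer

X_hat = Z_quant @ Vt[:k]                                # reconstruct from quantized coefficients
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```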
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible. In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. Our formulation of the bound as a solution to an optimization problem is general enough to encompass any loss function depending on soft classifier outputs. We also propose and provide a proof of correctness for a bespoke algorithm to compute this lower bound efficiently, allowing us to determine lower bounds for multiple practical datasets of interest. We use our lower bounds as a diagnostic tool to determine the effectiveness of current robust training methods and find a gap from optimality at larger budgets. Finally, we investigate the possibility of using optimal classification outputs as soft labels to empirically improve robust training.
https://proceedings.mlr.press/v139/bhagoji21a.html
https://proceedings.mlr.press/v139/bhagoji21a.html
https://proceedings.mlr.press/v139/bhagoji21a.html
http://proceedings.mlr.press/v139/bhagoji21a/bhagoji21a.pdf
ICML 2021
Additive Error Guarantees for Weighted Low Rank Approximation
Aditya Bhaskara, Aravinda Kanchana Ruwanpathirana, Maheshakya Wijewardena
Low-rank approximation is a classic tool in data analysis, where the goal is to approximate a matrix $A$ with a low-rank matrix $L$ so as to minimize the error $\|A - L\|_F^2$. However, in many applications, approximating some entries is more important than others, which leads to the weighted low rank approximation problem. Unfortunately, the addition of weights makes the low-rank approximation problem intractable. Thus many works have obtained efficient algorithms under additional structural assumptions on the weight matrix (such as low rank and appropriate block structure). We study a natural greedy algorithm for weighted low rank approximation and develop a simple condition under which it yields bi-criteria approximation up to a small additive factor in the error. The algorithm involves iteratively computing the top singular vector of an appropriately varying matrix, and is thus easy to implement at scale. Our methods also allow us to study the problem of low rank approximation under $\ell_p$ norm error.
https://proceedings.mlr.press/v139/bhaskara21a.html
https://proceedings.mlr.press/v139/bhaskara21a.html
https://proceedings.mlr.press/v139/bhaskara21a.html
http://proceedings.mlr.press/v139/bhaskara21a/bhaskara21a.pdf
ICML 2021
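Below is a minimal sketch in the spirit of the greedy scheme the abstract describes: repeatedly take the top singular vector of a weighted residual and add the corresponding rank-one term. The exact matrix the paper varies at each step and its guarantees are not reproduced here; the residual choice and the step-size line search are illustrative assumptions.

```python
import numpy as np

def greedy_weighted_low_rank(A, W, rank):
    """Greedy rank-one updates driven by the weighted residual W * (A - L).

    Illustrative sketch only: each step adds a scaled u v^T from the top
    singular triple of the weighted residual (not the paper's exact iteration).
    """
    L = np.zeros_like(A)
    for _ in range(rank):
        R = W * (A - L)                          # entrywise weighted residual
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        u, sigma, v = U[:, 0], s[0], Vt[0]
        # Crude line search on the step size to reduce the weighted error.
        best_t, best_err = 0.0, np.sum(W * (A - L) ** 2)
        for t in np.linspace(0.1, 2.0, 20):
            cand = L + t * sigma * np.outer(u, v)
            err = np.sum(W * (A - cand) ** 2)
            if err < best_err:
                best_t, best_err = t, err
        L = L + best_t * sigma * np.outer(u, v)
    return L

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 30))
W = rng.uniform(0.1, 1.0, size=A.shape)          # entrywise importance weights
L = greedy_weighted_low_rank(A, W, rank=5)
print("weighted squared error:", np.sum(W * (A - L) ** 2))
```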
Sample Complexity of Robust Linear Classification on Separated Data
Robi Bhattacharjee, Somesh Jha, Kamalika Chaudhuri
We consider the sample complexity of learning with adversarial robustness. Most prior theoretical results for this problem have considered a setting where different classes in the data are close together or overlapping. We consider, in contrast, the well-separated case where there exists a classifier with perfect accuracy and robustness, and show that the sample complexity tells an entirely different story. Specifically, for linear classifiers, we show a large class of well-separated distributions where the expected robust loss of any algorithm is at least $\Omega(\frac{d}{n})$, whereas the max margin algorithm has expected standard loss $O(\frac{1}{n})$. This shows a gap between the standard and robust losses that cannot be obtained via prior techniques. Additionally, we present an algorithm that, given an instance where the robustness radius is much smaller than the gap between the classes, gives a solution whose expected robust loss is $O(\frac{1}{n})$. This shows that for very well-separated data, convergence rates of $O(\frac{1}{n})$ are achievable, which is not the case otherwise. Our results apply to robustness measured in any $\ell_p$ norm with $p > 1$ (including $p = \infty$).
https://proceedings.mlr.press/v139/bhattacharjee21a.html
https://proceedings.mlr.press/v139/bhattacharjee21a.html
https://proceedings.mlr.press/v139/bhattacharjee21a.html
http://proceedings.mlr.press/v139/bhattacharjee21a/bhattacharjee21a.pdf
ICML 2021
Finding k in Latent $k-$ polytope
Chiranjib Bhattacharyya, Ravindran Kannan, Amit Kumar
The recently introduced Latent $k$-Polytope ($\LkP$) encompasses several stochastic Mixed Membership models, including Topic Models. The problem of finding $k$, the number of extreme points of $\LkP$, is a fundamental challenge and includes several important open problems such as determining the number of components in Ad-mixtures. This paper addresses this challenge by introducing the Interpolative Convex Rank (\INR) of a matrix, defined as the minimum number of its columns whose convex hull is within Hausdorff distance $\varepsilon$ of the convex hull of all columns. The first important contribution of this paper is to show that under \emph{standard assumptions} $k$ equals the \INR of a \emph{subset smoothed data matrix} defined from data generated from an $\LkP$. The second important contribution is a polynomial time algorithm for finding $k$ under standard assumptions. An immediate corollary is the first polynomial time algorithm for finding the \emph{inner dimension} in Non-negative Matrix Factorisation (NMF) under assumptions that are qualitatively different from existing ones such as \emph{Separability}.
https://proceedings.mlr.press/v139/bhattacharyya21a.html
https://proceedings.mlr.press/v139/bhattacharyya21a.html
https://proceedings.mlr.press/v139/bhattacharyya21a.html
http://proceedings.mlr.press/v139/bhattacharyya21a/bhattacharyya21a.pdf
ICML 2021
Non-Autoregressive Electron Redistribution Modeling for Reaction Prediction
Hangrui Bi, Hengyi Wang, Chence Shi, Connor Coley, Jian Tang, Hongyu Guo
Reliably predicting the products of chemical reactions presents a fundamental challenge in synthetic chemistry. Existing machine learning approaches typically produce a reaction product by sequentially forming its subparts or intermediate molecules. Such autoregressive methods, however, not only require a pre-defined order for the incremental construction but also preclude the use of parallel decoding for efficient computation. To address these issues, we devise a non-autoregressive learning paradigm that predicts reactions in one shot. Leveraging the fact that chemical reactions can be described as a redistribution of electrons in molecules, we formulate a reaction as an arbitrary electron flow and predict it with a novel multi-pointer decoding network. Experiments on the USPTO-MIT dataset show that our approach establishes a new state-of-the-art top-1 accuracy and achieves at least a 27-fold inference speedup over the state-of-the-art methods. Our predictions are also easier for chemists to interpret, since the model outputs the electron flows directly.
https://proceedings.mlr.press/v139/bi21a.html
https://proceedings.mlr.press/v139/bi21a.html
https://proceedings.mlr.press/v139/bi21a.html
http://proceedings.mlr.press/v139/bi21a/bi21a.pdf
ICML 2021
TempoRL: Learning When to Act
André Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer
Reinforcement learning is a powerful approach to learning behaviour through interactions with an environment. However, behaviours are usually learned in a purely reactive fashion, where an appropriate action is selected based on an observation. In this form, it is challenging to learn when it is necessary to execute new decisions. This makes learning inefficient, especially in environments that need various degrees of fine and coarse control. To address this, we propose a proactive setting in which the agent not only selects an action in a state but also decides how long to commit to that action. Our TempoRL approach introduces skip connections between states and learns a skip-policy for repeating the same action along these skips. We demonstrate the effectiveness of TempoRL on a variety of traditional and deep RL environments, showing that our approach is capable of learning successful policies up to an order of magnitude faster than vanilla Q-learning. A toy code sketch of action repetition follows this entry.
https://proceedings.mlr.press/v139/biedenkapp21a.html
https://proceedings.mlr.press/v139/biedenkapp21a.html
https://proceedings.mlr.press/v139/biedenkapp21a.html
http://proceedings.mlr.press/v139/biedenkapp21a/biedenkapp21a.pdf
ICML 2021
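Below is a heavily simplified toy illustration of the "when to act" idea: in a small corridor MDP the agent picks both an action and a number of times to repeat it, and the repeated segment is credited with an n-step-style Q-learning update. This is not the TempoRL skip-policy architecture from the paper; the environment, the optimistic initialization, and the update rules are all illustrative assumptions.

```python
import numpy as np

def skip_q_learning_toy(n_states=20, episodes=300, max_skip=6,
                        gamma=0.99, alpha=0.5, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    goal = n_states - 1
    Q = np.ones((n_states, 2))              # per-step action values (optimistic init)
    SQ = np.ones((n_states, 2, max_skip))   # value of repeating action j+1 times

    def step(s, a):
        s2 = min(max(s + (1 if a == 1 else -1), 0), goal)
        r = 1.0 if s2 == goal else 0.0
        return s2, r, s2 == goal

    for _ in range(episodes):
        s = 0
        for _decision in range(200):        # safety cap on decisions per episode
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            j = int(rng.integers(max_skip)) if rng.random() < eps else int(np.argmax(SQ[s, a]))
            s0, G, done, steps = s, 0.0, False, 0
            for k in range(j + 1):          # repeat the same action j+1 times
                s_prev = s
                s, r, done = step(s, a)
                G += (gamma ** k) * r
                steps += 1
                # Per-step Q-learning update keeps flat action values consistent.
                target = r + (0.0 if done else gamma * np.max(Q[s]))
                Q[s_prev, a] += alpha * (target - Q[s_prev, a])
                if done:
                    break
            # n-step-style update for the (state, action, skip-length) choice.
            boot = 0.0 if done else (gamma ** steps) * np.max(Q[s])
            SQ[s0, a, j] += alpha * (G + boot - SQ[s0, a, j])
            if done:
                break
    return Q, SQ

Q, SQ = skip_q_learning_toy()
print("learned values for repeating 'right' from the start state:", np.round(SQ[0, 1], 3))
```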
Follow-the-Regularized-Leader Routes to Chaos in Routing Games
Jakub Bielawski, Thiparat Chotibut, Fryderyk Falniowski, Grzegorz Kosiorowski, Michał Misiurewicz, Georgios Piliouras
We study the emergence of chaotic behavior of Follow-the-Regularized Leader (FoReL) dynamics in games. We focus on the effects of increasing the population size or the scale of costs in congestion games, and generalize recent results on unstable, chaotic behaviors in the Multiplicative Weights Update dynamics to a much larger class of FoReL dynamics. We establish that, even in simple linear non-atomic congestion games with two parallel links and \emph{any} fixed learning rate, unless the game is fully symmetric, increasing the population size or the scale of costs causes the learning dynamics to become unstable and eventually chaotic, in the sense of Li-Yorke and positive topological entropy. Furthermore, we prove the existence of novel non-standard phenomena such as the coexistence of stable Nash equilibria and chaos in the same game. We also observe the simultaneous creation of a chaotic attractor as another chaotic attractor gets destroyed. Lastly, although FoReL dynamics can be strange and non-equilibrating, we prove that the time average still converges to an \emph{exact} equilibrium for any choice of learning rate and any scale of costs. A small simulation illustrating this instability follows this entry.
https://proceedings.mlr.press/v139/bielawski21a.html
https://proceedings.mlr.press/v139/bielawski21a.html
https://proceedings.mlr.press/v139/bielawski21a.html
http://proceedings.mlr.press/v139/bielawski21a/bielawski21a.pdf
ICML 2021
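The following small simulation is in the spirit of the setting above: Multiplicative Weights Update (a member of the FoReL family) on a two-link congestion game with asymmetric linear costs, where scaling up the costs pushes the dynamics from convergence toward oscillatory behaviour. The cost coefficients and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mwu_two_link(cost_scale, steps=200, eps=0.1, x0=0.3):
    """Population fraction on link 1 under Multiplicative Weights Update.

    Two parallel links with asymmetric linear costs:
        c1(x) = A * x,    c2(x) = A * (1.5 * (1 - x) + 0.1),    A = cost_scale.
    """
    x = x0
    traj = []
    for _ in range(steps):
        c1 = cost_scale * x
        c2 = cost_scale * (1.5 * (1.0 - x) + 0.1)
        w1 = x * np.exp(-eps * c1)
        w2 = (1.0 - x) * np.exp(-eps * c2)
        x = w1 / (w1 + w2)
        traj.append(x)
    return np.array(traj)

# For a small cost scale the iterates settle near the equilibrium split;
# for large scales the tail of the trajectory keeps oscillating.
for A in [1.0, 20.0, 60.0]:
    tail = mwu_two_link(A)[-20:]
    print(f"cost scale {A:5.1f}: last iterates min/max = {tail.min():.3f}/{tail.max():.3f}")
```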
Neural Symbolic Regression that scales
Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, Giambattista Parascandolo
Symbolic equations are at the core of scientific discovery. The task of discovering the underlying equation from a set of input-output pairs is called symbolic regression. Traditionally, symbolic regression methods use hand-designed strategies that do not improve with experience. In this paper, we introduce the first symbolic regression method that leverages large scale pre-training. We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs. At test time, we query the model on a new set of points and use its output to guide the search for the equation. We show empirically that this approach can re-discover a set of well-known physical equations, and that it improves over time with more data and compute.
https://proceedings.mlr.press/v139/biggio21a.html
https://proceedings.mlr.press/v139/biggio21a.html
https://proceedings.mlr.press/v139/biggio21a.html
http://proceedings.mlr.press/v139/biggio21a/biggio21a.pdf
ICML 2021
Model Distillation for Revenue Optimization: Interpretable Personalized Pricing
Max Biggs, Wei Sun, Markus Ettl
Data-driven pricing strategies are becoming increasingly common, where customers are offered a personalized price based on features that are predictive of their valuation of a product. It is desirable for this pricing policy to be simple and interpretable, so it can be verified, checked for fairness, and easily implemented. However, efforts to incorporate machine learning into a pricing framework often lead to complex pricing policies that are not interpretable, resulting in slow adoption in practice. We present a novel, customized, prescriptive tree-based algorithm that distills knowledge from a complex black-box machine learning algorithm, segments customers with similar valuations, and prescribes prices in a way that maximizes revenue while maintaining interpretability. We quantify the regret of a resulting policy and demonstrate its efficacy in applications with both synthetic and real-world datasets.
https://proceedings.mlr.press/v139/biggs21a.html
https://proceedings.mlr.press/v139/biggs21a.html
https://proceedings.mlr.press/v139/biggs21a.html
http://proceedings.mlr.press/v139/biggs21a/biggs21a.pdf
ICML 2021
Scalable Normalizing Flows for Permutation Invariant Densities
Marin Biloš, Stephan Günnemann
Modeling sets is an important problem in machine learning since this type of data can be found in many domains. A promising approach defines a family of permutation invariant densities with continuous normalizing flows. This allows us to maximize the likelihood directly and sample new realizations with ease. In this work, we demonstrate how calculating the trace, a crucial step in this method, raises issues that occur both during training and inference, limiting its practicality. We propose an alternative way of defining permutation equivariant transformations that give a closed-form trace. This leads not only to improvements while training, but also to better final performance. We demonstrate the benefits of our approach on point processes and general set modeling.
https://proceedings.mlr.press/v139/bilos21a.html
https://proceedings.mlr.press/v139/bilos21a.html
https://proceedings.mlr.press/v139/bilos21a.html
http://proceedings.mlr.press/v139/bilos21a/bilos21a.pdf
ICML 2021
Online Learning for Load Balancing of Unknown Monotone Resource Allocation Games
Ilai Bistritz, Nicholas Bambos
Consider N players that each uses a mixture of K resources. Each of the players’ reward functions includes a linear pricing term for each resource that is controlled by the game manager. We assume that the game is strongly monotone, so if each player runs gradient descent, the dynamics converge to a unique Nash equilibrium (NE). Unfortunately, this NE can be inefficient since the total load on a given resource can be very high. In principle, we can control the total loads by tuning the coefficients of the pricing terms. However, finding pricing coefficients that balance the loads requires knowing the players’ reward functions and their action sets. Obtaining this game structure information is infeasible in a large-scale network and violates the users’ privacy. To overcome this, we propose a simple algorithm that learns to shift the NE of the game to meet the total load constraints by adjusting the pricing coefficients in an online manner. Our algorithm only requires the total load per resource as feedback and does not need to know the reward functions or the action sets. We prove that our algorithm guarantees convergence in L2 to an NE that meets the target total load constraints. Simulations show the effectiveness of our approach when applied to smart grid demand-side management or power control in wireless networks. A toy simulation of this feedback loop follows this entry.
https://proceedings.mlr.press/v139/bistritz21a.html
https://proceedings.mlr.press/v139/bistritz21a.html
https://proceedings.mlr.press/v139/bistritz21a.html
http://proceedings.mlr.press/v139/bistritz21a/bistritz21a.pdf
ICML 2021
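The sketch below is a toy simulation, under simplifying assumptions, of the feedback loop described above: players run projected gradient ascent on strongly concave rewards with linear pricing terms, while the manager observes only the total loads and nudges the pricing coefficients toward per-resource load targets. The reward model, targets, and step sizes are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 30, 4                              # players, resources
a = rng.uniform(0.5, 2.0, size=(N, K))    # private marginal utilities
targets = np.full(K, 0.4 * N)             # desired total load per resource

x = np.zeros((N, K))                      # players' usage, constrained to [0, 1]
p = np.zeros(K)                           # pricing coefficients set by the manager
eta_x, eta_p = 0.2, 0.01

for t in range(5000):
    # Players: projected gradient ascent on
    #   r_n(x_n) = sum_k (a_nk * x_nk - 0.5 * x_nk^2 - p_k * x_nk)
    grad = a - x - p                      # prices broadcast over players
    x = np.clip(x + eta_x * grad, 0.0, 1.0)
    # Manager: sees only total loads and raises prices on overloaded resources.
    loads = x.sum(axis=0)
    p = np.maximum(p + eta_p * (loads - targets), 0.0)

print("total loads :", np.round(x.sum(axis=0), 2))
print("targets     :", targets)
print("prices      :", np.round(p, 2))
```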
Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision
Johan Björck, Xiangyu Chen, Christopher De Sa, Carla P Gomes, Kilian Weinberger
Low-precision training has become a popular approach to reduce compute requirements, memory footprint, and energy consumption in supervised learning. In contrast, this promising approach has not yet enjoyed similarly widespread adoption within the reinforcement learning (RL) community, partly because RL agents can be notoriously hard to train even in full precision. In this paper we consider continuous control with the state-of-the-art SAC agent and demonstrate that a naïve adaptation of low-precision methods from supervised learning fails. We propose a set of six modifications, all straightforward to implement, that leaves the underlying agent and its hyperparameters unchanged but improves the numerical stability dramatically. The resulting modified SAC agent has lower memory and compute requirements while matching full-precision rewards, demonstrating that low-precision training can substantially accelerate state-of-the-art RL without parameter tuning.
https://proceedings.mlr.press/v139/bjorck21a.html
https://proceedings.mlr.press/v139/bjorck21a.html
https://proceedings.mlr.press/v139/bjorck21a.html
http://proceedings.mlr.press/v139/bjorck21a/bjorck21a.pdf
ICML 2021
Multiplying Matrices Without Multiplying
Davis Blalock, John Guttag
Multiplying matrices is among the most fundamental and most computationally demanding operations in machine learning and scientific computing. Consequently, the task of efficiently approximating matrix products has received significant attention. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs 10x faster than alternatives at a given level of error, as well as 100x faster than exact matrix multiplication. In the common case that one matrix is known ahead of time, our method also has the interesting property that it requires zero multiply-adds. These results suggest that a mixture of hashing, averaging, and byte shuffling—the core operations of our method—could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar quantized matrix products that have recently been the focus of substantial research and hardware investment. A sketch of the lookup-table idea follows this entry.
https://proceedings.mlr.press/v139/blalock21a.html
https://proceedings.mlr.press/v139/blalock21a.html
https://proceedings.mlr.press/v139/blalock21a.html
http://proceedings.mlr.press/v139/blalock21a/blalock21a.pdf
ICML 2021
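The sketch below conveys the lookup-table flavour behind this line of work using a product-quantization-style approximation: cluster sub-blocks of A's rows, precompute prototype dot products with the known matrix B, and replace multiplies with table lookups and sums at query time. This is not the authors' hashing-based algorithm; the KMeans prototype learning, subspace count, and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_amm_fit(A_train, B, n_subspaces=4, n_protos=16, seed=0):
    """Precompute per-subspace prototypes and lookup tables for approximating A @ B."""
    D = A_train.shape[1]
    splits = np.array_split(np.arange(D), n_subspaces)
    protos, tables = [], []
    for idx in splits:
        km = KMeans(n_clusters=n_protos, n_init=4, random_state=seed).fit(A_train[:, idx])
        protos.append(km)
        tables.append(km.cluster_centers_ @ B[idx, :])   # (n_protos, M) lookup table
    return splits, protos, tables

def pq_amm_apply(A, splits, protos, tables):
    """Approximate A @ B using only cluster assignment plus table lookups."""
    out = np.zeros((A.shape[0], tables[0].shape[1]))
    for idx, km, table in zip(splits, protos, tables):
        codes = km.predict(A[:, idx])     # nearest prototype per row and subspace
        out += table[codes]               # gather precomputed partial products
    return out

rng = np.random.default_rng(3)
A = rng.normal(size=(2000, 64))
B = rng.normal(size=(64, 32))
splits, protos, tables = pq_amm_fit(A[:1000], B)
approx = pq_amm_apply(A[1000:], splits, protos, tables)
exact = A[1000:] @ B
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print("relative Frobenius error:", round(float(rel_err), 3))
```

On unstructured Gaussian data the error of such a coarse quantizer is high; the point of the sketch is only that the query-time path contains no multiplications over the shared dimension.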
One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning
Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents’ incentives into account when allocating individual resources for communal learning in order to maintain such collaborations. Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives while keeping their own sample collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with any other agent and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents’ incentives.
https://proceedings.mlr.press/v139/blum21a.html
https://proceedings.mlr.press/v139/blum21a.html
https://proceedings.mlr.press/v139/blum21a.html
http://proceedings.mlr.press/v139/blum21a/blum21a.pdf
ICML 2021
Black-box density function estimation using recursive partitioning
Erik Bodin, Zhenwen Dai, Neill Campbell, Carl Henrik Ek
We present a novel approach to Bayesian inference and general Bayesian computation that is defined through a sequential decision loop. Our method defines a recursive partitioning of the sample space. It neither relies on gradients nor requires any problem-specific tuning, and is asymptotically exact for any density function with a bounded domain. The output is an approximation to the whole density function including the normalisation constant, via partitions organised in efficient data structures. Such approximations may be used for evidence estimation or fast posterior sampling, but also as building blocks to treat a larger class of estimation problems. The algorithm shows performance competitive with recent state-of-the-art methods on synthetic and real-world problems including parameter inference for gravitational-wave physics. A bare-bones sketch of recursive partitioning follows this entry.
https://proceedings.mlr.press/v139/bodin21a.html
https://proceedings.mlr.press/v139/bodin21a.html
https://proceedings.mlr.press/v139/bodin21a.html
http://proceedings.mlr.press/v139/bodin21a/bodin21a.pdf
ICML 2021
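The following is a bare-bones sketch of the recursive-partitioning flavour of the method above, restricted to a 1-D bounded domain: repeatedly split the cell with the largest estimated mass using only black-box density evaluations, and read an estimate of the normalising constant off the partition. The splitting rule and stopping criterion are illustrative; the paper's sequential decision loop is more sophisticated.

```python
import heapq
import numpy as np

def partition_density(f, lo, hi, n_splits=200):
    """Approximate an unnormalised 1-D density f on [lo, hi] by recursive bisection.

    Cells with the largest estimated mass (midpoint value * width) are split first.
    Returns the final cells and an estimate of the normalising constant.
    """
    def cell(a, b):
        mid = 0.5 * (a + b)
        mass = f(mid) * (b - a)
        return (-mass, a, b)              # max-heap via negated mass

    heap = [cell(lo, hi)]
    for _ in range(n_splits):
        _neg, a, b = heapq.heappop(heap)  # cell with largest estimated mass
        m = 0.5 * (a + b)
        heapq.heappush(heap, cell(a, m))
        heapq.heappush(heap, cell(m, b))
    cells = [(a, b, -neg) for neg, a, b in heap]
    Z = sum(mass for _, _, mass in cells) # midpoint-rule estimate of the constant
    return cells, Z

# Unnormalised bimodal density on [-6, 6]; its integral is close to sqrt(2*pi) * 1.5.
f = lambda x: np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)
cells, Z = partition_density(f, -6.0, 6.0)
print("estimated Z:", round(Z, 3), " reference value ~", round(np.sqrt(2 * np.pi) * 1.5, 3))
```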
Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks
Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Lió, Michael Bronstein
The pairwise interaction paradigm of graph machine learning has predominantly governed the modelling of relational systems. However, graphs alone cannot capture the multi-level interactions present in many complex systems and the expressive power of such schemes was proven to be limited. To overcome these limitations, we propose Message Passing Simplicial Networks (MPSNs), a class of models that perform message passing on simplicial complexes (SCs). To theoretically analyse the expressivity of our model we introduce a Simplicial Weisfeiler-Lehman (SWL) colouring procedure for distinguishing non-isomorphic SCs. We relate the power of SWL to the problem of distinguishing non-isomorphic graphs and show that SWL and MPSNs are strictly more powerful than the WL test and not less powerful than the 3-WL test. We deepen the analysis by comparing our model with traditional graph neural networks (GNNs) with ReLU activations in terms of the number of linear regions of the functions they can represent. We empirically support our theoretical claims by showing that MPSNs can distinguish challenging strongly regular graphs for which GNNs fail and, when equipped with orientation equivariant layers, they can improve classification accuracy in oriented SCs compared to a GNN baseline.
https://proceedings.mlr.press/v139/bodnar21a.html
https://proceedings.mlr.press/v139/bodnar21a.html
https://proceedings.mlr.press/v139/bodnar21a.html
http://proceedings.mlr.press/v139/bodnar21a/bodnar21a.pdf
ICML 2021
The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning
Roberto Bondesan, Max Welling
In this work we develop a quantum field theory formalism for deep learning, where input signals are encoded in Gaussian states, a generalization of Gaussian processes which encode the agent’s uncertainty about the input signal. We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles, dubbed “Hintons”. On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing, and provides quantum deformations of neural networks that can be run efficiently on those devices. Finally, we discuss a semi-classical limit of the quantum deformed models which is amenable to classical simulation.
https://proceedings.mlr.press/v139/bondesan21a.html
https://proceedings.mlr.press/v139/bondesan21a.html
https://proceedings.mlr.press/v139/bondesan21a.html
http://proceedings.mlr.press/v139/bondesan21a/bondesan21a.pdf
ICML 2021
Offline Contextual Bandits with Overparameterized Models
David Brandfonbrener, William Whitney, Rajesh Ranganath, Joan Bruna
Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as overparameterized supervised learning, but policy-based algorithms do not. We show that this discrepancy is due to the \emph{action-stability} of their objectives. An objective is action-stable if there exists a prediction (action-value vector or action distribution) which is optimal no matter which action is observed. While value-based objectives are action-stable, policy-based objectives are unstable. We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. In our experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.
https://proceedings.mlr.press/v139/brandfonbrener21a.html
https://proceedings.mlr.press/v139/brandfonbrener21a.html
https://proceedings.mlr.press/v139/brandfonbrener21a.html
http://proceedings.mlr.press/v139/brandfonbrener21a/brandfonbrener21a.pdf
ICML 2021
High-Performance Large-Scale Image Recognition Without Normalization
Andy Brock, Soham De, Samuel L Smith, Karen Simonyan
Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the test accuracies of the best batch-normalized networks, and are often unstable for large learning rates or strong data augmentations. In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on ImageNet while being up to 8.7x faster to train, and our largest models attain a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free models attain significantly better performance than their batch-normalized counterparts when fine-tuning on ImageNet after large-scale pre-training on a dataset of 300 million labeled images, with our best models obtaining an accuracy of 89.2%. A simplified sketch of adaptive gradient clipping follows this entry.
https://proceedings.mlr.press/v139/brock21a.html
https://proceedings.mlr.press/v139/brock21a.html
https://proceedings.mlr.press/v139/brock21a.html
http://proceedings.mlr.press/v139/brock21a/brock21a.pdf
ICML 2021
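The adaptive gradient clipping idea mentioned above can be written in a few lines. The sketch below is a simplified PyTorch version that clips per parameter tensor rather than unit-wise, and the threshold value is an illustrative assumption rather than a recommended setting.

```python
import torch

def adaptive_gradient_clip_(parameters, clipping=0.01, eps=1e-3):
    """Scale each gradient so that ||g|| / max(||w||, eps) <= clipping.

    Simplified per-tensor sketch of adaptive gradient clipping; the paper
    applies it unit-wise and leaves the final classifier layer unclipped.
    """
    for p in parameters:
        if p.grad is None:
            continue
        w_norm = p.detach().norm()
        g_norm = p.grad.detach().norm()
        max_norm = clipping * torch.clamp(w_norm, min=eps)
        if g_norm > max_norm:
            p.grad.mul_(max_norm / (g_norm + 1e-6))

# Usage inside a training step (model, loss, and optimizer assumed to exist):
#   loss.backward()
#   adaptive_gradient_clip_(model.parameters(), clipping=0.01)
#   optimizer.step(); optimizer.zero_grad()
```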
Evaluating the Implicit Midpoint Integrator for Riemannian Hamiltonian Monte Carlo
James Brofos, Roy R Lederman
Riemannian manifold Hamiltonian Monte Carlo is traditionally carried out using the generalized leapfrog integrator. However, this integrator is not the only choice and other integrators yielding valid Markov chain transition operators may be considered. In this work, we examine the implicit midpoint integrator as an alternative to the generalized leapfrog integrator. We discuss the advantages, disadvantages, and theoretical properties of the implicit midpoint integrator for Hamiltonian Monte Carlo, and empirically assess the critical attributes of such an integrator: energy conservation, volume preservation, and reversibility. Empirically, we find that while leapfrog iterations are faster, the implicit midpoint integrator has better energy conservation, leading to higher acceptance rates, as well as better conservation of volume and better reversibility, arguably yielding a more accurate sampling procedure. A code sketch comparing the two integrators follows this entry.
https://proceedings.mlr.press/v139/brofos21a.html
https://proceedings.mlr.press/v139/brofos21a.html
https://proceedings.mlr.press/v139/brofos21a.html
http://proceedings.mlr.press/v139/brofos21a/brofos21a.pdf
ICML 2021
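For readers comparing the two integrators discussed above, the sketch below contrasts a leapfrog step with an implicit midpoint step solved by fixed-point iteration, for a simple separable Hamiltonian H(q, p) = U(q) + p^T p / 2 with a Euclidean metric. The Riemannian (position-dependent metric) case the paper targets is not shown; the target, step size, and tolerances are illustrative.

```python
import numpy as np

def leapfrog(q, p, grad_U, h):
    p = p - 0.5 * h * grad_U(q)
    q = q + h * p
    p = p - 0.5 * h * grad_U(q)
    return q, p

def implicit_midpoint(q, p, grad_U, h, n_fixed_point=20, tol=1e-12):
    """One implicit midpoint step z' = z + h * f((z + z') / 2), solved by fixed point."""
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_fixed_point):
        q_mid = 0.5 * (q + q_new)
        p_mid = 0.5 * (p + p_new)
        q_next = q + h * p_mid                  # dq/dt =  dH/dp = p
        p_next = p - h * grad_U(q_mid)          # dp/dt = -dH/dq
        delta = max(np.max(np.abs(q_next - q_new)), np.max(np.abs(p_next - p_new)))
        q_new, p_new = q_next, p_next
        if delta < tol:
            break
    return q_new, p_new

# Standard Gaussian target: U(q) = 0.5 * ||q||^2, so grad_U(q) = q.
grad_U = lambda q: q
H = lambda q, p: 0.5 * (q @ q + p @ p)

q0 = np.array([1.0, -0.5]); p0 = np.array([0.3, 0.8])
h, T = 0.5, 200
q_lf, p_lf = q0.copy(), p0.copy()
q_im, p_im = q0.copy(), p0.copy()
for _ in range(T):
    q_lf, p_lf = leapfrog(q_lf, p_lf, grad_U, h)
    q_im, p_im = implicit_midpoint(q_im, p_im, grad_U, h)
print("energy error, leapfrog        :", abs(H(q_lf, p_lf) - H(q0, p0)))
print("energy error, implicit midpoint:", abs(H(q_im, p_im) - H(q0, p0)))
```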
Reinforcement Learning of Implicit and Explicit Control Flow Instructions
Ethan Brooks, Janarthanan Rajendran, Richard L Lewis, Satinder Singh
Learning to flexibly follow task instructions in dynamic environments poses interesting challenges for reinforcement learning agents. We focus here on the problem of learning control flow that deviates from a strict step-by-step execution of instructions—that is, control flow that may skip forward over parts of the instructions or return backward to previously completed or skipped steps. Demand for such flexible control arises in two fundamental ways: explicitly when control is specified in the instructions themselves (such as conditional branching and looping) and implicitly when stochastic environment dynamics require re-completion of instructions whose effects have been perturbed, or opportunistic skipping of instructions whose effects are already present. We formulate an attention-based architecture that meets these challenges by learning, from task reward only, to flexibly attend to and condition behavior on an internal encoding of the instructions. We test the architecture’s ability to learn both explicit and implicit control in two illustrative domains—one inspired by Minecraft and the other by StarCraft—and show that the architecture exhibits zero-shot generalization to novel instructions of length greater than those in a training set, at a performance level unmatched by three baseline recurrent architectures and one ablation architecture.
https://proceedings.mlr.press/v139/brooks21a.html
https://proceedings.mlr.press/v139/brooks21a.html
https://proceedings.mlr.press/v139/brooks21a.html
http://proceedings.mlr.press/v139/brooks21a/brooks21a.pdf
ICML 2021