abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v202/feofanov23a.html | https://proceedings.mlr.press/v202/feofanov23a/feofanov23a.pdf | https://openreview.net/forum?id=vlGOVmL8uI | Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption | https://proceedings.mlr.press/v202/feofanov23a.html | Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux | https://proceedings.mlr.press/v202/feofanov23a.html | ICML 2023 | We propose a theoretical framework to analyze semi-supervised classification under the low density separation assumption in a high-dimensional regime. In particular, we introduce QLDS, a linear classification model, where the low density separation assumption is implemented via quadratic margin maximization. The algorithm has an explicit solution with rich theoretical properties, and we show that particular cases of our algorithm are the least-square support vector machine in the supervised case, the spectral clustering in the fully unsupervised regime, and a class of semi-supervised graph-based approaches. As such, QLDS establishes a smooth bridge between these supervised and unsupervised learning methods. Using recent advances in the random matrix theory, we formally derive a theoretical evaluation of the classification error in the asymptotic regime. As an application, we derive a hyperparameter selection policy that finds the best balance between the supervised and the unsupervised terms of our learning criterion. Finally, we provide extensive illustrations of our framework, as well as an experimental study on several benchmarks to demonstrate that QLDS, while being computationally more efficient, improves over cross-validation for hyperparameter selection, indicating a high promise of the usage of random matrix theory for semi-supervised model selection. |
https://proceedings.mlr.press/v202/ferber23a.html | https://proceedings.mlr.press/v202/ferber23a/ferber23a.pdf | https://openreview.net/forum?id=sSwN4NrzZr | SurCo: Learning Linear SURrogates for COmbinatorial Nonlinear Optimization Problems | https://proceedings.mlr.press/v202/ferber23a.html | Aaron M Ferber, Taoan Huang, Daochen Zha, Martin Schubert, Benoit Steiner, Bistra Dilkina, Yuandong Tian | https://proceedings.mlr.press/v202/ferber23a.html | ICML 2023 | Optimization problems with nonlinear cost functions and combinatorial constraints appear in many real-world applications but remain challenging to solve efficiently compared to their linear counterparts. To bridge this gap, we propose $\textbf{\emph{\texttt{SurCo}}}$ that learns linear $\underline{\text{Sur}}$rogate costs which can be used in existing $\underline{\text{Co}}$mbinatorial solvers to output good solutions to the original nonlinear combinatorial optimization problem. The surrogate costs are learned end-to-end with nonlinear loss by differentiating through the linear surrogate solver, combining the flexibility of gradient-based methods with the structure of linear combinatorial optimization. We propose three $\texttt{SurCo}$ variants: $\texttt{SurCo}-\texttt{zero}$ for individual nonlinear problems, $\texttt{SurCo}-\texttt{prior}$ for problem distributions, and $\texttt{SurCo}-\texttt{hybrid}$ to combine both distribution and problem-specific information. We give theoretical intuition motivating $\texttt{SurCo}$, and evaluate it empirically. Experiments show that $\texttt{SurCo}$ finds better solutions faster than state-of-the-art and domain expert approaches in real-world optimization problems such as embedding table sharding, inverse photonic design, and nonlinear route planning. |
https://proceedings.mlr.press/v202/fernandes23a.html | https://proceedings.mlr.press/v202/fernandes23a/fernandes23a.pdf | https://openreview.net/forum?id=SVCYSBgFIr | Scaling Laws for Multilingual Neural Machine Translation | https://proceedings.mlr.press/v202/fernandes23a.html | Patrick Fernandes, Behrooz Ghorbani, Xavier Garcia, Markus Freitag, Orhan Firat | https://proceedings.mlr.press/v202/fernandes23a.html | ICML 2023 | In this work, we provide a large-scale empirical study of the scaling properties of multilingual neural machine translation models. We examine how increases in the model size affect the model performance and investigate the role of the individual language pair weights on the scaling behavior. We find that these weights only affect the multiplicative factor of the scaling law, and in particular, the scaling exponent is unaffected by them. Through a novel joint scaling law formulation, we compute the effective number of parameters allocated to each language pair and examine the role of language similarity in the scaling behavior of our models. We find little evidence that language similarity has any impact. In contrast, “direction” of the multilinguality plays a significant role, with models translating from multiple languages into English having a larger number of effective parameters per task than their reversed counterparts. Finally, we leverage our observations to predict the performance of multilingual models trained with any language weighting at any scale, greatly reducing efforts required for language balancing in large multilingual models. Our findings apply to both in-domain and out-of-domain test sets and to multiple evaluation metrics, such as ChrF and BLEURT. |
https://proceedings.mlr.press/v202/fichtenberger23a.html | https://proceedings.mlr.press/v202/fichtenberger23a/fichtenberger23a.pdf | https://openreview.net/forum?id=Xqedp0Iu1S | Constant Matters: Fine-grained Error Bound on Differentially Private Continual Observation | https://proceedings.mlr.press/v202/fichtenberger23a.html | Hendrik Fichtenberger, Monika Henzinger, Jalaj Upadhyay | https://proceedings.mlr.press/v202/fichtenberger23a.html | ICML 2023 | We study fine-grained error bounds for differentially private algorithms for counting under continual observation. Our main insight is that the matrix mechanism when using lower-triangular matrices can be used in the continual observation model. More specifically, we give an explicit factorization for the counting matrix $M_\mathsf{count}$ and upper bound the error explicitly. We also give a fine-grained analysis, specifying the exact constant in the upper bound. Our analysis is based on upper and lower bounds of the completely bounded norm (cb-norm) of $M_\mathsf{count}$. Along the way, we improve the best-known bound of 28 years by Mathias (SIAM Journal on Matrix Analysis and Applications, 1993) on the cb-norm of $M_\mathsf{count}$ for a large range of the dimension of $M_\mathsf{count}$. Furthermore, we are the first to give concrete error bounds for various problems under continual observation such as binary counting, maintaining a histogram, releasing an approximately cut-preserving synthetic graph, many graph-based statistics, and substring and episode counting. Finally, we note that our result can be used to get a fine-grained error bound for non-interactive local learning and the first lower bounds on the additive error for $(\epsilon,\delta)$-differentially-private counting under continual observation. Subsequent to this work, Henzinger et al. (SODA, 2023) showed that our factorization also achieves fine-grained mean-squared error. |
https://proceedings.mlr.press/v202/fiegel23a.html | https://proceedings.mlr.press/v202/fiegel23a/fiegel23a.pdf | https://openreview.net/forum?id=O1j4uFuSVW | Adapting to game trees in zero-sum imperfect information games | https://proceedings.mlr.press/v202/fiegel23a.html | Côme Fiegel, Pierre Menard, Tadashi Kozuno, Remi Munos, Vianney Perchet, Michal Valko | https://proceedings.mlr.press/v202/fiegel23a.html | ICML 2023 | Imperfect information games (IIG) are games in which each player only partially observes the current game state. We study how to learn $\epsilon$-optimal strategies in a zero-sum IIG through self-play with trajectory feedback. We give a problem-independent lower bound $\widetilde{\mathcal{O}}(H(A_{\mathcal{X}}+B_{\mathcal{Y}})/\epsilon^2)$ on the required number of realizations to learn these strategies with high probability, where $H$ is the length of the game, $A_{\mathcal{X}}$ and $B_{\mathcal{Y}}$ are the total number of actions for the two players. We also propose two Follow the Regularized leader (FTRL) algorithms for this setting: Balanced FTRL which matches this lower bound, but requires the knowledge of the information set structure beforehand to define the regularization; and Adaptive FTRL which needs $\widetilde{\mathcal{O}}(H^2(A_{\mathcal{X}}+B_{\mathcal{Y}})/\epsilon^2)$ realizations without this requirement by progressively adapting the regularization to the observations. |
https://proceedings.mlr.press/v202/finzi23a.html | https://proceedings.mlr.press/v202/finzi23a/finzi23a.pdf | https://openreview.net/forum?id=sdhcjMzhHN | User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems | https://proceedings.mlr.press/v202/finzi23a.html | Marc Anton Finzi, Anudhyan Boral, Andrew Gordon Wilson, Fei Sha, Leonardo Zepeda-Nunez | https://proceedings.mlr.press/v202/finzi23a.html | ICML 2023 | Diffusion models are a class of probabilistic generative models that have been widely used as a prior for image processing tasks like text conditional generation and inpainting. We demonstrate that these models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems. In these applications, diffusion models can implicitly represent knowledge about outliers and extreme events; however, querying that knowledge through conditional sampling or measuring probabilities is surprisingly difficult. Existing methods for conditional sampling at inference time seek mainly to enforce the constraints, which is insufficient to match the statistics of the distribution or compute the probability of the chosen events. To achieve these ends, optimally one would use the conditional score function, but its computation is typically intractable. In this work, we develop a probabilistic approximation scheme for the conditional score function which provably converges to the true distribution as the noise level decreases. With this scheme we are able to sample conditionally on nonlinear user-defined events at inference time, and match data statistics even when sampling from the tails of the distribution. |
https://proceedings.mlr.press/v202/fontanella23a.html | https://proceedings.mlr.press/v202/fontanella23a/fontanella23a.pdf | https://openreview.net/forum?id=yrVIUwRtzy | ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging | https://proceedings.mlr.press/v202/fontanella23a.html | Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw, Grant Mair, Emanuele Trucco, Amos Storkey | https://proceedings.mlr.press/v202/fontanella23a.html | ICML 2023 | In some medical imaging tasks and other settings where only small parts of the image are informative for the classification task, traditional CNNs can sometimes struggle to generalise. Manually annotated Regions of Interest (ROI) are often used to isolate the most informative parts of the image. However, these are expensive to collect and may vary significantly across annotators. To overcome these issues, we propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales. We refer to our method as Adversarial Counterfactual Attention (ACAT). ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84%, and exceeds the performance of competing methods. We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images. They are able to isolate the area of interest in brain and lung CT scans without using any manual annotations. In the task of localising the lesion location out of 6 possible regions, they obtain a score of 65.05% on brain CT scans, improving on the score of 61.29% obtained with the best competing method. |
https://proceedings.mlr.press/v202/forel23a.html | https://proceedings.mlr.press/v202/forel23a/forel23a.pdf | https://openreview.net/forum?id=4Lk9GHHueJ | Explainable Data-Driven Optimization: From Context to Decision and Back Again | https://proceedings.mlr.press/v202/forel23a.html | Alexandre Forel, Axel Parmentier, Thibaut Vidal | https://proceedings.mlr.press/v202/forel23a.html | ICML 2023 | Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters. While a vast body of work is dedicated to interpreting machine learning models in the classification setting, explaining decision pipelines involving learning algorithms remains unaddressed. This lack of interpretability can block the adoption of data-driven solutions as practitioners may not understand or trust the recommended decisions. We bridge this gap by introducing a counterfactual explanation methodology tailored to explain solutions to data-driven problems. We introduce two classes of explanations and develop methods to find nearest explanations of random forest and nearest-neighbor predictors. We demonstrate our approach by explaining key problems in operations management such as inventory management and routing. |
https://proceedings.mlr.press/v202/foster23a.html | https://proceedings.mlr.press/v202/foster23a/foster23a.pdf | https://openreview.net/forum?id=8gOvb9PoPC | Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games | https://proceedings.mlr.press/v202/foster23a.html | Dylan J Foster, Noah Golowich, Sham M. Kakade | https://proceedings.mlr.press/v202/foster23a.html | ICML 2023 | We consider the problem of decentralized multi-agent reinforcement learning in Markov games. A fundamental question is whether there exist algorithms that, when run independently by all agents, lead to no-regret for each player, analogous to celebrated convergence results for no-regret learning in normal-form games. While recent work has shown that such algorithms exist for restricted settings (notably, when regret is defined with respect to deviations to Markov policies), the question of whether independent no-regret learning can be achieved in the standard Markov game framework was open. We provide a decisive negative resolution to this problem, both from a computational and statistical perspective. We show that: • Under the complexity-theoretic assumption that PPAD $\neq$ P, there is no polynomial-time algorithm that attains no-regret in two-player general-sum Markov games when executed independently by all players, even when the game is known to the algorithm designer. • When the game is unknown, no algorithm, efficient or otherwise, can achieve no-regret without observing exponentially many episodes in the number of players. These results are proven via lower bounds for a simpler problem we refer to as SparseCCE, in which the goal is to compute a coarse correlated equilibrium that is “sparse” in the sense that it can be represented as a mixture of a small number of product policies. |
https://proceedings.mlr.press/v202/fotiadis23a.html | https://proceedings.mlr.press/v202/fotiadis23a/fotiadis23a.pdf | https://openreview.net/forum?id=PePBaTdFhc | Disentangled Generative Models for Robust Prediction of System Dynamics | https://proceedings.mlr.press/v202/fotiadis23a.html | Stathi Fotiadis, Mario Lino Valencia, Shunlong Hu, Stef Garasto, Chris D Cantwell, Anil Anthony Bharath | https://proceedings.mlr.press/v202/fotiadis23a.html | ICML 2023 | The use of deep neural networks for modelling system dynamics is increasingly popular, but long-term prediction accuracy and out-of-distribution generalization still present challenges. In this study, we address these challenges by considering the parameters of dynamical systems as factors of variation of the data and leverage their ground-truth values to disentangle the representations learned by generative models. Our experimental results in phase-space and observation-space dynamics, demonstrate the effectiveness of latent-space supervision in producing disentangled representations, leading to improved long-term prediction accuracy and out-of-distribution robustness. |
https://proceedings.mlr.press/v202/fournier23a.html | https://proceedings.mlr.press/v202/fournier23a/fournier23a.pdf | https://openreview.net/forum?id=qcU9ngAPGC | Can Forward Gradient Match Backpropagation? | https://proceedings.mlr.press/v202/fournier23a.html | Louis Fournier, Stephane Rivaud, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon | https://proceedings.mlr.press/v202/fournier23a.html | ICML 2023 | Forward Gradients - the idea of using directional derivatives in forward differentiation mode - have recently been shown to be utilizable for neural network training while avoiding problems generally associated with backpropagation gradient computation, such as locking and memorization requirements. The cost is the requirement to guess the step direction, which is hard in high dimensions. While current solutions rely on weighted averages over isotropic guess vector distributions, we propose to strongly bias our gradient guesses in directions that are much more promising, such as feedback obtained from small, local auxiliary networks. For a standard computer vision neural network, we conduct a rigorous study systematically covering a variety of combinations of gradient targets and gradient guesses, including those previously presented in the literature. We find that using gradients obtained from a local loss as a candidate direction drastically improves on random noise in Forward Gradient methods. |
https://proceedings.mlr.press/v202/foussoul23a.html | https://proceedings.mlr.press/v202/foussoul23a/foussoul23a.pdf | https://openreview.net/forum?id=fnCwNbOs0S | Last Switch Dependent Bandits with Monotone Payoff Functions | https://proceedings.mlr.press/v202/foussoul23a.html | Ayoub Foussoul, Vineet Goyal, Orestis Papadigenopoulos, Assaf Zeevi | https://proceedings.mlr.press/v202/foussoul23a.html | ICML 2023 | In a recent work, Laforgue et al. introduce the model of last switch dependent (LSD) bandits, in an attempt to capture nonstationary phenomena induced by the interaction between the player and the environment. Examples include satiation, where consecutive plays of the same action lead to decreased performance, or deprivation, where the payoff of an action increases after an interval of inactivity. In this work, we take a step towards understanding the approximability of planning LSD bandits, namely, the (NP-hard) problem of computing an optimal arm-pulling strategy under complete knowledge of the model. In particular, we design the first efficient constant approximation algorithm for the problem and show that, under a natural monotonicity assumption on the payoffs, its approximation guarantee (almost) matches the state-of-the-art for the special and well-studied class of recharging bandits (also known as delay-dependent). In this attempt, we develop new tools and insights for this class of problems, including a novel higher-dimensional relaxation and the technique of mirroring the evolution of virtual states. We believe that these novel elements could potentially be used for approaching richer classes of action-induced nonstationary bandits (e.g., special instances of restless bandits). In the case where the model parameters are initially unknown, we develop an online learning adaptation of our algorithm for which we provide sublinear regret guarantees against its full-information counterpart. |
https://proceedings.mlr.press/v202/francazi23a.html | https://proceedings.mlr.press/v202/francazi23a/francazi23a.pdf | https://openreview.net/forum?id=jNpmHrHVWZ | A Theoretical Analysis of the Learning Dynamics under Class Imbalance | https://proceedings.mlr.press/v202/francazi23a.html | Emanuele Francazi, Marco Baity-Jesi, Aurelien Lucchi | https://proceedings.mlr.press/v202/francazi23a.html | ICML 2023 | Data imbalance is a common problem in machine learning that can have a critical effect on the performance of a model. Various solutions exist but their impact on the convergence of the learning dynamics is not understood. Here, we elucidate the significant negative impact of data imbalance on learning, showing that the learning curves for minority and majority classes follow sub-optimal trajectories when training with a gradient-based optimizer. This slowdown is related to the imbalance ratio and can be traced back to a competition between the optimization of different classes. Our main contribution is the analysis of the convergence of full-batch gradient descent (GD) and stochastic gradient descent (SGD), and of variants that renormalize the contribution of each per-class gradient. We find that GD is not guaranteed to decrease the loss for each class but that this problem can be addressed by performing a per-class normalization of the gradient. With SGD, class imbalance has an additional effect on the direction of the gradients: the minority class suffers from a higher directional noise, which reduces the effectiveness of the per-class gradient normalization. Our findings not only allow us to understand the potential and limitations of strategies involving the per-class gradients, but also the reason for the effectiveness of previously used solutions for class imbalance, such as oversampling. |
https://proceedings.mlr.press/v202/frantar23a.html | https://proceedings.mlr.press/v202/frantar23a/frantar23a.pdf | https://openreview.net/forum?id=gsP05g8IeK | SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot | https://proceedings.mlr.press/v202/frantar23a.html | Elias Frantar, Dan Alistarh | https://proceedings.mlr.press/v202/frantar23a.html | ICML 2023 | We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. We can execute SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, in under 4.5 hours, and can reach 60% unstructured sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches. The code is available at: https://github.com/IST-DASLab/sparsegpt. |
https://proceedings.mlr.press/v202/freed23a.html | https://proceedings.mlr.press/v202/freed23a/freed23a.pdf | https://openreview.net/forum?id=YeTYJz7th5 | Learning Temporally Abstract World Models without Online Experimentation | https://proceedings.mlr.press/v202/freed23a.html | Benjamin Freed, Siddarth Venkatraman, Guillaume Adrien Sartoretti, Jeff Schneider, Howie Choset | https://proceedings.mlr.press/v202/freed23a.html | ICML 2023 | Agents that can build temporally abstract representations of their environment are better able to understand their world and make plans on extended time scales, with limited computational power and modeling capacity. However, existing methods for automatically learning temporally abstract world models usually require millions of online environmental interactions and incentivize agents to reach every accessible environmental state, which is infeasible for most real-world robots both in terms of data efficiency and hardware safety. In this paper, we present an approach for simultaneously learning sets of skills and temporally abstract, skill-conditioned world models purely from offline data, enabling agents to perform zero-shot online planning of skill sequences for new tasks. We show that our approach performs comparably to or better than a wide array of state-of-the-art offline RL algorithms on a number of simulated robotics locomotion and manipulation benchmarks, while offering a higher degree of adaptability to new goals. Finally, we show that our approach offers a much higher degree of robustness to perturbations in environmental dynamics, compared to policy-based methods. |
https://proceedings.mlr.press/v202/freund23a.html | https://proceedings.mlr.press/v202/freund23a/freund23a.pdf | https://openreview.net/forum?id=laR6abCxIu | A Coupled Flow Approach to Imitation Learning | https://proceedings.mlr.press/v202/freund23a.html | Gideon Joseph Freund, Elad Sarafian, Sarit Kraus | https://proceedings.mlr.press/v202/freund23a.html | ICML 2023 | In reinforcement learning and imitation learning, an object of central importance is the state distribution induced by the policy. It plays a crucial role in the policy gradient theorem, and references to it–along with the related state-action distribution–can be found all across the literature. Despite its importance, the state distribution is mostly discussed indirectly and theoretically, rather than being modeled explicitly. The reason being an absence of appropriate density estimation tools. In this work, we investigate applications of a normalizing flow based model for the aforementioned distributions. In particular, we use a pair of flows coupled through the optimality point of the Donsker-Varadhan representation of the Kullback-Leibler (KL) divergence, for distribution matching based imitation learning. Our algorithm, Coupled Flow Imitation Learning (CFIL), achieves state-of-the-art performance on benchmark tasks with a single expert trajectory and extends naturally to a variety of other settings, including the subsampled and state-only regimes. |
https://proceedings.mlr.press/v202/fu23a.html | https://proceedings.mlr.press/v202/fu23a/fu23a.pdf | https://openreview.net/forum?id=HwbKflLo6j | Simple Hardware-Efficient Long Convolutions for Sequence Modeling | https://proceedings.mlr.press/v202/fu23a.html | Daniel Y Fu, Elliot L Epstein, Eric Nguyen, Armin W Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Re | https://proceedings.mlr.press/v202/fu23a.html | ICML 2023 | State space models (SSMs) have high performance on long sequence modeling but require sophisticated initialization techniques and specialized implementations for high quality and runtime performance. We study whether a simple alternative can match SSMs in performance and efficiency: directly learning long convolutions over the sequence. We find that a key requirement to achieving high performance is keeping the convolution kernels smooth. We find that simple interventions, such as squashing the kernel weights, result in smooth kernels and recover SSM performance on a range of tasks including the long range arena, image classification, language modeling, and brain data modeling. Next, we develop FlashButterfly, an IO-aware algorithm to improve the runtime performance of long convolutions. FlashButterfly appeals to classic Butterfly decompositions of the convolution to reduce GPU memory IO and increase FLOP utilization. FlashButterfly speeds up convolutions by 2.2$\times$, and allows us to train on Path256, a challenging task with sequence length 64K, where we set state-of-the-art by 29.1 points while training 7.2$\times$ faster than prior work. Lastly, we introduce an extension to FlashButterfly that learns the coefficients of the Butterfly decomposition, increasing expressivity without increasing runtime. Using this extension, we outperform a Transformer on WikiText103 by 0.2 PPL with 30% fewer parameters. |
https://proceedings.mlr.press/v202/fu23b.html | https://proceedings.mlr.press/v202/fu23b/fu23b.pdf | https://openreview.net/forum?id=OTZyQCwgNL | MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses | https://proceedings.mlr.press/v202/fu23b.html | Yang Fu, Ishan Misra, Xiaolong Wang | https://proceedings.mlr.press/v202/fu23b.html | ICML 2023 | We propose a generalizable neural radiance fields - MonoNeRF, that can be trained on large-scale monocular videos of moving in static scenes without any ground-truth annotations of depth and camera poses. MonoNeRF follows an Autoencoder-based architecture, where the encoder estimates the monocular depth and the camera pose, and the decoder constructs a Multiplane NeRF representation based on the depth encoder feature, and renders the input frames with the estimated camera. The learning is supervised by the reconstruction error. Once the model is learned, it can be applied to multiple applications including depth estimation, camera pose estimation, and single-image novel view synthesis. More qualitative results are available at: https://oasisyang.github.io/mononerf. |
https://proceedings.mlr.press/v202/fu23c.html | https://proceedings.mlr.press/v202/fu23c/fu23c.pdf | https://openreview.net/forum?id=JsAMuzA9o2 | Go Beyond Imagination: Maximizing Episodic Reachability with World Models | https://proceedings.mlr.press/v202/fu23c.html | Yao Fu, Run Peng, Honglak Lee | https://proceedings.mlr.press/v202/fu23c.html | ICML 2023 | Efficient exploration is a challenging topic in reinforcement learning, especially for sparse reward tasks. To deal with the reward sparsity, people commonly apply intrinsic rewards to motivate agents to explore the state space efficiently. In this paper, we introduce a new intrinsic reward design called GoBI - Go Beyond Imagination, which combines the traditional lifelong novelty motivation with an episodic intrinsic reward that is designed to maximize the stepwise reachability expansion. More specifically, we apply learned world models to generate predicted future states with random actions. States with more unique predictions that are not in episodic memory are assigned high intrinsic rewards. Our method greatly outperforms previous state-of-the-art methods on 12 of the most challenging Minigrid navigation tasks and improves the sample efficiency on locomotion tasks from DeepMind Control Suite. |
https://proceedings.mlr.press/v202/fu23d.html | https://proceedings.mlr.press/v202/fu23d/fu23d.pdf | https://openreview.net/forum?id=MXuLl38AEm | Specializing Smaller Language Models towards Multi-Step Reasoning | https://proceedings.mlr.press/v202/fu23d.html | Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot | https://proceedings.mlr.press/v202/fu23d.html | ICML 2023 | The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very large-scale models. We show that such abilities can, in fact, be distilled down from GPT-3.5 (≥ 175B) to T5 variants (≤ 11B). We propose model specialization, to specialize the model’s ability towards a target task. The hypothesis is that large models (commonly viewed as larger than 100B) have strong modeling power such that they can perform a large spectrum of tasks. Small models (commonly viewed as smaller than 10B) have limited model capacity, but if we specialize their capacity towards a target task, the model can achieve decent performance improvements. We use multi-step math reasoning as our testbed because it is a very typical emergent ability. We show two important aspects of model abilities: (1) balancing language model’s performance on multiple tasks is a delicate matter, as improvements on one task may compromise other tasks; (2) yet by intentionally paying the price of decreased generic ability, we can clearly improve across different model scales smaller than 10B towards a specialized multi-step math reasoning ability. We further give comprehensive discussions about important design choices for better generalization, including the data format mixture and the start model checkpoint. We hope our practice and discoveries can serve as an important attempt towards specialized smaller models in the new research paradigm set by LLMs. |
https://proceedings.mlr.press/v202/fu23e.html | https://proceedings.mlr.press/v202/fu23e/fu23e.pdf | https://openreview.net/forum?id=yg4k1kYbXe | Accelerated Stochastic Optimization Methods under Quasar-convexity | https://proceedings.mlr.press/v202/fu23e.html | Qiang Fu, Dongchu Xu, Ashia Camage Wilson | https://proceedings.mlr.press/v202/fu23e.html | ICML 2023 | Non-convex optimization plays a key role in a growing number of machine learning applications. This motivates the identification of specialized structure that enables sharper theoretical analysis. One such identified structure is quasar-convexity, a non-convex generalization of convexity that subsumes convex functions. Existing algorithms for minimizing quasar-convex functions in the stochastic setting have either high complexity or slow convergence, which prompts us to derive a new class of stochastic methods for optimizing smooth quasar-convex functions. We demonstrate that our algorithms have fast convergence and outperform existing algorithms on several examples, including the classical problem of learning linear dynamical systems. We also present a unified analysis of our newly proposed algorithms and a previously studied deterministic algorithm. |
https://proceedings.mlr.press/v202/fu23f.html | https://proceedings.mlr.press/v202/fu23f/fu23f.pdf | https://openreview.net/forum?id=Rg5CRU2M4Z | Meta-learning Parameterized Skills | https://proceedings.mlr.press/v202/fu23f.html | Haotian Fu, Shangqun Yu, Saket Tiwari, Michael Littman, George Konidaris | https://proceedings.mlr.press/v202/fu23f.html | ICML 2023 | We propose a novel parameterized skill-learning algorithm that aims to learn transferable parameterized skills and synthesize them into a new action space that supports efficient learning in long-horizon tasks. We propose to leverage off-policy Meta-RL combined with a trajectory-centric smoothness term to learn a set of parameterized skills. Our agent can use these learned skills to construct a three-level hierarchical framework that models a Temporally-extended Parameterized Action Markov Decision Process. We empirically demonstrate that the proposed algorithms enable an agent to solve a set of highly difficult long-horizon (obstacle-course and robot manipulation) tasks. |
https://proceedings.mlr.press/v202/fu23g.html | https://proceedings.mlr.press/v202/fu23g/fu23g.pdf | https://openreview.net/forum?id=cHhGmXDiHp | NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations | https://proceedings.mlr.press/v202/fu23g.html | Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan Celine Lin | https://proceedings.mlr.press/v202/fu23g.html | ICML 2023 | Generalizable Neural Radiance Fields (GNeRF) are one of the most promising real-world solutions for novel view synthesis, thanks to their cross-scene generalization capability and thus the possibility of instant rendering on new scenes. While adversarial robustness is essential for real-world applications, little study has been devoted to understanding its implication on GNeRF. We hypothesize that because GNeRF is implemented by conditioning on the source views from new scenes, which are often acquired from the Internet or third-party providers, there are potential new security concerns regarding its real-world applications. Meanwhile, existing understanding and solutions for neural networks’ adversarial robustness may not be applicable to GNeRF, due to its 3D nature and uniquely diverse operations. To this end, we present NeRFool, which to the best of our knowledge is the first work that sets out to understand the adversarial robustness of GNeRF. Specifically, NeRFool unveils the vulnerability patterns and important insights regarding GNeRF’s adversarial robustness. Built upon the above insights gained from NeRFool, we further develop NeRFool$^+$, which integrates two techniques capable of effectively attacking GNeRF across a wide range of target views, and provide guidelines for defending against our proposed attacks. We believe that our NeRFool/NeRFool$^+$ lays the initial foundation for future innovations in developing robust real-world GNeRF solutions. Our codes are available at: https://github.com/GATECH-EIC/NeRFool. |
https://proceedings.mlr.press/v202/furelos-blanco23a.html | https://proceedings.mlr.press/v202/furelos-blanco23a/furelos-blanco23a.pdf | https://openreview.net/forum?id=qrH8ERUBcE | Hierarchies of Reward Machines | https://proceedings.mlr.press/v202/furelos-blanco23a.html | Daniel Furelos-Blanco, Mark Law, Anders Jonsson, Krysia Broda, Alessandra Russo | https://proceedings.mlr.press/v202/furelos-blanco23a.html | ICML 2023 | Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task through a finite-state machine whose edges encode subgoals of the task using high-level events. The structure of RMs enables the decomposition of a task into simpler and independently solvable subtasks that help tackle long-horizon and/or sparse reward tasks. We propose a formalism for further abstracting the subtask structure by endowing an RM with the ability to call other RMs, thus composing a hierarchy of RMs (HRM). We exploit HRMs by treating each call to an RM as an independently solvable subtask using the options framework, and describe a curriculum-based method to learn HRMs from traces observed by the agent. Our experiments reveal that exploiting a handcrafted HRM leads to faster convergence than with a flat HRM, and that learning an HRM is feasible in cases where its equivalent flat representation is not. |
https://proceedings.mlr.press/v202/gadhikar23a.html | https://proceedings.mlr.press/v202/gadhikar23a/gadhikar23a.pdf | https://openreview.net/forum?id=cKYIyT9wvo | Why Random Pruning Is All We Need to Start Sparse | https://proceedings.mlr.press/v202/gadhikar23a.html | Advait Harshal Gadhikar, Sohom Mukherjee, Rebekka Burkholz | https://proceedings.mlr.press/v202/gadhikar23a.html | ICML 2023 | Random masks define surprisingly effective sparse neural network models, as has been shown empirically. The resulting sparse networks can often compete with dense architectures and state-of-the-art lottery ticket pruning algorithms, even though they do not rely on computationally expensive prune-train iterations and can be drawn initially without significant computational overhead. We offer a theoretical explanation of how random masks can approximate arbitrary target networks if they are wider by a logarithmic factor in the inverse sparsity $1 / \log(1/\text{sparsity})$. This overparameterization factor is necessary at least for 3-layer random networks, which elucidates the observed degrading performance of random networks at higher sparsity. At moderate to high sparsity levels, however, our results imply that sparser networks are contained within random source networks so that any dense-to-sparse training scheme can be turned into a computationally more efficient sparse-to-sparse one by constraining the search to a fixed random mask. We demonstrate the feasibility of this approach in experiments for different pruning methods and propose particularly effective choices of initial layer-wise sparsity ratios of the random source network. As a special case, we show theoretically and experimentally that random source networks also contain strong lottery tickets. |
https://proceedings.mlr.press/v202/gallouedec23a.html | https://proceedings.mlr.press/v202/gallouedec23a/gallouedec23a.pdf | https://openreview.net/forum?id=4TtG42xJvC | Cell-Free Latent Go-Explore | https://proceedings.mlr.press/v202/gallouedec23a.html | Quentin Gallouédec, Emmanuel Dellandrea | https://proceedings.mlr.press/v202/gallouedec23a.html | ICML 2023 | In this paper, we introduce Latent Go-Explore (LGE), a simple and general approach based on the Go-Explore paradigm for exploration in reinforcement learning (RL). Go-Explore was initially introduced with a strong domain knowledge constraint for partitioning the state space into cells. However, in most real-world scenarios, drawing domain knowledge from raw observations is complex and tedious. If the cell partitioning is not informative enough, Go-Explore can completely fail to explore the environment. We argue that the Go-Explore approach can be generalized to any environment without domain knowledge and without cells by exploiting a learned latent representation. Thus, we show that LGE can be flexibly combined with any strategy for learning a latent representation. Our results indicate that LGE, although simpler than Go-Explore, is more robust and outperforms state-of-the-art algorithms in terms of pure exploration on multiple hard-exploration environments including Montezuma’s Revenge. The LGE implementation is available as open-source at https://github.com/qgallouedec/lge. |
https://proceedings.mlr.press/v202/gammelli23a.html | https://proceedings.mlr.press/v202/gammelli23a/gammelli23a.pdf | https://openreview.net/forum?id=rzN05i4GOE | Graph Reinforcement Learning for Network Control via Bi-Level Optimization | https://proceedings.mlr.press/v202/gammelli23a.html | Daniele Gammelli, James Harrison, Kaidi Yang, Marco Pavone, Filipe Rodrigues, Francisco C. Pereira | https://proceedings.mlr.press/v202/gammelli23a.html | ICML 2023 | Optimization problems over dynamic networks have been extensively studied and widely used in the past decades to formulate numerous real-world problems. However, (1) traditional optimization-based approaches do not scale to large networks, and (2) the design of good heuristics or approximation algorithms often requires significant manual trial-and-error. In this work, we argue that data-driven strategies can automate this process and learn efficient algorithms without compromising optimality. To do so, we present network control problems through the lens of reinforcement learning and propose a graph network-based framework to handle a broad class of problems. Instead of naively computing actions over high-dimensional graph elements, e.g., edges, we propose a bi-level formulation where we (1) specify a desired next state via RL, and (2) solve a convex program to best achieve it, leading to drastically improved scalability and performance. We further highlight a collection of desirable features to system designers, investigate design decisions, and present experiments on real-world control problems showing the utility, scalability, and flexibility of our framework. |
https://proceedings.mlr.press/v202/ganesh23a.html | https://proceedings.mlr.press/v202/ganesh23a/ganesh23a.pdf | https://openreview.net/forum?id=1d3O0b1rbL | Why Is Public Pretraining Necessary for Private Model Training? | https://proceedings.mlr.press/v202/ganesh23a.html | Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Guha Thakurta, Lun Wang | https://proceedings.mlr.press/v202/ganesh23a.html | ICML 2023 | In the privacy-utility tradeoff of a model trained on benchmark language and vision tasks, remarkable improvements have been widely reported when the model is pretrained on public data. Some gain is expected as these models inherit the benefits of transfer learning, which is the standard motivation in non-private settings. However, the stark contrast in the gain of pretraining between non-private and private machine learning suggests that the gain in the latter is rooted in a fundamentally different cause. To explain this phenomenon, we hypothesize that the non-convex loss landscape of a model training necessitates the optimization algorithm to go through two phases. In the first, the algorithm needs to select a good “basin” in the loss landscape. In the second, the algorithm solves an easy optimization within that basin. The former is a harder problem to solve with private data, while the latter is harder to solve with public data due to a distribution shift or data scarcity. Guided by this intuition, we provide theoretical constructions that provably demonstrate the separation between private training with and without public pretraining. Further, systematic experiments on CIFAR10 and Librispeech provide supporting evidence for our hypothesis. |
https://proceedings.mlr.press/v202/ganz23a.html | https://proceedings.mlr.press/v202/ganz23a/ganz23a.pdf | https://openreview.net/forum?id=9TbDVDX7de | Do Perceptually Aligned Gradients Imply Robustness? | https://proceedings.mlr.press/v202/ganz23a.html | Roy Ganz, Bahjat Kawar, Michael Elad | https://proceedings.mlr.press/v202/ganz23a.html | ICML 2023 | Adversarially robust classifiers possess a trait that non-robust models do not - Perceptually Aligned Gradients (PAG). Their gradients with respect to the input align well with human perception. Several works have identified PAG as a byproduct of robust training, but none have considered it as a standalone phenomenon nor studied its own implications. In this work, we focus on this trait and test whether Perceptually Aligned Gradients imply Robustness. To this end, we develop a novel objective to directly promote PAG in training classifiers and examine whether models with such gradients are more robust to adversarial attacks. Extensive experiments on multiple datasets and architectures validate that models with aligned gradients exhibit significant robustness, exposing the surprising bidirectional connection between PAG and robustness. Lastly, we show that better gradient alignment leads to increased robustness and harness this observation to boost the robustness of existing adversarial training techniques. |
https://proceedings.mlr.press/v202/gao23a.html | https://proceedings.mlr.press/v202/gao23a/gao23a.pdf | https://openreview.net/forum?id=AM1UcqDDDv | Solving Linear Programs with Fast Online Learning Algorithms | https://proceedings.mlr.press/v202/gao23a.html | Wenzhi Gao, Dongdong Ge, Chunlin Sun, Yinyu Ye | https://proceedings.mlr.press/v202/gao23a.html | ICML 2023 | This paper presents fast first-order methods for solving linear programs (LPs) approximately. We adapt online linear programming algorithms to offline LPs and obtain algorithms that avoid any matrix multiplication. We also introduce a variable-duplication technique that copies each variable $K$ times and reduces the optimality gap and constraint violation by a factor of $\sqrt{K}$. Furthermore, we show how online algorithms can be effectively integrated into sifting, a column generation scheme for large-scale LPs. Numerical experiments demonstrate that our methods can serve as either an approximate direct solver, or an initialization subroutine for exact LP solving. |
https://proceedings.mlr.press/v202/gao23b.html | https://proceedings.mlr.press/v202/gao23b/gao23b.pdf | https://openreview.net/forum?id=DRMh8mVEav | Gradient Descent Finds the Global Optima of Two-Layer Physics-Informed Neural Networks | https://proceedings.mlr.press/v202/gao23b.html | Yihang Gao, Yiqi Gu, Michael Ng | https://proceedings.mlr.press/v202/gao23b.html | ICML 2023 | The main aim of this paper is to conduct the convergence analysis of the gradient descent for two-layer physics-informed neural networks (PINNs). Here, the loss function involves derivatives of neural network outputs with respect to its inputs, so the interaction between the trainable parameters is more complicated compared with simple regression and classification tasks. We first develop the positive definiteness of Gram matrices and prove that the gradient flow finds the global optima of the empirical loss under over-parameterization. Then, we demonstrate that the standard gradient descent converges to the global optima of the loss with proper choices of learning rates. The framework of our analysis works for various categories of PDEs (e.g., linear second-order PDEs) and common types of network initialization (LecunUniform etc.). Our theoretical results do not need a very strict hypothesis for training samples and have a looser requirement on the network width compared with some previous works. |
https://proceedings.mlr.press/v202/gao23c.html | https://proceedings.mlr.press/v202/gao23c/gao23c.pdf | https://openreview.net/forum?id=2F3bt9s0iW | Generalizing Neural Wave Functions | https://proceedings.mlr.press/v202/gao23c.html | Nicholas Gao, Stephan Günnemann | https://proceedings.mlr.press/v202/gao23c.html | ICML 2023 | Recent neural network-based wave functions have achieved state-of-the-art accuracies in modeling ab-initio ground-state potential energy surface. However, these networks can only solve different spatial arrangements of the same set of atoms. To overcome this limitation, we present Graph-learned orbital embeddings (Globe), a neural network-based reparametrization method that can adapt neural wave functions to different molecules. Globe learns representations of local electronic structures that generalize across molecules via spatial message passing by connecting molecular orbitals to covalent bonds. Further, we propose a size-consistent wave function Ansatz, the Molecular orbital network (Moon), tailored to jointly solve Schrödinger equations of different molecules. In our experiments, we find Moon converging in 4.5 times fewer steps to similar accuracy as previous methods or to lower energies given the same time. Further, our analysis shows that Moon’s energy estimate scales additively with increased system sizes, unlike previous work where we observe divergence. In both computational chemistry and machine learning, we are the first to demonstrate that a single wave function can solve the Schrödinger equation of molecules with different atoms jointly. |
https://proceedings.mlr.press/v202/gao23d.html | https://proceedings.mlr.press/v202/gao23d/gao23d.pdf | https://openreview.net/forum?id=4JCKwAiRPX | On the Impact of Algorithmic Recourse on Social Segregation | https://proceedings.mlr.press/v202/gao23d.html | Ruijiang Gao, Himabindu Lakkaraju | https://proceedings.mlr.press/v202/gao23d.html | ICML 2023 | As predictive models seep into several real-world applications, it has become critical to ensure that individuals who are negatively impacted by the outcomes of these models are provided with a means for recourse. To this end, there has been a growing body of research on algorithmic recourse in recent years. While recourses can be extremely beneficial to affected individuals, their implementation at a large scale can lead to potential data distribution shifts and other unintended consequences. However, there is little to no research on understanding the impact of algorithmic recourse after implementation. In this work, we address the aforementioned gaps by making one of the first attempts at analyzing the delayed societal impact of algorithmic recourse. To this end, we theoretically and empirically analyze the recourses output by state-of-the-art algorithms. Our analysis demonstrates that large-scale implementation of recourses by end users may exacerbate social segregation. To address this problem, we propose novel algorithms which leverage implicit and explicit conditional generative models to not only minimize the chance of segregation but also provide realistic recourses. Extensive experimentation with real-world datasets demonstrates the efficacy of the proposed approaches. |
https://proceedings.mlr.press/v202/gao23e.html | https://proceedings.mlr.press/v202/gao23e/gao23e.pdf | https://openreview.net/forum?id=RlqgQXZx6r | DDGR: Continual Learning with Deep Diffusion-based Generative Replay | https://proceedings.mlr.press/v202/gao23e.html | Rui Gao, Weiwei Liu | https://proceedings.mlr.press/v202/gao23e.html | ICML 2023 | Popular deep-learning models in the field of image classification suffer from catastrophic forgetting—models will forget previously acquired skills when learning new ones. Generative replay (GR), which typically consists of a generator and a classifier, is an efficient way to mitigate catastrophic forgetting. However, conventional GR methods only focus on a single instruction relationship (generator-to-classifier), where the generator synthesizes samples for previous tasks to instruct the training of the classifier, while ignoring the ways in which the classifier can benefit the generator. In addition, most generative replay methods typically reuse the generated samples to update the generator, which causes the samples regenerated by the generator deviating from the distribution of previous tasks. To overcome these two issues, we propose a novel approach, called deep diffusion-based generative replay (DDGR), which adopts a diffusion model as the generator and calculates an instruction-operator through the classifier to instruct the generation of samples. Extensive experiments in class incremental (CI) and class incremental with repetition (CIR) settings demonstrate the advantages of DDGR. Our code is available at https://github.com/xiaocangshengGR/DDGR. |
https://proceedings.mlr.press/v202/gao23f.html | https://proceedings.mlr.press/v202/gao23f/gao23f.pdf | https://openreview.net/forum?id=M1fd9Z00sj | PAL: Program-aided Language Models | https://proceedings.mlr.press/v202/gao23f.html | Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig | https://proceedings.mlr.press/v202/gao23f.html | ICML 2023 | Large language models (LLMs) have demonstrated an impressive ability to perform arithmetic and symbolic reasoning tasks, when provided with a few examples at test time ("few-shot prompting"). Much of this success can be attributed to prompting methods such as "chain-of-thought", which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem. While LLMs seem to be adept at this sort of step-by-step decomposition, LLMs often make logical and arithmetic mistakes in the solution part, even when the problem is decomposed correctly. In this paper, we present Program-Aided Language models (PAL): a novel approach that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter. With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and algorithmic reasoning tasks from BIG-Bench Hard and others. In all these natural language reasoning tasks, generating code using an LLM and reasoning using a Python interpreter leads to more accurate results than much larger models. For example, PAL using Codex achieves state-of-the-art few-shot accuracy on GSM8K, surpassing PaLM which uses chain-of-thought by absolute 15% top-1. |
https://proceedings.mlr.press/v202/gao23g.html | https://proceedings.mlr.press/v202/gao23g/gao23g.pdf | https://openreview.net/forum?id=4SHQv4cp3I | Out-of-Domain Robustness via Targeted Augmentations | https://proceedings.mlr.press/v202/gao23g.html | Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy Liang | https://proceedings.mlr.press/v202/gao23g.html | ICML 2023 | Models trained on one set of domains often suffer performance drops on unseen domains, e.g., when wildlife monitoring models are deployed in new camera locations. In this work, we study principles for designing data augmentations for out-of-domain (OOD) generalization. In particular, we focus on real-world scenarios in which some domain-dependent features are robust, i.e., some features that vary across domains are predictive OOD. For example, in the wildlife monitoring application above, image backgrounds vary across camera locations but indicate habitat type, which helps predict the species of photographed animals. Motivated by theoretical analysis on a linear setting, we propose targeted augmentations, which selectively randomize spurious domain-dependent features while preserving robust ones. We prove that targeted augmentations improve OOD performance, allowing models to generalize better with fewer domains. In contrast, existing approaches such as generic augmentations, which fail to randomize domain-dependent features, and domain-invariant augmentations, which randomize all domain-dependent features, both perform poorly OOD. In experiments on three real-world datasets, we show that targeted augmentations set new states-of-the-art for OOD performance by 3.2-15.2%. |
https://proceedings.mlr.press/v202/gao23h.html | https://proceedings.mlr.press/v202/gao23h/gao23h.pdf | https://openreview.net/forum?id=bBLjms8nZE | Scaling Laws for Reward Model Overoptimization | https://proceedings.mlr.press/v202/gao23h.html | Leo Gao, John Schulman, Jacob Hilton | https://proceedings.mlr.press/v202/gao23h.html | ICML 2023 | In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart’s law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed “gold-standard” reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-$n$ sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment. |
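As a concrete illustration of the best-of-$n$ setup studied above, the sketch below draws $n$ candidate responses from a policy, keeps the one ranked highest by the proxy reward model, and scores that choice with the gold reward model; the $\log n - (n-1)/n$ expression is the standard analytic KL estimate for best-of-$n$ selection. `sample_response`, `proxy_reward`, and `gold_reward` are assumed callables, not the paper's code.

```python
# Hedged sketch of best-of-n sampling against a proxy reward model.
import math

def best_of_n(prompt, n, sample_response, proxy_reward, gold_reward):
    # Draw n candidates from the base policy and pick the proxy-preferred one.
    candidates = [sample_response(prompt) for _ in range(n)]
    best = max(candidates, key=lambda r: proxy_reward(prompt, r))
    # Standard analytic KL of the best-of-n policy from the base policy.
    kl = math.log(n) - (n - 1) / n
    # Gold score measures how much the proxy-driven selection actually helped.
    return gold_reward(prompt, best), kl
```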
https://proceedings.mlr.press/v202/garcia23a.html | https://proceedings.mlr.press/v202/garcia23a/garcia23a.pdf | https://openreview.net/forum?id=zvCSNsoyKW | The Unreasonable Effectiveness of Few-shot Learning for Machine Translation | https://proceedings.mlr.press/v202/garcia23a.html | Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Melvin Johnson, Orhan Firat | https://proceedings.mlr.press/v202/garcia23a.html | ICML 2023 | We demonstrate the potential of few-shot translation systems, trained with unpaired language data, for both high and low-resource language pairs. We show that with only 5 examples of high-quality translation data shown at inference, a transformer decoder-only model trained solely with self-supervised learning is able to match specialized supervised state-of-the-art models as well as more general commercial translation systems. In particular, we outperform the best performing system on the WMT’21 English-Chinese news translation task by only using five examples of English-Chinese parallel data at inference. Furthermore, the resulting models are two orders of magnitude smaller than state-of-the-art language models. We then analyze the factors which impact the performance of few-shot translation systems, and highlight that the quality of the few-shot demonstrations heavily determines the quality of the translations generated by our models. Finally, we show that the few-shot paradigm also provides a way to control certain attributes of the translation — we show that we are able to control for regional varieties and formality using only five examples at inference, paving the way towards controllable machine translation systems. |
https://proceedings.mlr.press/v202/garg23a.html | https://proceedings.mlr.press/v202/garg23a/garg23a.pdf | https://openreview.net/forum?id=b0xhqwNhez | RLSbench: Domain Adaptation Under Relaxed Label Shift | https://proceedings.mlr.press/v202/garg23a.html | Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, Zachary Chase Lipton | https://proceedings.mlr.press/v202/garg23a.html | ICML 2023 | Despite the emergence of principled methods for domain adaptation under label shift, their sensitivity to shifts in class-conditional distributions is precariously underexplored. Meanwhile, popular deep domain adaptation heuristics tend to falter when faced with label proportion shifts. While several papers modify these heuristics in attempts to handle label proportion shifts, inconsistencies in evaluation standards, datasets, and baselines make it difficult to gauge the current best practices. In this paper, we introduce RLSbench, a large-scale benchmark for relaxed label shift, consisting of $>$500 distribution shift pairs spanning vision, tabular, and language modalities, with varying label proportions. Unlike existing benchmarks, which primarily focus on shifts in class-conditional $p(x|y)$, our benchmark also focuses on label marginal shifts. First, we assess 13 popular domain adaptation methods, demonstrating more widespread failures under label proportion shifts than were previously known. Next, we develop an effective two-step meta-algorithm that is compatible with most domain adaptation heuristics: (i) pseudo-balance the data at each epoch; and (ii) adjust the final classifier with a target label distribution estimate. The meta-algorithm improves existing domain adaptation heuristics under large label proportion shifts, often by 2–10% accuracy points, while conferring minimal effect ($<$0.5%) when label proportions do not shift. We hope that these findings and the availability of RLSbench will encourage researchers to rigorously evaluate proposed methods in relaxed label shift settings. Code is publicly available at https://github.com/acmi-lab/RLSbench. |
https://proceedings.mlr.press/v202/garrido23a.html | https://proceedings.mlr.press/v202/garrido23a/garrido23a.pdf | https://openreview.net/forum?id=neTWpgvVbo | RankMe: Assessing the Downstream Performance of Pretrained Self-Supervised Representations by Their Rank | https://proceedings.mlr.press/v202/garrido23a.html | Quentin Garrido, Randall Balestriero, Laurent Najman, Yann Lecun | https://proceedings.mlr.press/v202/garrido23a.html | ICML 2023 | Joint-Embedding Self Supervised Learning (JE-SSL) has seen rapid development, with the emergence of many method variations but only a few principled guidelines that would help practitioners to successfully deploy them. The main reason for this pitfall is JE-SSL’s core principle of not employing any input reconstruction, which removes the visual cues of unsuccessful training. Combined with uninformative loss values, this makes it difficult to deploy SSL on a new dataset for which no labels can help to judge the quality of the learned representation. In this study, we develop a simple unsupervised criterion that is indicative of the quality of the learned JE-SSL representations: their effective rank. Albeit simple and computationally friendly, this method, coined RankMe, allows one to assess the performance of JE-SSL representations, even on different downstream datasets, without requiring any labels. A further benefit of RankMe is that it does not have any training or hyper-parameters to tune. Through thorough empirical experiments involving hundreds of training episodes, we demonstrate how RankMe can be used for hyperparameter selection with nearly no reduction in final performance compared to current selection methods that involve a dataset’s labels. We hope that RankMe will facilitate the deployment of JE-SSL in domains that do not have the opportunity to rely on labels for assessing representation quality. |
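A minimal sketch of an effective-rank criterion in the spirit of the abstract above: take the singular values of an embedding matrix, normalize them into a distribution, and exponentiate its entropy. This is written from the abstract alone; the epsilon and the exact normalization are assumptions rather than the paper's reference implementation.

```python
# Hedged sketch: effective rank of a matrix of learned representations.
import numpy as np

def effective_rank(embeddings: np.ndarray, eps: float = 1e-12) -> float:
    """embeddings: (num_samples, dim) matrix of representations."""
    singular_values = np.linalg.svd(embeddings, compute_uv=False)
    p = singular_values / (singular_values.sum() + eps)   # normalize into a distribution
    entropy = -(p * np.log(p + eps)).sum()                # Shannon entropy of that distribution
    return float(np.exp(entropy))                         # exponentiated entropy = effective rank
```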
https://proceedings.mlr.press/v202/garrido23b.html | https://proceedings.mlr.press/v202/garrido23b/garrido23b.pdf | https://openreview.net/forum?id=2sIVxJ9Hp0 | Self-supervised learning of Split Invariant Equivariant representations | https://proceedings.mlr.press/v202/garrido23b.html | Quentin Garrido, Laurent Najman, Yann Lecun | https://proceedings.mlr.press/v202/garrido23b.html | ICML 2023 | Recent progress has been made towards learning invariant or equivariant representations with self-supervised learning. While invariant methods are evaluated on large-scale datasets, equivariant ones are evaluated in smaller, more controlled settings. We aim at bridging the gap between the two in order to learn more diverse representations that are suitable for a wide range of tasks. We start by introducing a dataset called 3DIEBench, consisting of renderings from 3D models over 55 classes and more than 2.5 million images where we have full control over the transformations applied to the objects. We further introduce a predictor architecture based on hypernetworks to learn equivariant representations with no possible collapse to invariance. We introduce SIE (Split Invariant-Equivariant), which combines the hypernetwork-based predictor with representations split in two parts, one invariant, the other equivariant, to learn richer representations. We demonstrate significant performance gains over existing methods on equivariance-related tasks from both a qualitative and quantitative point of view. We further analyze our introduced predictor and show how it steers the learned latent space. We hope that both our introduced dataset and approach will enable learning richer representations without supervision in more complex scenarios. Code and data are available at https://github.com/garridoq/SIE. |
https://proceedings.mlr.press/v202/gascon23a.html | https://proceedings.mlr.press/v202/gascon23a/gascon23a.pdf | https://openreview.net/forum?id=zN4oRCrlnM | Federated Heavy Hitter Recovery under Linear Sketching | https://proceedings.mlr.press/v202/gascon23a.html | Adria Gascon, Peter Kairouz, Ziteng Sun, Ananda Theertha Suresh | https://proceedings.mlr.press/v202/gascon23a.html | ICML 2023 | Motivated by real-life deployments of multi-round federated analytics with secure aggregation, we investigate the fundamental communication-accuracy tradeoffs of the heavy hitter discovery and approximate (open-domain) histogram problems under a linear sketching constraint. We propose efficient algorithms based on local subsampling and invertible bloom look-up tables (IBLTs). We also show that our algorithms are information-theoretically optimal for a broad class of interactive schemes. The results show that the linear sketching constraint does increase the communication cost for both tasks by introducing an extra linear dependence on the number of users in a round. Moreover, our results also establish a separation between the communication cost for heavy hitter discovery and approximate histogram in the multi-round setting. The dependence on the number of rounds $R$ is at most logarithmic for heavy hitter discovery whereas that of approximate histogram is $\Theta(\sqrt{R})$. We also empirically demonstrate our findings. |
https://proceedings.mlr.press/v202/gaur23a.html | https://proceedings.mlr.press/v202/gaur23a/gaur23a.pdf | https://openreview.net/forum?id=2azoCxs1jc | On the Global Convergence of Fitted Q-Iteration with Two-layer Neural Network Parametrization | https://proceedings.mlr.press/v202/gaur23a.html | Mudit Gaur, Vaneet Aggarwal, Mridul Agarwal | https://proceedings.mlr.press/v202/gaur23a.html | ICML 2023 | Deep Q-learning based algorithms have been applied successfully in many decision-making problems, while their theoretical foundations are not as well understood. In this paper, we study Fitted Q-Iteration with a two-layer ReLU neural network parameterization, and establish sample complexity guarantees for the algorithm. Our approach estimates the Q-function in each iteration using a convex optimization problem. We show that this approach achieves a sample complexity of $\tilde{\mathcal{O}}(1/\epsilon^{2})$, which is order-optimal. This result holds for countable state spaces and does not require any assumptions such as a linear or low-rank structure on the MDP. |
https://proceedings.mlr.press/v202/ge23a.html | https://proceedings.mlr.press/v202/ge23a/ge23a.pdf | https://openreview.net/forum?id=hd8wCvtgIN | A Reinforcement Learning Framework for Dynamic Mediation Analysis | https://proceedings.mlr.press/v202/ge23a.html | Lin Ge, Jitao Wang, Chengchun Shi, Zhenke Wu, Rui Song | https://proceedings.mlr.press/v202/ge23a.html | ICML 2023 | Mediation analysis learns the causal effect transmitted via mediator variables between treatments and outcomes, and receives increasing attention in various scientific domains to elucidate causal relations. Most existing works focus on point-exposure studies where each subject only receives one treatment at a single time point. However, there are a number of applications (e.g., mobile health) where the treatments are sequentially assigned over time and the dynamic mediation effects are of primary interest. Proposing a reinforcement learning (RL) framework, we are the first to evaluate dynamic mediation effects in settings with infinite horizons. We decompose the average treatment effect into an immediate direct effect, an immediate mediation effect, a delayed direct effect, and a delayed mediation effect. Upon the identification of each effect component, we further develop robust and semi-parametrically efficient estimators under the RL framework to infer these causal effects. The superior performance of the proposed method is demonstrated through extensive numerical studies, theoretical results, and an analysis of a mobile health dataset. A Python implementation of the proposed procedure is available at https://github.com/linlinlin97/MediationRL. |
https://proceedings.mlr.press/v202/geffner23a.html | https://proceedings.mlr.press/v202/geffner23a/geffner23a.pdf | https://openreview.net/forum?id=5Q5wD1sAKj | Compositional Score Modeling for Simulation-Based Inference | https://proceedings.mlr.press/v202/geffner23a.html | Tomas Geffner, George Papamakarios, Andriy Mnih | https://proceedings.mlr.press/v202/geffner23a.html | ICML 2023 | Neural Posterior Estimation methods for simulation-based inference can be ill-suited for dealing with posterior distributions obtained by conditioning on multiple observations, as they tend to require a large number of simulator calls to learn accurate approximations. In contrast, Neural Likelihood Estimation methods can handle multiple observations at inference time after learning from individual observations, but they rely on standard inference methods, such as MCMC or variational inference, which come with certain performance drawbacks. We introduce a new method based on conditional score modeling that enjoys the benefits of both approaches. We model the scores of the (diffused) posterior distributions induced by individual observations, and introduce a way of combining the learned scores to approximately sample from the target posterior distribution. Our approach is sample-efficient, can naturally aggregate multiple observations at inference time, and avoids the drawbacks of standard inference methods. |
https://proceedings.mlr.press/v202/geiping23a.html | https://proceedings.mlr.press/v202/geiping23a/geiping23a.pdf | https://openreview.net/forum?id=2snzoozOWH | Cramming: Training a Language Model on a single GPU in one day. | https://proceedings.mlr.press/v202/geiping23a.html | Jonas Geiping, Tom Goldstein | https://proceedings.mlr.press/v202/geiping23a.html | ICML 2023 | Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite question: How far can we get with a single GPU in just one day? We investigate the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU. Aside from re-analyzing nearly all components of the pretraining pipeline for this scenario and providing a modified pipeline with performance close to BERT, we investigate why scaling down is hard, and which modifications actually improve performance in this scenario. We provide evidence that even in this constrained setting, performance closely follows scaling laws observed in large-compute settings. Through the lens of scaling laws, we categorize a range of recent improvements to training and architecture and discuss their merit and practical applicability (or lack thereof) for the limited compute setting. We provide code to reproduce all experiments at github.com/JonasGeiping/cramming . |
https://proceedings.mlr.press/v202/geisler23a.html | https://proceedings.mlr.press/v202/geisler23a/geisler23a.pdf | https://openreview.net/forum?id=a7PVyayyfp | Transformers Meet Directed Graphs | https://proceedings.mlr.press/v202/geisler23a.html | Simon Geisler, Yujia Li, Daniel J Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, Cosmin Paduraru | https://proceedings.mlr.press/v202/geisler23a.html | ICML 2023 | Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs. However, transformers for directed graphs are a surprisingly underexplored topic, despite their applicability to ubiquitous domains, including source code and logic circuits. In this work, we propose two direction- and structure-aware positional encodings for directed graphs: (1) the eigenvectors of the Magnetic Laplacian — a direction-aware generalization of the combinatorial Laplacian; (2) directional random walk encodings. Empirically, we show that the extra directionality information is useful in various downstream tasks, including correctness testing of sorting networks and source code understanding. Together with a data-flow-centric graph construction, our model outperforms the prior state of the art on the Open Graph Benchmark Code2 relatively by 14.7%. |
https://proceedings.mlr.press/v202/genewein23a.html | https://proceedings.mlr.press/v202/genewein23a/genewein23a.pdf | https://openreview.net/forum?id=gyHGzyIuEJ | Memory-Based Meta-Learning on Non-Stationary Distributions | https://proceedings.mlr.press/v202/genewein23a.html | Tim Genewein, Gregoire Deletang, Anian Ruoss, Li Kevin Wenliang, Elliot Catt, Vincent Dutordoir, Jordi Grau-Moya, Laurent Orseau, Marcus Hutter, Joel Veness | https://proceedings.mlr.press/v202/genewein23a.html | ICML 2023 | Memory-based meta-learning is a technique for approximating Bayes-optimal predictors. Under fairly general conditions, minimizing sequential prediction error, measured by the log loss, leads to implicit meta-learning. The goal of this work is to investigate how far this interpretation can be realized by current sequence prediction models and training regimes. The focus is on piecewise stationary sources with unobserved switching-points, which arguably capture an important characteristic of natural language and action-observation sequences in partially observable environments. We show that various types of memory-based neural models, including Transformers, LSTMs, and RNNs can learn to accurately approximate known Bayes-optimal algorithms and behave as if performing Bayesian inference over the latent switching-points and the latent parameters governing the data distribution within each segment. |
https://proceedings.mlr.press/v202/geng23a.html | https://proceedings.mlr.press/v202/geng23a/geng23a.pdf | https://openreview.net/forum?id=gZXFNUcnHd | Towards Reliable Neural Specifications | https://proceedings.mlr.press/v202/geng23a.html | Chuqin Geng, Nham Le, Xiaojie Xu, Zhaoyue Wang, Arie Gurfinkel, Xujie Si | https://proceedings.mlr.press/v202/geng23a.html | ICML 2023 | Having reliable specifications is an unavoidable challenge in achieving verifiable correctness, robustness, and interpretability of AI systems. Existing specifications for neural networks are in the paradigm of data as specification. That is, the local neighborhood centering around a reference input is considered to be correct (or robust). While existing specifications contribute to verifying adversarial robustness, a significant problem in many research domains, our empirical study shows that those verified regions are somewhat tight, and thus fail to allow verification of test set inputs, making them impractical for some real-world applications. To this end, we propose a new family of specifications called neural representation as specification. This form of specifications uses the intrinsic information of neural networks, specifically neural activation patterns (NAPs), rather than input data to specify the correctness and/or robustness of neural network predictions. We present a simple statistical approach to mining neural activation patterns. To show the effectiveness of discovered NAPs, we formally verify several important properties, such as various types of misclassifications will never happen for a given NAP, and there is no ambiguity between different NAPs. We show that by using NAP, we can verify a significant region of the input space, while still recalling 84% of the data on MNIST. Moreover, we can push the verifiable bound to 10 times larger on the CIFAR10 benchmark. Thus, we argue that NAPs can potentially be used as a more reliable and extensible specification for neural network verification. |
https://proceedings.mlr.press/v202/gerstgrasser23a.html | https://proceedings.mlr.press/v202/gerstgrasser23a/gerstgrasser23a.pdf | https://openreview.net/forum?id=IJffiJTLhI | Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning | https://proceedings.mlr.press/v202/gerstgrasser23a.html | Matthias Gerstgrasser, David C. Parkes | https://proceedings.mlr.press/v202/gerstgrasser23a.html | ICML 2023 | Stackelberg equilibria arise naturally in a range of popular learning problems, such as in security games or indirect mechanism design, and have received increasing attention in the reinforcement learning literature. We present a general framework for implementing Stackelberg equilibria search as a multi-agent RL problem, allowing a wide range of algorithmic design choices. We discuss how previous approaches can be seen as specific instantiations of this framework. As a key insight, we note that the design space allows for approaches not previously seen in the literature, for instance by leveraging multitask and meta-RL techniques for follower convergence. We propose one such approach using contextual policies, and evaluate it experimentally on both standard and novel benchmark domains, showing greatly improved sample efficiency compared to previous approaches. Finally, we explore the effect of adopting algorithm designs outside the borders of our framework. |
https://proceedings.mlr.press/v202/ghadiri23a.html | https://proceedings.mlr.press/v202/ghadiri23a/ghadiri23a.pdf | https://openreview.net/forum?id=XjTcC4EA4P | Approximately Optimal Core Shapes for Tensor Decompositions | https://proceedings.mlr.press/v202/ghadiri23a.html | Mehrdad Ghadiri, Matthew Fahrbach, Gang Fu, Vahab Mirrokni | https://proceedings.mlr.press/v202/ghadiri23a.html | ICML 2023 | This work studies the combinatorial optimization problem of finding an optimal core tensor shape, also called multilinear rank, for a size-constrained Tucker decomposition. We give an algorithm with provable approximation guarantees for its reconstruction error via connections to higher-order singular values. Specifically, we introduce a novel Tucker packing problem, which we prove is NP-hard, and give a polynomial-time approximation scheme based on a reduction to the 2-dimensional knapsack problem with a matroid constraint. We also generalize our techniques to tree tensor network decompositions. We implement our algorithm using an integer programming solver, and show that its solution quality is competitive with (and sometimes better than) the greedy algorithm that uses the true Tucker decomposition loss at each step, while also running up to 1000x faster. |
https://proceedings.mlr.press/v202/ghamizi23a.html | https://proceedings.mlr.press/v202/ghamizi23a/ghamizi23a.pdf | https://openreview.net/forum?id=320btOVW8R | GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks | https://proceedings.mlr.press/v202/ghamizi23a.html | Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon | https://proceedings.mlr.press/v202/ghamizi23a.html | ICML 2023 | While leveraging additional training data is well established to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computation to train models. To mitigate the costs, we propose *Guided Adversarial Training* (GAT), a novel adversarial training technique that exploits auxiliary tasks under a limited set of training data. Our approach extends single-task models into multi-task models during the min-max optimization of adversarial training, and drives the loss optimization with a regularization of the gradient curvature across multiple tasks. GAT leverages two types of auxiliary tasks: self-supervised tasks, where the labels are generated automatically, and domain-knowledge tasks, where human experts provide additional labels. Experimentally, under limited data, GAT increases the robust accuracy on CIFAR-10 up to four times (from 11% to 42% robust accuracy) and the robust AUC of CheXpert medical imaging dataset from 50% to 83%. On the full CIFAR-10 dataset, GAT outperforms eight state-of-the-art adversarial training strategies. Our large study across five datasets and six tasks demonstrates that task augmentation is an efficient alternative to data augmentation, and can be key to achieving both clean and robust performances. |
https://proceedings.mlr.press/v202/ghazi23a.html | https://proceedings.mlr.press/v202/ghazi23a/ghazi23a.pdf | https://openreview.net/forum?id=KfkSyUJyqg | On User-Level Private Convex Optimization | https://proceedings.mlr.press/v202/ghazi23a.html | Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Raghu Meka, Chiyuan Zhang | https://proceedings.mlr.press/v202/ghazi23a.html | ICML 2023 | We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees. The convergence rates of this mechanism are similar to those in the prior work of Levy et al. 2021 and Narayanan et al. 2022, but with two important improvements. Our mechanism does not require any smoothness assumptions on the loss. Furthermore, our bounds are also the first where the minimum number of users needed for user-level privacy has no dependence on the dimension and only a logarithmic dependence on the desired excess error. The main idea underlying the new mechanism is to show that the optimizers of strongly convex losses have low local deletion sensitivity, along with a new output perturbation method for functions with low local deletion sensitivity, which could be of independent interest. |
https://proceedings.mlr.press/v202/ghosal23a.html | https://proceedings.mlr.press/v202/ghosal23a/ghosal23a.pdf | https://openreview.net/forum?id=s1hrcLUcld | Contextual Reliability: When Different Features Matter in Different Contexts | https://proceedings.mlr.press/v202/ghosal23a.html | Gaurav Rohit Ghosal, Amrith Setlur, Daniel S. Brown, Anca Dragan, Aditi Raghunathan | https://proceedings.mlr.press/v202/ghosal23a.html | ICML 2023 | Deep neural networks often fail catastrophically by relying on spurious correlations. Most prior work assumes a clear dichotomy into spurious and reliable features; however, this is often unrealistic. For example, most of the time we do not want an autonomous car to simply copy the speed of surrounding cars—we don’t want our car to run a red light if a neighboring car does so. However, we cannot simply enforce invariance to next-lane speed, since it could provide valuable information about an unobservable pedestrian at a crosswalk. Thus, universally ignoring features that are sometimes (but not always) reliable can lead to non-robust performance. We formalize a new setting called contextual reliability which accounts for the fact that the "right" features to use may vary depending on the context. We propose and analyze a two-stage framework called Explicit Non-spurious feature Prediction (ENP) which first identifies the relevant features to use for a given context, then trains a model to rely exclusively on these features. Our work theoretically and empirically demonstrates the advantages of ENP over existing methods and provides new benchmarks for contextual reliability. |
https://proceedings.mlr.press/v202/ghosh23a.html | https://proceedings.mlr.press/v202/ghosh23a/ghosh23a.pdf | https://openreview.net/forum?id=Ovu1horBiZ | Reinforcement Learning from Passive Data via Latent Intentions | https://proceedings.mlr.press/v202/ghosh23a.html | Dibya Ghosh, Chethan Anand Bhateja, Sergey Levine | https://proceedings.mlr.press/v202/ghosh23a.html | ICML 2023 | Passive observational data, such as human videos, is abundant and rich in information, yet remains largely untapped by current RL methods. Perhaps surprisingly, we show that passive data, despite not having reward or action labels, can still be used to learn features that accelerate downstream RL. Our approach learns from passive data by modeling intentions: measuring how the likelihood of future outcomes change when the agent acts to achieve a particular task. We propose a temporal difference learning objective to learn about intentions, resulting in an algorithm similar to conventional RL, but which learns entirely from passive data. When optimizing this objective, our agent simultaneously learns representations of states, of policies, and of possible outcomes in an environment, all from raw observational data. Both theoretically and empirically, this scheme learns features amenable for value prediction for downstream tasks, and our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos. |
https://proceedings.mlr.press/v202/ghosh23b.html | https://proceedings.mlr.press/v202/ghosh23b/ghosh23b.pdf | https://openreview.net/forum?id=qI0l2VKp7N | Harmonic Neural Networks | https://proceedings.mlr.press/v202/ghosh23b.html | Atiyo Ghosh, Antonio Andrea Gentile, Mario Dagrada, Chul Lee, Seong-Hyok Sean Kim, Hyukgeun Cha, Yunjun Choi, Dongho Kim, Jeong-Il Kye, Vincent Emanuel Elfving | https://proceedings.mlr.press/v202/ghosh23b.html | ICML 2023 | Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell’s, Navier-Stokes equations, the heat and the wave equation. Consequently, there are many applications of harmonic functions from industrial process optimisation to robotic path planning and the calculation of first exit times of random walks. Despite their ubiquity and relevance, there have been few attempts to incorporate inductive biases towards harmonic functions in machine learning contexts. In this work, we demonstrate effective means of representing harmonic functions in neural networks and extend such results also to quantum neural networks to demonstrate the generality of our approach. We benchmark our approaches against (quantum) physics-informed neural networks, where we show favourable performance. |
https://proceedings.mlr.press/v202/ghosh23c.html | https://proceedings.mlr.press/v202/ghosh23c/ghosh23c.pdf | https://openreview.net/forum?id=0SgBUsL4W0 | Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat | https://proceedings.mlr.press/v202/ghosh23c.html | Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich | https://proceedings.mlr.press/v202/ghosh23c.html | ICML 2023 | ML model design either starts with an interpretable model or a Blackbox and explains it post hoc. Blackbox models are flexible but difficult to explain, while interpretable models are inherently explainable. Yet, interpretable models require extensive ML knowledge and tend to be less flexible, potentially underperforming their Blackbox equivalents. This paper aims to blur the distinction between a post hoc explanation of a Blackbox and constructing interpretable models. Beginning with a Blackbox, we iteratively carve out a mixture of interpretable models and a residual network. The interpretable models identify a subset of samples and explain them using First Order Logic (FOL), providing basic reasoning on concepts from the Blackbox. We route the remaining samples through a flexible residual. We repeat the method on the residual network until all the interpretable models explain the desired proportion of data. Our extensive experiments show that our route, interpret, and repeat approach (1) identifies a richer diverse set of instance-specific concepts with high concept completeness via interpretable models by specializing in various subsets of data without compromising performance, (2) identifies the relatively “harder” samples to explain via residuals, (3) outperforms the interpretable-by-design models by significant margins during test-time interventions, (4) can be used to fix the shortcut learned by the original Blackbox. |
https://proceedings.mlr.press/v202/giannou23a.html | https://proceedings.mlr.press/v202/giannou23a/giannou23a.pdf | https://openreview.net/forum?id=fiHVIUkulb | Looped Transformers as Programmable Computers | https://proceedings.mlr.press/v202/giannou23a.html | Angeliki Giannou, Shashank Rajput, Jy-Yong Sohn, Kangwook Lee, Jason D. Lee, Dimitris Papailiopoulos | https://proceedings.mlr.press/v202/giannou23a.html | ICML 2023 | We present a framework for using transformer networks as universal computers by programming them with specific weights and placing them in a loop. Our input sequence acts as a punchcard, consisting of instructions and memory for data read/writes. We demonstrate that a constant number of encoder layers can emulate basic computing blocks, including lexicographic operations, non-linear functions, function calls, program counters, and conditional branches. Using this framework, we emulate a computer using a simple instruction-set architecture, which allows us to map iterative algorithms to programs that can be executed by a constant depth looped transformer network. We show how a single frozen transformer, instructed by its input, can emulate a basic calculator, a basic linear algebra library, and even a full backpropagation, in-context learning algorithm. Our findings reveal the potential of transformer networks as programmable compute units and offer insight into the mechanics of attention. |
https://proceedings.mlr.press/v202/giuliani23a.html | https://proceedings.mlr.press/v202/giuliani23a/giuliani23a.pdf | https://openreview.net/forum?id=IP5OpHHpgV | Generalized Disparate Impact for Configurable Fairness Solutions in ML | https://proceedings.mlr.press/v202/giuliani23a.html | Luca Giuliani, Eleonora Misino, Michele Lombardi | https://proceedings.mlr.press/v202/giuliani23a.html | ICML 2023 | We make two contributions in the field of AI fairness over continuous protected attributes. First, we show that the Hirschfeld-Gebelein-Renyi (HGR) indicator (the only one currently available for such a case) is valuable but subject to a few crucial limitations regarding semantics, interpretability, and robustness. Second, we introduce a family of indicators that are: 1) complementary to HGR in terms of semantics; 2) fully interpretable and transparent; 3) robust over finite samples; 4) configurable to suit specific applications. Our approach also allows us to define fine-grained constraints to permit certain types of dependence and forbid others selectively. By expanding the available options for continuous protected attributes, our approach represents a significant contribution to the area of fair artificial intelligence. |
https://proceedings.mlr.press/v202/globus-harris23a.html | https://proceedings.mlr.press/v202/globus-harris23a/globus-harris23a.pdf | https://openreview.net/forum?id=RrusCGfAZ1 | Multicalibration as Boosting for Regression | https://proceedings.mlr.press/v202/globus-harris23a.html | Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, Jessica Sorrell | https://proceedings.mlr.press/v202/globus-harris23a.html | ICML 2023 | We study the connection between multicalibration and boosting for squared error regression. First we prove a useful characterization of multicalibration in terms of a “swap regret” like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class $\mathcal{H}$ that makes use only of a standard squared error regression oracle for $\mathcal{H}$. We give a weak learning assumption on $\mathcal{H}$ that ensures convergence to Bayes optimality without the need to make any realizability assumptions — giving us an agnostic boosting algorithm for regression. We then show that our weak learning assumption on $\mathcal{H}$ is both necessary and sufficient for multicalibration with respect to $\mathcal{H}$ to imply Bayes optimality, answering an open question. We also show that if $\mathcal{H}$ satisfies our weak learning condition relative to another class $\mathcal{C}$ then multicalibration with respect to $\mathcal{H}$ implies multicalibration with respect to $\mathcal{C}$. Finally we investigate the empirical performance of our algorithm experimentally. |
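To illustrate the boosting-for-regression view described above, the sketch below runs a generic residual-fitting loop that only queries a squared-error regression oracle (here a shallow tree from scikit-learn). It is a plain residual-boosting baseline, not the paper's exact multicalibration algorithm, weak-learning condition, or stopping rule.

```python
# Hedged sketch: boosting for squared-error regression via a regression oracle.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_regression(X, y, rounds=50, step=0.5,
                     oracle=lambda: DecisionTreeRegressor(max_depth=2)):
    pred = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(rounds):
        h = oracle().fit(X, y - pred)      # squared-error oracle fit on current residuals
        pred += step * h.predict(X)        # additive update of the predictor
        learners.append(h)
    return learners
```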
https://proceedings.mlr.press/v202/gloeckler23a.html | https://proceedings.mlr.press/v202/gloeckler23a/gloeckler23a.pdf | https://openreview.net/forum?id=O7t2ZqUk7y | Adversarial robustness of amortized Bayesian inference | https://proceedings.mlr.press/v202/gloeckler23a.html | Manuel Gloeckler, Michael Deistler, Jakob H. Macke | https://proceedings.mlr.press/v202/gloeckler23a.html | ICML 2023 | Bayesian inference usually requires running potentially costly inference procedures separately for every new observation. In contrast, the idea of amortized Bayesian inference is to initially invest computational cost in training an inference network on simulated data, which can subsequently be used to rapidly perform inference (i.e., to return estimates of posterior distributions) for new observations. This approach has been applied to many real-world models in the sciences and engineering, but it is unclear how robust the approach is to adversarial perturbations in the observed data. Here, we study the adversarial robustness of amortized Bayesian inference, focusing on simulation-based estimation of multi-dimensional posterior distributions. We show that almost unrecognizable, targeted perturbations of the observations can lead to drastic changes in the predicted posterior and highly unrealistic posterior predictive samples, across several benchmark tasks and a real-world example from neuroscience. We propose a computationally efficient regularization scheme based on penalizing the Fisher information of the conditional density estimator, and show how it improves the adversarial robustness of amortized Bayesian inference. |
https://proceedings.mlr.press/v202/gmelin23a.html | https://proceedings.mlr.press/v202/gmelin23a/gmelin23a.pdf | https://openreview.net/forum?id=kWS8mpioS9 | Efficient RL via Disentangled Environment and Agent Representations | https://proceedings.mlr.press/v202/gmelin23a.html | Kevin Gmelin, Shikhar Bahl, Russell Mendonca, Deepak Pathak | https://proceedings.mlr.press/v202/gmelin23a.html | ICML 2023 | Agents that are aware of the separation between the environments and themselves can leverage this understanding to form effective representations of visual input. We propose an approach for learning such structured representations for RL algorithms, using visual knowledge of the agent, which is often inexpensive to obtain, such as its shape or mask. This is incorporated into the RL objective using a simple auxiliary loss. We show that our method, SEAR (Structured Environment-Agent Representations), outperforms state-of-the-art model-free approaches over 18 different challenging visual simulation environments spanning 5 different robots. |
https://proceedings.mlr.press/v202/go23a.html | https://proceedings.mlr.press/v202/go23a/go23a.pdf | https://openreview.net/forum?id=ttga7UlrsE | Aligning Language Models with Preferences through $f$-divergence Minimization | https://proceedings.mlr.press/v202/go23a.html | Dongyoung Go, Tomasz Korbak, Germàn Kruszewski, Jos Rozen, Nahyeon Ryu, Marc Dymetman | https://proceedings.mlr.press/v202/go23a.html | ICML 2023 | Aligning language models with preferences can be posed as approximating a target distribution representing some desired behavior. Existing approaches differ both in the functional form of the target distribution and the algorithm used to approximate it. For instance, Reinforcement Learning from Human Feedback (RLHF) corresponds to minimizing a reverse KL from an implicit target distribution arising from a KL penalty in the objective. On the other hand, Generative Distributional Control (GDC) has an explicit target distribution and minimizes a forward KL from it using the Distributional Policy Gradient (DPG) algorithm. In this paper, we propose a new approach, $f$-DPG, which allows the use of any $f$-divergence to approximate any target distribution that can be evaluated. $f$-DPG unifies both frameworks (RLHF, GDC) and the approximation methods (DPG, RL with KL penalties). We show the practical benefits of various choices of divergence objectives and demonstrate that there is no universally optimal objective but that different divergences present different alignment and diversity trade-offs. We show that Jensen-Shannon divergence strikes a good balance between these objectives, and frequently outperforms forward KL divergence by a wide margin, leading to significant improvements over prior work. These distinguishing characteristics between divergences persist as the model size increases, highlighting the importance of selecting appropriate divergence objectives. |
https://proceedings.mlr.press/v202/goibert23a.html | https://proceedings.mlr.press/v202/goibert23a/goibert23a.pdf | https://openreview.net/forum?id=aIEL5ht9Sx | Robust Consensus in Ranking Data Analysis: Definitions, Properties and Computational Issues | https://proceedings.mlr.press/v202/goibert23a.html | Morgane Goibert, Clément Calauzènes, Ekhine Irurozki, Stephan Clémençon | https://proceedings.mlr.press/v202/goibert23a.html | ICML 2023 | As the issue of robustness in AI systems becomes vital, statistical learning techniques that are reliable even in the presence of partly contaminated data have to be developed. Preference data, in the form of (complete) rankings in the simplest situations, are no exception and the demand for appropriate concepts and tools is all the more pressing given that technologies fed by or producing this type of data ($\textit{e.g.}$, search engines, recommender systems) are now massively deployed. However, the lack of vector space structure for the set of rankings ($\textit{i.e.}$ the symmetric group $\mathfrak{S}_n$) and the complex nature of statistics considered in ranking data analysis make the formulation of robustness objectives in this domain challenging. In this paper, we introduce notions of robustness, together with dedicated statistical methods, for $\textit{Consensus Ranking}$, the flagship problem in ranking data analysis, which aims at summarizing a probability distribution on $\mathfrak{S}_n$ by a $\textit{median}$ ranking. Precisely, we propose specific extensions of the popular concept of breakdown point, tailored to consensus ranking, and address the related computational issues. Beyond the theoretical contributions, the relevance of the approach proposed is supported by an experimental study. |
https://proceedings.mlr.press/v202/gong23a.html | https://proceedings.mlr.press/v202/gong23a/gong23a.pdf | https://openreview.net/forum?id=hdGyjAnqZG | Learning Distributions over Quantum Measurement Outcomes | https://proceedings.mlr.press/v202/gong23a.html | Weiyuan Gong, Scott Aaronson | https://proceedings.mlr.press/v202/gong23a.html | ICML 2023 | Shadow tomography for quantum states provides a sample efficient approach for predicting the measurement outcomes of quantum systems. However, these shadow tomography procedures yield poor bounds if there are more than two outcomes per measurement. In this paper, we consider a general problem of learning properties from quantum states: given an unknown $d$-dimensional quantum state $\rho$ and $M$ unknown quantum measurements $\mathcal{M}_1,...,\mathcal{M}_M$ with $K\geq 2$ outcomes, estimating the probability distribution for applying $\mathcal{M}_i$ on $\rho$ to within total variation distance $\epsilon$. Compared to the special case when $K=2$, we have to learn unknown distributions instead of values. Here, we propose an online shadow tomography procedure that solves this problem with high success probability requiring $\tilde{O}(K\log^2M\log d/\epsilon^4)$ copies of $\rho$. We further prove an information-theoretic lower bound showing that at least $\Omega(\min\{d^2,K+\log M\}/\epsilon^2)$ copies of $\rho$ are required to solve this problem with high success probability. Our shadow tomography procedure requires sample complexity with only logarithmic dependence on $M$ and $d$ and is sample-optimal concerning the dependence on $K$. |
https://proceedings.mlr.press/v202/gorbunov23a.html | https://proceedings.mlr.press/v202/gorbunov23a/gorbunov23a.pdf | https://openreview.net/forum?id=dvu47LPkEV | Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity | https://proceedings.mlr.press/v202/gorbunov23a.html | Eduard Gorbunov, Adrien Taylor, Samuel Horváth, Gauthier Gidel | https://proceedings.mlr.press/v202/gorbunov23a.html | ICML 2023 | Algorithms for min-max optimization and variational inequalities are often studied under monotonicity assumptions. Motivated by non-monotone machine learning applications, we follow the line of works (Diakonikolas et al., 2021; Lee & Kim, 2021; Pethick et al., 2022; Bohm, 2022) aiming at going beyond monotonicity by considering the weaker negative comonotonicity assumption. In this work, we provide tight complexity analyses for the Proximal Point (PP), Extragradient (EG), and Optimistic Gradient (OG) methods in this setup, closing several questions on their working guarantees beyond monotonicity. In particular, we derive the first non-asymptotic convergence rates for PP under negative comonotonicity and star-negative comonotonicity and show their tightness via constructing worst-case examples; we also relax the assumptions for the last-iterate convergence guarantees for EG and OG and prove the tightness of the existing best-iterate guarantees for EG and OG via constructing counter-examples. |
https://proceedings.mlr.press/v202/goshtasbpour23a.html | https://proceedings.mlr.press/v202/goshtasbpour23a/goshtasbpour23a.pdf | https://openreview.net/forum?id=x0AppdesIM | Adaptive Annealed Importance Sampling with Constant Rate Progress | https://proceedings.mlr.press/v202/goshtasbpour23a.html | Shirin Goshtasbpour, Victor Cohen, Fernando Perez-Cruz | https://proceedings.mlr.press/v202/goshtasbpour23a.html | ICML 2023 | Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution given its unnormalized density function. This algorithm relies on a sequence of interpolating distributions bridging the target to an initial tractable distribution such as the well-known geometric mean path of unnormalized distributions which is assumed to be suboptimal in general. In this paper, we prove that the geometric annealing corresponds to the distribution path that minimizes the KL divergence between the current particle distribution and the desired target when the feasible change in the particle distribution is constrained. Following this observation, we derive the constant rate discretization schedule for this annealing sequence, which adjusts the schedule to the difficulty of moving samples between the initial and the target distributions. We further extend our results to $f$-divergences and present the respective dynamics of annealing sequences based on which we propose the Constant Rate AIS (CR-AIS) algorithm and its efficient implementation for $\alpha$-divergences. We empirically show that CR-AIS performs well on multiple benchmark distributions while avoiding the computationally expensive tuning loop in existing Adaptive AIS. |
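For context, here is a hedged sketch of vanilla annealed importance sampling along the geometric path that the abstract above analyzes and that CR-AIS rediscretizes: incremental weights between consecutive annealed densities, plus one MCMC move per level. The random-walk Metropolis kernel and step size are choices made for brevity, not the paper's setup.

```python
# Hedged sketch: AIS with the geometric path between log_p0 (tractable initial)
# and log_p1 (unnormalized target), for 1-D particle arrays.
import numpy as np

def ais_log_weights(x, log_p0, log_p1, betas, step=0.5, rng=np.random.default_rng(0)):
    """x: 1-D float array of particles drawn from the initial distribution;
    betas: increasing schedule from 0 to 1."""
    logw = np.zeros_like(x, dtype=float)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Incremental importance weight: log gamma_b(x) - log gamma_{b_prev}(x).
        logw += (b - b_prev) * (log_p1(x) - log_p0(x))
        # One random-walk Metropolis step targeting the current annealed density.
        log_gamma = lambda z: (1 - b) * log_p0(z) + b * log_p1(z)
        prop = x + step * rng.standard_normal(x.shape)
        accept = np.log(rng.random(x.shape)) < log_gamma(prop) - log_gamma(x)
        x = np.where(accept, prop, x)
    return x, logw
```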
https://proceedings.mlr.press/v202/graham23a.html | https://proceedings.mlr.press/v202/graham23a/graham23a.pdf | https://openreview.net/forum?id=HWhaVJA2eb | Formalizing Preferences Over Runtime Distributions | https://proceedings.mlr.press/v202/graham23a.html | Devon R. Graham, Kevin Leyton-Brown, Tim Roughgarden | https://proceedings.mlr.press/v202/graham23a.html | ICML 2023 | When trying to solve a computational problem, we are often faced with a choice between algorithms that are guaranteed to return the right answer but differ in their runtime distributions (e.g., SAT solvers, sorting algorithms). This paper aims to lay theoretical foundations for such choices by formalizing preferences over runtime distributions. It might seem that we should simply prefer the algorithm that minimizes expected runtime. However, such preferences would be driven by exactly how slow our algorithm is on bad inputs, whereas in practice we are typically willing to cut off occasional, sufficiently long runs before they finish. We propose a principled alternative, taking a utility-theoretic approach to characterize the scoring functions that describe preferences over algorithms. These functions depend on the way our value for solving our problem decreases with time and on the distribution from which captimes are drawn. We describe examples of realistic utility functions and show how to leverage a maximum-entropy approach for modeling underspecified captime distributions. Finally, we show how to efficiently estimate an algorithm’s expected utility from runtime samples. |
https://proceedings.mlr.press/v202/grande23a.html | https://proceedings.mlr.press/v202/grande23a/grande23a.pdf | https://openreview.net/forum?id=q2L5r7WEHT | Topological Point Cloud Clustering | https://proceedings.mlr.press/v202/grande23a.html | Vincent Peter Grande, Michael T Schaub | https://proceedings.mlr.press/v202/grande23a.html | ICML 2023 | We present Topological Point Cloud Clustering (TPCC), a new method to cluster points in an arbitrary point cloud based on their contribution to global topological features. TPCC synthesizes desirable features from spectral clustering and topological data analysis and is based on considering the spectral properties of a simplicial complex associated to the considered point cloud. As it is based on considering sparse eigenvector computations, TPCC is similarly easy to interpret and implement as spectral clustering. However, by focusing not just on a single matrix associated to a graph created from the point cloud data, but on a whole set of Hodge-Laplacians associated to an appropriately constructed simplicial complex, we can leverage a far richer set of topological features to characterize the data points within the point cloud and benefit from the relative robustness of topological techniques against noise. We test the performance of TPCC on both synthetic and real-world data and compare it with classical spectral clustering. |
https://proceedings.mlr.press/v202/grenioux23a.html | https://proceedings.mlr.press/v202/grenioux23a/grenioux23a.pdf | https://openreview.net/forum?id=NfH2HRL8u6 | On Sampling with Approximate Transport Maps | https://proceedings.mlr.press/v202/grenioux23a.html | Louis Grenioux, Alain Oliviero Durmus, Eric Moulines, Marylou Gabrié | https://proceedings.mlr.press/v202/grenioux23a.html | ICML 2023 | Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle. The potential of this approach has risen with the development of Normalizing Flows (NF) which are maps parameterized with deep neural networks trained to push a reference distribution towards a target. NF-enhanced samplers recently proposed blend (Markov chain) Monte Carlo methods with either (i) proposal draws from the flow or (ii) a flow-based reparametrization. In both cases, the quality of the learned transport conditions performance. The present work clarifies for the first time the relative strengths and weaknesses of these two approaches. Our study concludes that multimodal targets can be reliably handled with flow-based proposals up to moderately high dimensions. In contrast, methods relying on reparametrization struggle with multimodality but are more robust otherwise in high-dimensional settings and under poor training. To further illustrate the influence of target-proposal adequacy, we also derive a new quantitative bound for the mixing time of the Independent Metropolis-Hastings sampler. |
https://proceedings.mlr.press/v202/grigsby23a.html | https://proceedings.mlr.press/v202/grigsby23a/grigsby23a.pdf | https://openreview.net/forum?id=rGL49h4x9h | Hidden Symmetries of ReLU Networks | https://proceedings.mlr.press/v202/grigsby23a.html | Elisenda Grigsby, Kathryn Lindsey, David Rolnick | https://proceedings.mlr.press/v202/grigsby23a.html | ICML 2023 | The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions - but how faithful is this representation? It is known that many different parameter settings $\theta$ can determine the same function $f$. Moreover, the degree of this redundancy is inhomogeneous: for some networks, the only symmetries are permutation of neurons in a layer and positive scaling of parameters at a neuron, while other networks admit additional hidden symmetries. In this work, we prove that, for any network architecture where no layer is narrower than the input, there exist parameter settings with no hidden symmetries. We also describe a number of mechanisms through which hidden symmetries can arise, and empirically approximate the functional dimension of different network architectures at initialization. These experiments indicate that the probability that a network has no hidden symmetries decreases towards 0 as depth increases, while increasing towards 1 as width and input dimension increase. |
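The two "non-hidden" symmetries mentioned above are easy to check numerically: for a two-layer ReLU network, permuting hidden neurons, or rescaling a neuron's incoming weights and bias by $c>0$ while dividing its outgoing weights by $c$, leaves the computed function unchanged. The snippet below is purely illustrative and verifies this on random weights.

```python
# Illustrative check of permutation and positive-scaling symmetries of a ReLU net.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def f(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

x = rng.standard_normal(3)
perm = rng.permutation(8)                  # permute hidden neurons
scale = rng.uniform(0.5, 2.0, size=8)      # positive per-neuron rescaling
W1s = scale[:, None] * W1[perm]            # scale incoming weights of permuted neurons
b1s = scale * b1[perm]
W2s = W2[:, perm] / scale                  # compensate on the outgoing weights
assert np.allclose(f(x, W1, b1, W2, b2), f(x, W1s, b1s, W2s, b2))
```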
https://proceedings.mlr.press/v202/gruntkowska23a.html | https://proceedings.mlr.press/v202/gruntkowska23a/gruntkowska23a.pdf | https://openreview.net/forum?id=kdkkLwyJe1 | EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression | https://proceedings.mlr.press/v202/gruntkowska23a.html | Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik | https://proceedings.mlr.press/v202/gruntkowska23a.html | ICML 2023 | In this work we focus our attention on distributed optimization problems in the context where the communication time between the server and the workers is non-negligible. We obtain novel methods supporting bidirectional compression (both from the server to the workers and vice versa) that enjoy new state-of-the-art theoretical communication complexity for convex and nonconvex problems. Our bounds are the first that manage to decouple the variance/error coming from the workers-to-server and server-to-workers compression, transforming a multiplicative dependence to an additive one. Moreover, in the convex regime, we obtain the first bounds that match the theoretical communication complexity of gradient descent. Even in this convex regime, our algorithms work with biased gradient estimators, which is non-standard and requires new proof techniques that may be of independent interest. Finally, our theoretical results are corroborated through suitable experiments. |
https://proceedings.mlr.press/v202/gu23a.html | https://proceedings.mlr.press/v202/gu23a/gu23a.pdf | https://openreview.net/forum?id=cZZfXm6wZm | NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion | https://proceedings.mlr.press/v202/gu23a.html | Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M. Susskind, Christian Theobalt, Lingjie Liu, Ravi Ramamoorthi | https://proceedings.mlr.press/v202/gu23a.html | ICML 2023 | Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input. Existing approaches condition neural radiance fields (NeRF) on local image features, projecting points to the input image plane, and aggregating 2D features to perform volume rendering. However, under severe occlusion, this projection fails to resolve uncertainty, resulting in blurry renderings that lack details. In this work, we propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test-time. We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views. Our approach significantly outperforms existing NeRF-based and geometry-free approaches on challenging datasets including ShapeNet, ABO, and Clevr3D. |
https://proceedings.mlr.press/v202/guan23a.html | https://proceedings.mlr.press/v202/guan23a/guan23a.pdf | https://openreview.net/forum?id=9qy9DizMlr | DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design | https://proceedings.mlr.press/v202/guan23a.html | Jiaqi Guan, Xiangxin Zhou, Yuwei Yang, Yu Bao, Jian Peng, Jianzhu Ma, Qiang Liu, Liang Wang, Quanquan Gu | https://proceedings.mlr.press/v202/guan23a.html | ICML 2023 | Designing 3D ligands within a target binding site is a fundamental task in drug discovery. Existing structure-based drug design methods treat all ligand atoms equally, which ignores the different roles of atoms in the ligand for drug design and can be less efficient for exploring the large drug-like molecule space. In this paper, inspired by the convention in pharmaceutical practice, we decompose the ligand molecule into two parts, namely arms and scaffold, and propose a new diffusion model, DecompDiff, with decomposed priors over arms and scaffold. In order to facilitate the decomposed generation and improve the properties of the generated molecules, we incorporate both bond diffusion in the model and additional validity guidance in the sampling phase. Extensive experiments on CrossDocked2020 show that our approach achieves state-of-the-art performance in generating high-affinity molecules while maintaining proper molecular properties and conformational stability, with up to $-8.39$ Avg. Vina Dock score and $24.5\%$ Success Rate. The code is provided at https://github.com/bytedance/DecompDiff |
https://proceedings.mlr.press/v202/guha23a.html | https://proceedings.mlr.press/v202/guha23a/guha23a.pdf | https://openreview.net/forum?id=X9enIC31dY | On Excess Mass Behavior in Gaussian Mixture Models with Orlicz-Wasserstein Distances | https://proceedings.mlr.press/v202/guha23a.html | Aritra Guha, Nhat Ho, Xuanlong Nguyen | https://proceedings.mlr.press/v202/guha23a.html | ICML 2023 | Dirichlet Process mixture models (DPMM) in combination with Gaussian kernels have been an important modeling tool for numerous data domains arising from biological, physical, and social sciences. However, this versatility in applications does not extend to strong theoretical guarantees for the underlying parameter estimates, for which only a logarithmic rate is achieved. In this work, we (re)introduce and investigate a metric, named Orlicz-Wasserstein distance, in the study of the Bayesian contraction behavior for the parameters. We show that despite the overall slow convergence guarantees for all the parameters, posterior contraction for parameters happens at almost polynomial rates in outlier regions of the parameter space. Our theoretical results provide new insight in understanding the convergence behavior of parameters arising from various settings of hierarchical Bayesian nonparametric models. In addition, we provide an algorithm to compute the metric by leveraging Sinkhorn divergences and validate our findings through a simulation study. |
https://proceedings.mlr.press/v202/guha23b.html | https://proceedings.mlr.press/v202/guha23b/guha23b.pdf | https://openreview.net/forum?id=u1fhtP15l5 | Conformalization of Sparse Generalized Linear Models | https://proceedings.mlr.press/v202/guha23b.html | Etash Kumar Guha, Eugene Ndiaye, Xiaoming Huo | https://proceedings.mlr.press/v202/guha23b.html | ICML 2023 | Given a sequence of observable variables $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, the conformal prediction method estimates a confidence set for $y_{n+1}$ given $x_{n+1}$ that is valid for any finite sample size by merely assuming that the joint distribution of the data is permutation invariant. Although attractive, computing such a set is computationally infeasible in most regression problems. Indeed, in these cases, the unknown variable $y_{n+1}$ can take an infinite number of possible candidate values, and generating conformal sets requires retraining a predictive model for each candidate. In this paper, we focus on a sparse linear model with only a subset of variables for prediction and use numerical continuation techniques to approximate the solution path efficiently. The critical property we exploit is that the set of selected variables is invariant under a small perturbation of the input data. Therefore, it is sufficient to enumerate and refit the model only at the change points of the set of active features and smoothly interpolate the rest of the solution via a Predictor-Corrector mechanism. We show how our path-following algorithm accurately approximates conformal prediction sets and illustrate its performance using synthetic and real data examples. |
https://proceedings.mlr.press/v202/guo23a.html | https://proceedings.mlr.press/v202/guo23a/guo23a.pdf | https://openreview.net/forum?id=Otdp5SGQMr | Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design | https://proceedings.mlr.press/v202/guo23a.html | Chuan Guo, Kamalika Chaudhuri, Pierre Stock, Michael Rabbat | https://proceedings.mlr.press/v202/guo23a.html | ICML 2023 | In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model. The main challenge in this setting is balancing privacy with both classification accuracy of the learnt model as well as the number of bits communicated between the clients and server. Prior work has achieved a good trade-off by designing a privacy-aware compression mechanism, called the minimum variance unbiased (MVU) mechanism, that numerically solves an optimization problem to determine the parameters of the mechanism. This paper builds upon it by introducing a new interpolation procedure in the numerical design process that allows for a far more efficient privacy analysis. The result is the new Interpolated MVU mechanism that is more scalable, has a better privacy-utility trade-off, and provides SOTA results on communication-efficient private FL on a variety of datasets. |
https://proceedings.mlr.press/v202/guo23b.html | https://proceedings.mlr.press/v202/guo23b/guo23b.pdf | https://openreview.net/forum?id=JC05k0E2EM | Out-of-Distribution Generalization of Federated Learning via Implicit Invariant Relationships | https://proceedings.mlr.press/v202/guo23b.html | Yaming Guo, Kai Guo, Xiaofeng Cao, Tieru Wu, Yi Chang | https://proceedings.mlr.press/v202/guo23b.html | ICML 2023 | Out-of-distribution generalization is challenging for non-participating clients of federated learning under distribution shifts. A proven strategy is to explore invariant relationships between input and target variables that work equally well for non-participating clients. However, invariant relationships are typically learned explicitly from data, representations, or distributions, which violates the federated principles of privacy preservation and limited communication. In this paper, we propose FedIIR, which implicitly learns invariant relationships from model parameters for out-of-distribution generalization, adhering to the above principles. Specifically, we utilize prediction disagreement to quantify invariant relationships and implicitly reduce it through inter-client gradient alignment. Theoretically, we characterize the range of non-participating clients to which FedIIR is expected to generalize and present convergence results for FedIIR in the massively distributed setting with limited communication. Extensive experiments show that FedIIR significantly outperforms relevant baselines in terms of out-of-distribution generalization of federated learning. |
https://proceedings.mlr.press/v202/guo23c.html | https://proceedings.mlr.press/v202/guo23c/guo23c.pdf | https://openreview.net/forum?id=C7fNCYdptO | FeDXL: Provable Federated Learning for Deep X-Risk Optimization | https://proceedings.mlr.press/v202/guo23c.html | Zhishuai Guo, Rong Jin, Jiebo Luo, Tianbao Yang | https://proceedings.mlr.press/v202/guo23c.html | ICML 2023 | In this paper, we tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing FL algorithms are applicable. In particular, the objective has the form of $\mathbb{E}_{\mathbf{z}\sim \mathcal{S}_1} f(\mathbb{E}_{\mathbf{z}'\sim\mathcal{S}_2} \ell(\mathbf{w}; \mathbf{z}, \mathbf{z}'))$, where two sets of data $\mathcal S_1, \mathcal S_2$ are distributed over multiple machines, and $\ell(\cdot; \cdot,\cdot)$ is a pairwise loss that only depends on the prediction outputs of the input data pairs $(\mathbf{z}, \mathbf{z}')$. This problem has important applications in machine learning, e.g., AUROC maximization with a pairwise loss, and partial AUROC maximization with a compositional loss. The challenges of designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines. To this end, we propose an active-passive decomposition framework that decouples the gradient’s components into two types, namely active parts and passive parts, where the active parts depend on local data that are computed with the local model and the passive parts depend on other machines that are communicated/computed based on historical models and samples. Under this framework, we design two FL algorithms (FeDXL) for handling linear and nonlinear $f$, respectively, based on federated averaging and merging, and develop a novel theoretical analysis to combat the latency of the passive parts and the interdependency between the local model parameters and the involved data for computing local gradient estimators. We establish both iteration and communication complexities and show that using the historical samples and models for computing the passive parts does not degrade the complexities. We conduct empirical studies of FeDXL for deep AUROC and partial AUROC maximization, and demonstrate their performance compared with several baselines. |
https://proceedings.mlr.press/v202/guo23d.html | https://proceedings.mlr.press/v202/guo23d/guo23d.pdf | https://openreview.net/forum?id=MI5YpKX84O | Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP | https://proceedings.mlr.press/v202/guo23d.html | Jiacheng Guo, Zihao Li, Huazheng Wang, Mengdi Wang, Zhuoran Yang, Xuezhou Zhang | https://proceedings.mlr.press/v202/guo23d.html | ICML 2023 | In this paper, we study representation learning in partially observable Markov Decision Processes (POMDPs), where the agent learns a decoder function that maps a series of high-dimensional raw observations to a compact representation and uses it for more efficient exploration and planning. We focus our attention on the sub-classes of $\gamma$-observable and decodable POMDPs, for which it has been shown that statistically tractable learning is possible, but there has not been any computationally efficient algorithm. We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU) to perform representation learning and achieve efficient sample complexity, while only calling supervised learning computational oracles. We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs. |
https://proceedings.mlr.press/v202/guo23e.html | https://proceedings.mlr.press/v202/guo23e/guo23e.pdf | https://openreview.net/forum?id=Fry8Yz5Ngl | Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano | https://proceedings.mlr.press/v202/guo23e.html | Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi | https://proceedings.mlr.press/v202/guo23e.html | ICML 2023 | Differential privacy (DP) is by far the most widely accepted framework for mitigating privacy risks in machine learning. However, exactly how small the privacy parameter $\epsilon$ needs to be to protect against certain privacy risks in practice is still not well-understood. In this work, we study data reconstruction attacks for discrete data and analyze them under the framework of multiple hypothesis testing. For a learning algorithm satisfying $(\alpha, \epsilon)$-Renyi DP, we utilize different variants of the celebrated Fano’s inequality to upper bound the attack advantage of a data reconstruction adversary. Our bound can be numerically computed to relate the parameter $\epsilon$ to the desired level of privacy protection in practice, and complements the empirical evidence for the effectiveness of DP against data reconstruction attacks even at relatively large values of $\epsilon$. |
https://proceedings.mlr.press/v202/guo23f.html | https://proceedings.mlr.press/v202/guo23f/guo23f.pdf | https://openreview.net/forum?id=TAwB7FsoJt | Linkless Link Prediction via Relational Distillation | https://proceedings.mlr.press/v202/guo23f.html | Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V Chawla, Neil Shah, Tong Zhao | https://proceedings.mlr.press/v202/guo23f.html | ICML 2023 | Graph Neural Networks (GNNs) have shown exceptional performance in the task of link prediction. Despite their effectiveness, the high latency brought by non-trivial neighborhood data dependency limits GNNs in practical deployments. Conversely, the known efficient MLPs are much less effective than GNNs due to the lack of relational knowledge. In this work, to combine the advantages of GNNs and MLPs, we start with exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted logit-based matching and node representation-based matching. Upon observing direct KD analogs do not perform well for link prediction, we propose a relational KD framework, Linkless Link Prediction (LLP), to distill knowledge for link prediction with MLPs. Unlike simple KD methods that match independent link logits or node representations, LLP distills relational knowledge that is centered around each (anchor) node to the student MLP. Specifically, we propose rank-based matching and distribution-based matching strategies that complement each other. Extensive experiments demonstrate that LLP boosts the link prediction performance of MLPs with significant margins and even outperforms the teacher GNNs on 7 out of 8 benchmarks. LLP also achieves a 70.68x speedup in link prediction inference compared to GNNs on the large-scale OGB dataset. |
https://proceedings.mlr.press/v202/guo23g.html | https://proceedings.mlr.press/v202/guo23g/guo23g.pdf | https://openreview.net/forum?id=nDKoVwNjMH | FedBR: Improving Federated Learning on Heterogeneous Data via Local Learning Bias Reduction | https://proceedings.mlr.press/v202/guo23g.html | Yongxin Guo, Xiaoying Tang, Tao Lin | https://proceedings.mlr.press/v202/guo23g.html | ICML 2023 | Federated Learning (FL) is a way for machines to learn from data that is kept locally, in order to protect the privacy of clients. This is typically done using local SGD, which helps to improve communication efficiency. However, such a scheme is currently constrained by slow and unstable convergence due to the variety of data on different clients’ devices. In this work, we identify three under-explored phenomena of biased local learning that may explain these challenges caused by local updates in supervised FL. As a remedy, we propose FedBR, a novel unified algorithm that reduces the local learning bias on features and classifiers to tackle these challenges. FedBR has two components. The first component helps to reduce bias in local classifiers by balancing the output of the models. The second component helps to learn local features that are similar to global features, but different from those learned from other data sources. We conducted several experiments to test FedBR and found that it consistently outperforms other SOTA FL methods. Both of its components also individually show performance gains. Our code is available at https://github.com/lins-lab/fedbr. |
https://proceedings.mlr.press/v202/guo23h.html | https://proceedings.mlr.press/v202/guo23h/guo23h.pdf | https://openreview.net/forum?id=G3vwtUqvrk | Hierarchical Grammar-Induced Geometry for Data-Efficient Molecular Property Prediction | https://proceedings.mlr.press/v202/guo23h.html | Minghao Guo, Veronika Thost, Samuel W Song, Adithya Balachandran, Payel Das, Jie Chen, Wojciech Matusik | https://proceedings.mlr.press/v202/guo23h.html | ICML 2023 | The prediction of molecular properties is a crucial task in the field of material and drug discovery. The potential benefits of using deep learning techniques are reflected in the wealth of recent literature. Still, these techniques are faced with a common challenge in practice: Labeled data are limited by the cost of manual extraction from literature and laborious experimentation. In this work, we propose a data-efficient property predictor by utilizing a learnable hierarchical molecular grammar that can generate molecules from grammar production rules. Such a grammar induces an explicit geometry of the space of molecular graphs, which provides an informative prior on molecular structural similarity. The property prediction is performed using graph neural diffusion over the grammar-induced geometry. On both small and large datasets, our evaluation shows that this approach outperforms a wide spectrum of baselines, including supervised and pre-trained graph neural networks. We include a detailed ablation study and further analysis of our solution, showing its effectiveness in cases with extremely limited data. |
https://proceedings.mlr.press/v202/guo23i.html | https://proceedings.mlr.press/v202/guo23i/guo23i.pdf | https://openreview.net/forum?id=UjQIoJv927 | Graph Neural Networks with Learnable and Optimal Polynomial Bases | https://proceedings.mlr.press/v202/guo23i.html | Yuhe Guo, Zhewei Wei | https://proceedings.mlr.press/v202/guo23i.html | ICML 2023 | Polynomial filters, a kind of Graph Neural Networks, typically use a predetermined polynomial basis and learn the coefficients from the training data. It has been observed that the effectiveness of the model is highly dependent on the property of the polynomial basis. Consequently, two natural and fundamental questions arise: Can we learn a suitable polynomial basis from the training data? Can we determine the optimal polynomial basis for a given graph and node features? In this paper, we propose two spectral GNN models that provide positive answers to the questions posed above. First, inspired by Favard’s Theorem, we propose the FavardGNN model, which learns a polynomial basis from the space of all possible orthonormal bases. Second, we examine the supposedly unsolvable definition of optimal polynomial basis from Wang et al. (2022) and propose a simple model, OptBasisGNN, which computes the optimal basis for a given graph structure and graph signal. Extensive experiments are conducted to demonstrate the effectiveness of our proposed models. Our code is available at https://github.com/yuziGuo/FarOptBasis. |
https://proceedings.mlr.press/v202/guo23j.html | https://proceedings.mlr.press/v202/guo23j/guo23j.pdf | https://openreview.net/forum?id=6XwCseSnww | LongCoder: A Long-Range Pre-trained Language Model for Code Completion | https://proceedings.mlr.press/v202/guo23j.html | Daya Guo, Canwen Xu, Nan Duan, Jian Yin, Julian Mcauley | https://proceedings.mlr.press/v202/guo23j.html | ICML 2023 | In this paper, we introduce a new task for code completion that focuses on handling long code input and propose a sparse Transformer model, called LongCoder, to address this task. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens are included to highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code context and the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference. |
https://proceedings.mlr.press/v202/guo23k.html | https://proceedings.mlr.press/v202/guo23k/guo23k.pdf | https://openreview.net/forum?id=DDwSa7XDxA | Estimating Heterogeneous Treatment Effects: Mutual Information Bounds and Learning Algorithms | https://proceedings.mlr.press/v202/guo23k.html | Xingzhuo Guo, Yuchen Zhang, Jianmin Wang, Mingsheng Long | https://proceedings.mlr.press/v202/guo23k.html | ICML 2023 | Estimating heterogeneous treatment effects (HTE) from observational studies is rising in importance due to the widespread accumulation of data in many fields. Due to the selection bias behind the inaccessibility of counterfactual data, the problem differs fundamentally from supervised learning in a challenging way. However, existing works on modeling selection bias and corresponding algorithms do not naturally generalize to non-binary treatment spaces. To address this limitation, we propose to use mutual information to describe selection bias in estimating HTE and derive a novel error bound using the mutual information between the covariates and the treatments, which is the first error bound to cover general treatment schemes including multinoulli or continuous spaces. We then bring forth theoretically justified algorithms, the Mutual Information Treatment Network (MitNet), using adversarial optimization to reduce selection bias and obtain more accurate HTE estimations. Our algorithm reaches remarkable performance in both simulation study and empirical evaluation. |
https://proceedings.mlr.press/v202/guo23l.html | https://proceedings.mlr.press/v202/guo23l/guo23l.pdf | https://openreview.net/forum?id=VOcPCpmEnZ | Identifying Useful Learnwares for Heterogeneous Label Spaces | https://proceedings.mlr.press/v202/guo23l.html | Lan-Zhe Guo, Zhi Zhou, Yu-Feng Li, Zhi-Hua Zhou | https://proceedings.mlr.press/v202/guo23l.html | ICML 2023 | The learnware paradigm aims to build a learnware market containing numerous learnwares, each of which is a well-performing machine learning model with a corresponding specification to describe its functionality so that future users can identify useful models for reuse according to their own requirements. With the learnware paradigm, model developers can spontaneously submit models to the market without leaking data privacy, and users can leverage models in the market to accomplish different machine learning tasks without having to build models from scratch. Recent studies have attempted to realize the model specification through Reduced Kernel Mean Embedding (RKME). In this paper, we make an attempt to improve the effectiveness of RKME specification for heterogeneous label spaces, where the learnware market does not contain a model that has the same label space as the user’s task, by considering a class-specific model specification explicitly, along with a class-wise learnware identification method. Both theoretical and empirical analyses show that our proposal can quickly and accurately find useful learnwares that satisfy users’ requirements. Moreover, we find that for a specific task, reusing a small model identified via the specification performs better than directly reusing a pre-trained generic big model. |
https://proceedings.mlr.press/v202/gupta23a.html | https://proceedings.mlr.press/v202/gupta23a/gupta23a.pdf | https://openreview.net/forum?id=wGgIcftFzm | High-dimensional Location Estimation via Norm Concentration for Subgamma Vectors | https://proceedings.mlr.press/v202/gupta23a.html | Shivam Gupta, Jasper C.H. Lee, Eric Price | https://proceedings.mlr.press/v202/gupta23a.html | ICML 2023 | In location estimation, we are given $n$ samples from a known distribution $f$ shifted by an unknown translation $\lambda$, and want to estimate $\lambda$ as precisely as possible. Asymptotically, the maximum likelihood estimate achieves the Cramér-Rao bound of error $\mathcal N(0, \frac{1}{n\mathcal I})$, where $\mathcal I$ is the Fisher information of $f$. However, the $n$ required for convergence depends on $f$, and may be arbitrarily large. We build on the theory using smoothed estimators to bound the error for finite $n$ in terms of $\mathcal I_r$, the Fisher information of the $r$-smoothed distribution. As $n \to \infty$, $r \to 0$ at an explicit rate and this converges to the Cramér-Rao bound. We (1) improve the prior work for 1-dimensional $f$ to converge for constant failure probability in addition to high probability, and (2) extend the theory to high-dimensional distributions. In the process, we prove a new bound on the norm of a high-dimensional random variable whose 1-dimensional projections are subgamma, which may be of independent interest. |
https://proceedings.mlr.press/v202/gupta23b.html | https://proceedings.mlr.press/v202/gupta23b/gupta23b.pdf | https://openreview.net/forum?id=pNi4q28UyI | GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets | https://proceedings.mlr.press/v202/gupta23b.html | Shubham Gupta, Sahil Manchanda, Sayan Ranu, Srikanta J. Bedathur | https://proceedings.mlr.press/v202/gupta23b.html | ICML 2023 | Graph neural networks (GNNs), in general, are built on the assumption of a static set of features characterizing each node in a graph. This assumption is often violated in practice. Existing methods partly address this issue through feature imputation. However, these techniques (i) assume uniformity of feature set across nodes, (ii) are transductive by nature, and (iii) fail to work when features are added or removed over time. In this work, we address these limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a novel allotropic transformation on the original graph, wherein the nodes and features are decoupled through a bipartite encoding. Through a carefully chosen message passing framework on the allotropic transformation, we make the model parameter size independent of the number of features and thereby inductive to both unseen nodes and features. We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen features does not come at the cost of expressivity. In addition, as demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN with high empirical efficacy and the ability to learn in continual fashion over streaming feature sets. |
https://proceedings.mlr.press/v202/gupta23c.html | https://proceedings.mlr.press/v202/gupta23c/gupta23c.pdf | https://openreview.net/forum?id=ZXXPQ8GptX | Online Platt Scaling with Calibeating | https://proceedings.mlr.press/v202/gupta23c.html | Chirag Gupta, Aaditya Ramdas | https://proceedings.mlr.press/v202/gupta23c.html | ICML 2023 | We present an online post-hoc calibration method, called Online Platt Scaling (OPS), which combines the Platt scaling technique with online logistic regression. We demonstrate that OPS smoothly adapts between i.i.d. and non-i.i.d. settings with distribution drift. Further, in scenarios where the best Platt scaling model is itself miscalibrated, we enhance OPS by incorporating a recently developed technique called calibeating to make it more robust. Theoretically, our resulting OPS+calibeating method is guaranteed to be calibrated for adversarial outcome sequences. Empirically, it is effective on a range of synthetic and real-world datasets, with and without distribution drifts, achieving superior performance without hyperparameter tuning. Finally, we extend all OPS ideas to the beta scaling method. |
https://proceedings.mlr.press/v202/gurulingan23a.html | https://proceedings.mlr.press/v202/gurulingan23a/gurulingan23a.pdf | https://openreview.net/forum?id=LZhwwe7j9l | Multi-Task Structural Learning using Local Task Similarity induced Neuron Creation and Removal | https://proceedings.mlr.press/v202/gurulingan23a.html | Naresh Kumar Gurulingan, Bahram Zonooz, Elahe Arani | https://proceedings.mlr.press/v202/gurulingan23a.html | ICML 2023 | Multi-task learning has the potential to improve generalization by maximizing positive transfer between tasks while reducing task interference. Fully achieving this potential is hindered by manually designed architectures that remain static throughout training. On the contrary, learning in the brain occurs through structural changes that are in tandem with changes in synaptic strength. Thus, we propose Multi-Task Structural Learning (MTSL) that simultaneously learns the multi-task architecture and its parameters. MTSL begins with an identical single-task network for each task and alternates between a task-learning phase and a structural-learning phase. In the task learning phase, each network specializes in the corresponding task. In each of the structural learning phases, starting from the earliest layer, locally similar task layers first transfer their knowledge to a newly created group layer before being removed. MTSL then uses the group layer in place of the corresponding removed task layers and moves on to the next layers. Our empirical results show that MTSL achieves competitive generalization with various baselines and improves robustness to out-of-distribution data. |
https://proceedings.mlr.press/v202/guth23a.html | https://proceedings.mlr.press/v202/guth23a/guth23a.pdf | https://openreview.net/forum?id=WHVHiOR3XQ | Conditionally Strongly Log-Concave Generative Models | https://proceedings.mlr.press/v202/guth23a.html | Florentin Guth, Etienne Lempereur, Joan Bruna, Stéphane Mallat | https://proceedings.mlr.press/v202/guth23a.html | ICML 2023 | There is a growing gap between the impressive results of deep image generative models and classical algorithms that offer theoretical guarantees. The former suffer from mode collapse or memorization issues, limiting their application to scientific data. The latter require restrictive assumptions such as log-concavity to escape the curse of dimensionality. We partially bridge this gap by introducing conditionally strongly log-concave (CSLC) models, which factorize the data distribution into a product of conditional probability distributions that are strongly log-concave. This factorization is obtained with orthogonal projectors adapted to the data distribution. It leads to efficient parameter estimation and sampling algorithms, with theoretical guarantees, although the data distribution is not globally log-concave. We show that several challenging multiscale processes are conditionally log-concave using wavelet packet orthogonal projectors. Numerical results are shown for physical fields such as the $\varphi^4$ model and weak lensing convergence maps with higher resolution than in previous works. |
https://proceedings.mlr.press/v202/gutteridge23a.html | https://proceedings.mlr.press/v202/gutteridge23a/gutteridge23a.pdf | https://openreview.net/forum?id=WEgjbJ6IDN | DRew: Dynamically Rewired Message Passing with Delay | https://proceedings.mlr.press/v202/gutteridge23a.html | Benjamin Gutteridge, Xiaowen Dong, Michael M. Bronstein, Francesco Di Giovanni | https://proceedings.mlr.press/v202/gutteridge23a.html | ICML 2023 | Message passing neural networks (MPNNs) have been shown to suffer from the phenomenon of over-squashing that causes poor performance for tasks relying on long-range interactions. This can be largely attributed to message passing only occurring locally, over a node’s immediate neighbours. Rewiring approaches attempting to make graphs ’more connected’, and supposedly better suited to long-range tasks, often lose the inductive bias provided by distance on the graph since they make distant nodes communicate instantly at every layer. In this paper we propose a framework, applicable to any MPNN architecture, that performs a layer-dependent rewiring to ensure gradual densification of the graph. We also propose a delay mechanism that permits skip connections between nodes depending on the layer and their mutual distance. We validate our approach on several long-range tasks and show that it outperforms graph Transformers and multi-hop MPNNs. |
https://proceedings.mlr.press/v202/guyomard23a.html | https://proceedings.mlr.press/v202/guyomard23a/guyomard23a.pdf | https://openreview.net/forum?id=XW4R4LVKhw | Kernel Logistic Regression Approximation of an Understandable ReLU Neural Network | https://proceedings.mlr.press/v202/guyomard23a.html | Marie Guyomard, Susana Barbosa, Lionel Fillatre | https://proceedings.mlr.press/v202/guyomard23a.html | ICML 2023 | This paper proposes an understandable neural network whose score function is modeled as an additive sum of univariate spline functions. It extends usual understandable models like generative additive models, spline-based models, and neural additive models. It is shown that this neural network can be approximated by a logistic regression whose inputs are obtained with a non-linear preprocessing of input data. This preprocessing depends on the neural network initialization, but this paper establishes that it can be replaced by a non-random kernel-based preprocessing that no longer depends on the initialization. Hence, the convergence of the training process is guaranteed and the solution is unique for a given training dataset. |
https://proceedings.mlr.press/v202/h-zargarbashi23a.html | https://proceedings.mlr.press/v202/h-zargarbashi23a/h-zargarbashi23a.pdf | https://openreview.net/forum?id=zGf8J0bNfX | Conformal Prediction Sets for Graph Neural Networks | https://proceedings.mlr.press/v202/h-zargarbashi23a.html | Soroush H. Zargarbashi, Simone Antonelli, Aleksandar Bojchevski | https://proceedings.mlr.press/v202/h-zargarbashi23a.html | ICML 2023 | Despite the widespread use of graph neural networks (GNNs) we lack methods to reliably quantify their uncertainty. We propose a conformal procedure to equip GNNs with prediction sets that come with distribution-free guarantees – the output set contains the true label with arbitrarily high probability. Our post-processing procedure can wrap around any (pretrained) GNN, and unlike existing methods, results in meaningful sets even when the model provides only the top class. The key idea is to diffuse the node-wise conformity scores to incorporate neighborhood information. By leveraging the network homophily we construct sets with comparable or better efficiency (average size) and significantly improved singleton hit ratio (correct sets of size one). In addition to an extensive empirical evaluation, we investigate the theoretical conditions under which smoothing provably improves efficiency. |
https://proceedings.mlr.press/v202/ha23a.html | https://proceedings.mlr.press/v202/ha23a/ha23a.pdf | https://openreview.net/forum?id=hbgD1Wdcaq | Social learning spontaneously emerges by searching optimal heuristics with deep reinforcement learning | https://proceedings.mlr.press/v202/ha23a.html | Seungwoong Ha, Hawoong Jeong | https://proceedings.mlr.press/v202/ha23a.html | ICML 2023 | How have individuals of social animals in nature evolved to learn from each other, and what would be the optimal strategy for such learning in a specific environment? Here, we address both problems by employing a deep reinforcement learning model to optimize the social learning strategies (SLSs) of agents in a cooperative game in a multi-dimensional landscape. Throughout the training for maximizing the overall payoff, we find that the agent spontaneously learns various concepts of social learning, such as copying, focusing on frequent and well-performing neighbors, self-comparison, long-term cooperation between agents, and the importance of balancing between individual and social learning, without any explicit guidance or prior knowledge about the system. The SLS from a fully trained agent outperforms all of the traditional, baseline SLSs in terms of mean payoff. We demonstrate the superior performance of the reinforcement learning agent in various environments, including temporally changing environments and real social networks, which also verifies the adaptability of our framework to different social settings. |
https://proceedings.mlr.press/v202/haider23a.html | https://proceedings.mlr.press/v202/haider23a/haider23a.pdf | https://openreview.net/forum?id=ExwHyYdsmT | Convex Geometry of ReLU-layers, Injectivity on the Ball and Local Reconstruction | https://proceedings.mlr.press/v202/haider23a.html | Daniel Haider, Martin Ehler, Peter Balazs | https://proceedings.mlr.press/v202/haider23a.html | ICML 2023 | The paper uses a frame-theoretic setting to study the injectivity of a ReLU-layer on the closed ball of $\mathbb{R}^n$ and its non-negative part. In particular, the interplay between the radius of the ball and the bias vector is emphasized. Together with a perspective from convex geometry, this leads to a computationally feasible method of verifying the injectivity of a ReLU-layer under reasonable restrictions in terms of an upper bound of the bias vector. Explicit reconstruction formulas are provided, inspired by the duality concept from frame theory. All this gives rise to the possibility of quantifying the invertibility of a ReLU-layer and a concrete reconstruction algorithm for any input vector on the ball. |
https://proceedings.mlr.press/v202/hamman23a.html | https://proceedings.mlr.press/v202/hamman23a/hamman23a.pdf | https://openreview.net/forum?id=gzjK23oK9i | Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees | https://proceedings.mlr.press/v202/hamman23a.html | Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta | https://proceedings.mlr.press/v202/hamman23a.html | ICML 2023 | There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly. Towards finding robust counterfactuals, existing literature often assumes that the original model $m$ and the new model $M$ are bounded in the parameter space, i.e., $\|\text{Params}(M){-}\text{Params}(m)\|{<}\Delta$. However, models can often change significantly in the parameter space with little to no change in their predictions or accuracy on the given dataset. In this work, we introduce a mathematical abstraction termed naturally-occurring model change, which allows for arbitrary changes in the parameter space such that the change in predictions on points that lie on the data manifold is limited. Next, we propose a measure – that we call Stability – to quantify the robustness of counterfactuals to potential model changes for differentiable models, e.g., neural networks. Our main contribution is to show that counterfactuals with sufficiently high value of Stability as defined by our measure will remain valid after potential “naturally-occurring” model changes with high probability (leveraging concentration bounds for Lipschitz function of independent Gaussians). Since our quantification depends on the local Lipschitz constant around a data point which is not always available, we also examine practical relaxations of our proposed measure and demonstrate experimentally how they can be incorporated to find robust counterfactuals for neural networks that are close, realistic, and remain valid after potential model changes. |
https://proceedings.mlr.press/v202/han23a.html | https://proceedings.mlr.press/v202/han23a/han23a.pdf | https://openreview.net/forum?id=VorD7k3Ldh | Wrapped Cauchy Distributed Angular Softmax for Long-Tailed Visual Recognition | https://proceedings.mlr.press/v202/han23a.html | Boran Han | https://proceedings.mlr.press/v202/han23a.html | ICML 2023 | Addressing imbalanced or long-tailed data is a major challenge in visual recognition tasks due to disparities between training and testing distributions and issues with data noise. We propose the Wrapped Cauchy Distributed Angular Softmax (WCDAS), a novel softmax function that incorporates data-wise Gaussian-based kernels into the angular correlation between feature representations and classifier weights, effectively mitigating noise and sparse sampling concerns. The class-wise distribution of angular representation becomes a sum of these kernels. Our theoretical analysis reveals that the wrapped Cauchy distribution outperforms the Gaussian distribution in approximating mixed distributions. Additionally, WCDAS uses trainable concentration parameters to dynamically adjust the compactness and margin of each class. Empirical results confirm label-aware behavior in these parameters and demonstrate WCDAS’s superiority over other state-of-the-art softmax-based methods in handling long-tailed visual recognition across multiple benchmark datasets. The code is publicly available. |