abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v202/deng23c.html | https://proceedings.mlr.press/v202/deng23c/deng23c.pdf | https://openreview.net/forum?id=UdiUd99I81 | Multi-channel Autobidding with Budget and ROI Constraints | https://proceedings.mlr.press/v202/deng23c.html | Yuan Deng, Negin Golrezaei, Patrick Jaillet, Jason Cheuk Nam Liang, Vahab Mirrokni | https://proceedings.mlr.press/v202/deng23c.html | ICML 2023 | In digital online advertising, advertisers procure ad impressions simultaneously on multiple platforms, or so-called channels, such as Google Ads, Meta Ads Manager, etc., each of which consists of numerous ad auctions. We study how an advertiser maximizes total conversion (e.g. ad clicks) while satisfying aggregate return-on-investment (ROI) and budget constraints across all channels. In practice, an advertiser does not have control over, and thus cannot globally optimize, which individual ad auctions she participates in for each channel, and instead authorizes a channel to procure impressions on her behalf: the advertiser can only utilize two levers on each channel, namely setting a per-channel budget and per-channel target ROI. In this work, we first analyze the effectiveness of each of these levers for solving the advertiser’s global multi-channel problem. We show that when an advertiser only optimizes over per-channel ROIs, her total conversion can be arbitrarily worse than what she could have obtained in the global problem. Further, we show that the advertiser can achieve the global optimal conversion when she only optimizes over per-channel budgets. In light of this finding, under a bandit feedback setting that mimics real-world scenarios where advertisers have limited information on ad auctions in each channel and how channels procure ads, we present an efficient learning algorithm that produces per-channel budgets whose resulting conversion approximates that of the global optimal problem. |
https://proceedings.mlr.press/v202/deng23d.html | https://proceedings.mlr.press/v202/deng23d/deng23d.pdf | https://openreview.net/forum?id=zRkz4duLKp | Surrogate Module Learning: Reduce the Gradient Error Accumulation in Training Spiking Neural Networks | https://proceedings.mlr.press/v202/deng23d.html | Shikuang Deng, Hao Lin, Yuhang Li, Shi Gu | https://proceedings.mlr.press/v202/deng23d.html | ICML 2023 | Spiking neural networks (SNNs) provide an alternative to conventional artificial neural networks, offering energy-saving and high-efficiency characteristics after hardware implementation. However, due to their non-differentiable activation function and the temporally delayed accumulation in outputs, the direct training of SNNs is extraordinarily difficult even when adopting a surrogate gradient to mimic backpropagation. For SNN training, this non-differentiability causes an intrinsic gradient error that is magnified through layerwise backpropagation, especially through multiple layers. In this paper, we propose a novel approach to reducing gradient error from a new perspective called surrogate module learning (SML). SML constructs a shortcut path to back-propagate a more accurate gradient to a certain part of the SNN by utilizing surrogate modules. Then, we develop a new loss function for concurrently training the network and enhancing the surrogate modules’ surrogate capacity. We demonstrate that when the outputs of the surrogate modules are close to the SNN output, the fraction of the gradient error drops significantly. Our method consistently and significantly enhances the performance of SNNs on all experimental datasets, including CIFAR-10/100, ImageNet, and ES-ImageNet. For example, for the spiking ResNet-34 architecture on ImageNet, we increase SNN accuracy by 3.46%. |
https://proceedings.mlr.press/v202/deng23e.html | https://proceedings.mlr.press/v202/deng23e/deng23e.pdf | https://openreview.net/forum?id=VlYcGv9fwE | Confidence and Dispersity Speak: Characterizing Prediction Matrix for Unsupervised Accuracy Estimation | https://proceedings.mlr.press/v202/deng23e.html | Weijian Deng, Yumin Suh, Stephen Gould, Liang Zheng | https://proceedings.mlr.press/v202/deng23e.html | ICML 2023 | This work aims to assess how well a model performs under distribution shifts without using labels. While recent methods study prediction confidence, this work reports prediction dispersity is another informative cue. Confidence reflects whether the individual prediction is certain; dispersity indicates how the overall predictions are distributed across all categories. Our key insight is that a well-performing model should give predictions with high confidence and high dispersity. That is, we need to consider both properties so as to make more accurate estimates. To this end, we use nuclear norm that has been shown to be effective in characterizing both properties. Extensive experiments validate the effectiveness of nuclear norm for various models (e.g., ViT and ConvNeXt), different datasets (e.g., ImageNet and CUB-200), and diverse types of distribution shifts (e.g., style shift and reproduction shift). We show that nuclear norm is more accurate and robust in accuracy estimation than existing methods. Furthermore, we validate the feasibility of other measurements (e.g., mutual information maximization) for characterizing dispersity and confidence. Lastly, we investigate the limitation of the nuclear norm, study its improved variant under severe class imbalance, and discuss potential directions. |
https://proceedings.mlr.press/v202/deng23f.html | https://proceedings.mlr.press/v202/deng23f/deng23f.pdf | https://openreview.net/forum?id=UfZuIrHhRu | Great Models Think Alike: Improving Model Reliability via Inter-Model Latent Agreement | https://proceedings.mlr.press/v202/deng23f.html | Ailin Deng, Miao Xiong, Bryan Hooi | https://proceedings.mlr.press/v202/deng23f.html | ICML 2023 | Reliable application of machine learning is of primary importance to the practical deployment of deep learning methods. A fundamental challenge is that models are often unreliable due to overconfidence. In this paper, we estimate a model’s reliability by measuring the agreement between its latent space, and the latent space of a foundation model. However, it is challenging to measure the agreement between two different latent spaces due to their incoherence, e.g., arbitrary rotations and different dimensionality. To overcome this incoherence issue, we design a neighborhood agreement measure between latent spaces and find that this agreement is surprisingly well-correlated with the reliability of a model’s predictions. Further, we show that fusing neighborhood agreement into a model’s predictive confidence in a post-hoc way significantly improves its reliability. Theoretical analysis and extensive experiments on failure detection across various datasets verify the effectiveness of our method on both in-distribution and out-of-distribution settings. |
https://proceedings.mlr.press/v202/desai23a.html | https://proceedings.mlr.press/v202/desai23a/desai23a.pdf | https://openreview.net/forum?id=mDHWy6zwzD | Hyperbolic Image-text Representations | https://proceedings.mlr.press/v202/desai23a.html | Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson, Shanmukha Ramakrishna Vedantam | https://proceedings.mlr.press/v202/desai23a.html | ICML 2023 | Visual and linguistic concepts naturally organize themselves in a hierarchy, where a textual concept "dog" entails all images that contain dogs. Despite being intuitive, current large-scale vision and language models such as CLIP do not explicitly capture such hierarchy. We propose MERU, a contrastive model that yields hyperbolic representations of images and text. Hyperbolic spaces have suitable geometric properties to embed tree-like data, so MERU can better capture the underlying hierarchy in image-text datasets. Our results show that MERU learns a highly interpretable and structured representation space while being competitive with CLIP’s performance on standard multi-modal tasks like image classification and image-text retrieval. |
https://proceedings.mlr.press/v202/desai23b.html | https://proceedings.mlr.press/v202/desai23b/desai23b.pdf | https://openreview.net/forum?id=kvvHeldUfw | Hardware-Aware Compression with Random Operation Access Specific Tile (ROAST) Hashing | https://proceedings.mlr.press/v202/desai23b.html | Aditya Desai, Keren Zhou, Anshumali Shrivastava | https://proceedings.mlr.press/v202/desai23b.html | ICML 2023 | Advancements in deep learning are often associated with increasing model sizes. Training and deploying large models require sophisticated hardware and incur significantly higher costs. Thus, model compression is a widely explored approach to solving the problem. However, SOTA techniques fall short in one or more desirable aspects of compression - for instance, pruning does not reduce memory for training, quantization can only provide up to 32$\times$ compression, HashedNet is cache-inefficient, etc. This paper proposes a model-agnostic, cache-friendly, and hardware-aware model compression approach: Random Operation Access Specific Tile (ROAST) hashing. ROAST collapses the parameters by clubbing them through a lightweight mapping. While clubbing these parameters, ROAST utilizes cache hierarchies by aligning the memory access pattern with the parameter access pattern. ROAST is up to ${\sim}25\times$ faster to train and ${\sim}50\times$ faster to infer than the popular parameter sharing method HashedNet. Additionally, ROAST introduces global weight sharing, which is empirically and theoretically superior to local weight sharing in HashedNet, and can be of independent interest. With ROAST, we can efficiently train and deploy the model using a much smaller memory footprint ($\sim 10 - 100\times$ lesser) in text and image classification tasks. ROAST-MM kernel implementation is open-source (https://github.com/apd10/RzLinear/tree/stable) |
https://proceedings.mlr.press/v202/dettmers23a.html | https://proceedings.mlr.press/v202/dettmers23a/dettmers23a.pdf | https://openreview.net/forum?id=i8tGb1ab1j | The case for 4-bit precision: k-bit Inference Scaling Laws | https://proceedings.mlr.press/v202/dettmers23a.html | Tim Dettmers, Luke Zettlemoyer | https://proceedings.mlr.press/v202/dettmers23a.html | ICML 2023 | Quantization methods reduce the number of bits required to represent each parameter in a model, trading accuracy for smaller memory footprints and inference latencies. However, the final model size depends on both the number of parameters of the original model and the rate of compression. For example, a 30B 8-bit model and a 60B 4-bit model have the same number of bits but may have very different zero-shot accuracies. In this work, we study this trade-off by developing inference scaling laws of zero-shot performance in Large Language Models (LLMs) to determine the bit-precision and model size that maximizes zero-shot performance. We run more than 35,000 experiments with 16-bit inputs and k-bit parameters to examine which zero-shot quantization methods improve scaling for 3 to 8-bit precision at scales of 19M to 176B parameters across the LLM families BLOOM, OPT, NeoX/Pythia, and GPT-2. We find that it is challenging to improve the bit-level scaling trade-off, with the only improvements being the use of a small block size – splitting the parameters into small independently quantized blocks – and the quantization data type being used (e.g., Int vs Float). Overall, our findings show that 4-bit precision is almost universally optimal for total model bits and zero-shot accuracy. |
https://proceedings.mlr.press/v202/devic23a.html | https://proceedings.mlr.press/v202/devic23a/devic23a.pdf | https://openreview.net/forum?id=oDR4MurRrT | Fairness in Matching under Uncertainty | https://proceedings.mlr.press/v202/devic23a.html | Siddartha Devic, David Kempe, Vatsal Sharan, Aleksandra Korolova | https://proceedings.mlr.press/v202/devic23a.html | ICML 2023 | The prevalence and importance of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings. Algorithmic decisions are used in assigning students to schools, users to advertisers, and applicants to job interviews. These decisions should heed the preferences of individuals, and simultaneously be fair with respect to their merits (synonymous with fit, future performance, or need). Merits conditioned on observable features are always uncertain, a fact that is exacerbated by the widespread use of machine learning algorithms to infer merit from the observables. As our key contribution, we carefully axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits; indeed, it simultaneously recognizes uncertainty as the primary potential cause of unfairness and an approach to address it. We design a linear programming framework to find fair utility-maximizing distributions over allocations, and we show that the linear program is robust to perturbations in the estimated parameters of the uncertain merit distributions, a key property in combining the approach with machine learning techniques. |
https://proceedings.mlr.press/v202/dhawan23a.html | https://proceedings.mlr.press/v202/dhawan23a/dhawan23a.pdf | https://openreview.net/forum?id=3Ky05033V7 | Efficient Parametric Approximations of Neural Network Function Space Distance | https://proceedings.mlr.press/v202/dhawan23a.html | Nikita Dhawan, Sicong Huang, Juhan Bae, Roger Baker Grosse | https://proceedings.mlr.press/v202/dhawan23a.html | ICML 2023 | It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset. As a specific case, we consider estimating the Function Space Distance (FSD) over a training set, i.e. the average discrepancy between the outputs of two neural networks. We propose a Linearized Activation Function TRick (LAFTR) and derive an efficient approximation to FSD for ReLU neural networks. The key idea is to approximate the architecture as a linear network with stochastic gating. Despite requiring only one parameter per unit of the network, our approach outcompetes other parametric approximations with larger memory requirements. Applied to continual learning, our parametric approximation is competitive with state-of-the-art nonparametric approximations, which require storing many training examples. Furthermore, we show its efficacy in estimating influence functions accurately and detecting mislabeled examples without expensive iterations over the entire dataset. |
https://proceedings.mlr.press/v202/dheur23a.html | https://proceedings.mlr.press/v202/dheur23a/dheur23a.pdf | https://openreview.net/forum?id=3fNVNNyKyV | A Large-Scale Study of Probabilistic Calibration in Neural Network Regression | https://proceedings.mlr.press/v202/dheur23a.html | Victor Dheur, Souhaib Ben Taieb | https://proceedings.mlr.press/v202/dheur23a.html | ICML 2023 | Accurate probabilistic predictions are essential for optimal decision making. While neural network miscalibration has been studied primarily in classification, we investigate this in the less-explored domain of regression. We conduct the largest empirical study to date to assess the probabilistic calibration of neural networks. We also analyze the performance of recalibration, conformal, and regularization methods to enhance probabilistic calibration. Additionally, we introduce novel differentiable recalibration and regularization methods, uncovering new insights into their effectiveness. Our findings reveal that regularization methods offer a favorable tradeoff between calibration and sharpness. Post-hoc methods exhibit superior probabilistic calibration, which we attribute to the finite-sample coverage guarantee of conformal prediction. Furthermore, we demonstrate that quantile recalibration can be considered as a specific case of conformal prediction. Our study is fully reproducible and implemented in a common code base for fair comparisons. |
https://proceedings.mlr.press/v202/di23a.html | https://proceedings.mlr.press/v202/di23a/di23a.pdf | https://openreview.net/forum?id=QSiY62elrJ | Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path | https://proceedings.mlr.press/v202/di23a.html | Qiwei Di, Jiafan He, Dongruo Zhou, Quanquan Gu | https://proceedings.mlr.press/v202/di23a.html | ICML 2023 | We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel, where an agent repeatedly interacts with a stochastic environment and seeks to reach a certain goal state while minimizing the cumulative cost. Existing works often assume a strictly positive lower bound of the cost function or an upper bound of the expected length for the optimal policy. In this paper, we propose a new algorithm to eliminate these restrictive assumptions. Our algorithm is based on extended value iteration with a fine-grained variance-aware confidence set, where the variance is estimated recursively from high-order moments. Our algorithm achieves an $\tilde{\mathcal{O}}(dB_*\sqrt{K})$ regret bound, where $d$ is the dimension of the feature mapping in the linear transition kernel, $B_*$ is the upper bound of the total cumulative cost for the optimal policy, and $K$ is the number of episodes. Our regret upper bound matches the $\Omega(dB_*\sqrt{K})$ lower bound of linear mixture SSPs in Min et al. (2022), which suggests that our algorithm is nearly minimax optimal. |
https://proceedings.mlr.press/v202/di-giovanni23a.html | https://proceedings.mlr.press/v202/di-giovanni23a/di-giovanni23a.pdf | https://openreview.net/forum?id=t2tTfWwAEl | On Over-Squashing in Message Passing Neural Networks: The Impact of Width, Depth, and Topology | https://proceedings.mlr.press/v202/di-giovanni23a.html | Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Lio, Michael M. Bronstein | https://proceedings.mlr.press/v202/di-giovanni23a.html | ICML 2023 | Message Passing Neural Networks (MPNNs) are instances of Graph Neural Networks that leverage the graph to send messages over the edges. This inductive bias leads to a phenomenon known as over-squashing, where a node feature is insensitive to information contained at distant nodes. Despite recent methods introduced to mitigate this issue, an understanding of the causes for over-squashing and of possible solutions are lacking. In this theoretical work, we prove that: (i) Neural network width can mitigate over-squashing, but at the cost of making the whole network more sensitive; (ii) Conversely, depth cannot help mitigate over-squashing: increasing the number of layers leads to over-squashing being dominated by vanishing gradients; (iii) The graph topology plays the greatest role, since over-squashing occurs between nodes at high commute time. Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under graph rewiring. |
https://proceedings.mlr.press/v202/diakonikolas23a.html | https://proceedings.mlr.press/v202/diakonikolas23a/diakonikolas23a.pdf | https://openreview.net/forum?id=1RFpQOU8Jv | Nearly-Linear Time and Streaming Algorithms for Outlier-Robust PCA | https://proceedings.mlr.press/v202/diakonikolas23a.html | Ilias Diakonikolas, Daniel Kane, Ankit Pensia, Thanasis Pittas | https://proceedings.mlr.press/v202/diakonikolas23a.html | ICML 2023 | We study principal component analysis (PCA), where given a dataset in $\mathbb R^d$ from a distribution, the task is to find a unit vector $v$ that approximately maximizes the variance of the distribution after being projected along $v$. Despite being a classical task, standard estimators fail drastically if the data contains even a small fraction of outliers, motivating the problem of robust PCA. Recent work has developed computationally-efficient algorithms for robust PCA that either take super-linear time or have sub-optimal error guarantees. Our main contribution is to develop a nearly linear time algorithm for robust PCA with near-optimal error guarantees. We also develop a single-pass streaming algorithm for robust PCA with memory usage nearly-linear in the dimension. |
https://proceedings.mlr.press/v202/diakonikolas23b.html | https://proceedings.mlr.press/v202/diakonikolas23b/diakonikolas23b.pdf | https://openreview.net/forum?id=ZDCcGnQhCt | Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals | https://proceedings.mlr.press/v202/diakonikolas23b.html | Ilias Diakonikolas, Daniel Kane, Lisheng Ren | https://proceedings.mlr.press/v202/diakonikolas23b.html | ICML 2023 | We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples $(\mathbf{x},y)$ from an unknown distribution on $\mathbb{R}^n \times \{\pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression. |
https://proceedings.mlr.press/v202/diamant23a.html | https://proceedings.mlr.press/v202/diamant23a/diamant23a.pdf | https://openreview.net/forum?id=9WJsVG58YO | Improving Graph Generation by Restricting Graph Bandwidth | https://proceedings.mlr.press/v202/diamant23a.html | Nathaniel Lee Diamant, Alex M Tseng, Kangway V. Chuang, Tommaso Biancalani, Gabriele Scalia | https://proceedings.mlr.press/v202/diamant23a.html | ICML 2023 | Deep graph generative modeling has proven capable of learning the distribution of complex, multi-scale structures characterizing real-world graphs. However, one of the main limitations of existing methods is their large output space, which limits generation scalability and hinders accurate modeling of the underlying distribution. To overcome these limitations, we propose a novel approach that significantly reduces the output space of existing graph generative models. Specifically, starting from the observation that many real-world graphs have low graph bandwidth, we restrict graph bandwidth during training and generation. Our strategy improves both generation scalability and quality without increasing architectural complexity or reducing expressiveness. Our approach is compatible with existing graph generative methods, and we describe its application to both autoregressive and one-shot models. We extensively validate our strategy on synthetic and real datasets, including molecular graphs. Our experiments show that, in addition to improving generation efficiency, our approach consistently improves generation quality and reconstruction accuracy. The implementation is made available. |
https://proceedings.mlr.press/v202/diao23a.html | https://proceedings.mlr.press/v202/diao23a/diao23a.pdf | https://openreview.net/forum?id=9Kf9I2nqCh | Forward-Backward Gaussian Variational Inference via JKO in the Bures-Wasserstein Space | https://proceedings.mlr.press/v202/diao23a.html | Michael Ziyang Diao, Krishna Balasubramanian, Sinho Chewi, Adil Salim | https://proceedings.mlr.press/v202/diao23a.html | ICML 2023 | Variational inference (VI) seeks to approximate a target distribution $\pi$ by an element of a tractable family of distributions. Of key interest in statistics and machine learning is Gaussian VI, which approximates $\pi$ by minimizing the Kullback-Leibler (KL) divergence to $\pi$ over the space of Gaussians. In this work, we develop the (Stochastic) Forward-Backward Gaussian Variational Inference (FB-GVI) algorithm to solve Gaussian VI. Our approach exploits the composite structure of the KL divergence, which can be written as the sum of a smooth term (the potential) and a non-smooth term (the entropy) over the Bures-Wasserstein (BW) space of Gaussians endowed with the Wasserstein distance. For our proposed algorithm, we obtain state-of-the-art convergence guarantees when $\pi$ is log-smooth and log-concave, as well as the first convergence guarantees to first-order stationary solutions when $\pi$ is only log-smooth. |
https://proceedings.mlr.press/v202/dick23a.html | https://proceedings.mlr.press/v202/dick23a/dick23a.pdf | https://openreview.net/forum?id=nm4NwFfp7a | Subset-Based Instance Optimality in Private Estimation | https://proceedings.mlr.press/v202/dick23a.html | Travis Dick, Alex Kulesza, Ziteng Sun, Ananda Theertha Suresh | https://proceedings.mlr.press/v202/dick23a.html | ICML 2023 | We propose a new definition of instance optimality for differentially private estimation algorithms. Our definition requires an optimal algorithm to compete, simultaneously for every dataset $D$, with the best private benchmark algorithm that (a) knows $D$ in advance and (b) is evaluated by its worst-case performance on large subsets of $D$. That is, the benchmark algorithm need not perform well when potentially extreme points are added to $D$; it only has to handle the removal of a small number of real data points that already exist. This makes our benchmark significantly stronger than those proposed in prior work. We nevertheless show, for real-valued datasets, how to construct private algorithms that achieve our notion of instance optimality when estimating a broad class of dataset properties, including means, quantiles, and $\ell_p$-norm minimizers. For means in particular, we provide a detailed analysis and show that our algorithm simultaneously matches or exceeds the asymptotic performance of existing algorithms under a range of distributional assumptions. |
https://proceedings.mlr.press/v202/dimitriadis23a.html | https://proceedings.mlr.press/v202/dimitriadis23a/dimitriadis23a.pdf | https://openreview.net/forum?id=BUlx0rh7ha | Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models | https://proceedings.mlr.press/v202/dimitriadis23a.html | Nikolaos Dimitriadis, Pascal Frossard, François Fleuret | https://proceedings.mlr.press/v202/dimitriadis23a.html | ICML 2023 | In Multi-Task Learning (MTL), tasks may compete and limit the performance achieved on each other, rather than guiding the optimization to a solution superior to all its single-task trained counterparts. Since there is often not a unique solution optimal for all tasks, practitioners have to balance tradeoffs between tasks’ performance, and resort to optimality in the Pareto sense. Most MTL methodologies either completely neglect this aspect and, instead of aiming at learning a Pareto Front, produce one solution predefined by their optimization schemes, or produce diverse but discrete solutions. Recent approaches parameterize the Pareto Front via neural networks, leading to complex mappings from tradeoff to objective space. In this paper, we conjecture that the Pareto Front admits a linear parameterization in parameter space, which leads us to propose Pareto Manifold Learning, an ensembling method in weight space. Our approach produces a continuous Pareto Front in a single training run, allowing the performance on each task to be modulated during inference. Experiments on multi-task learning benchmarks, ranging from image classification to tabular datasets and scene understanding, show that Pareto Manifold Learning outperforms state-of-the-art single-point algorithms, while learning a better Pareto parameterization than multi-point baselines. |
https://proceedings.mlr.press/v202/ding23a.html | https://proceedings.mlr.press/v202/ding23a/ding23a.pdf | https://openreview.net/forum?id=JTTFrBX8KN | Bayesian Reparameterization of Reward-Conditioned Reinforcement Learning with Energy-based Models | https://proceedings.mlr.press/v202/ding23a.html | Wenhao Ding, Tong Che, Ding Zhao, Marco Pavone | https://proceedings.mlr.press/v202/ding23a.html | ICML 2023 | Recently, reward-conditioned reinforcement learning (RCRL) has gained popularity due to its simplicity, flexibility, and off-policy nature. However, we will show that current RCRL approaches are fundamentally limited and fail to address two critical challenges of RCRL – improving generalization on high reward-to-go (RTG) inputs, and avoiding out-of-distribution (OOD) RTG queries during testing time. To address these challenges when training vanilla RCRL architectures, we propose Bayesian Reparameterized RCRL (BR-RCRL), a novel set of inductive biases for RCRL inspired by Bayes’ theorem. BR-RCRL removes a core obstacle preventing vanilla RCRL from generalizing on high RTG inputs – a tendency of the model to treat different RTG inputs as independent values, which we term “RTG Independence”. BR-RCRL also allows us to design an accompanying adaptive inference method, which maximizes total returns while avoiding OOD queries that yield unpredictable behaviors in vanilla RCRL methods. We show that BR-RCRL achieves state-of-the-art performance on the Gym-Mujoco and Atari offline RL benchmarks, improving upon vanilla RCRL by up to 11%. |
https://proceedings.mlr.press/v202/ding23b.html | https://proceedings.mlr.press/v202/ding23b/ding23b.pdf | https://openreview.net/forum?id=nVO6YTca8O | DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus Algorithm | https://proceedings.mlr.press/v202/ding23b.html | Lisang Ding, Kexin Jin, Bicheng Ying, Kun Yuan, Wotao Yin | https://proceedings.mlr.press/v202/ding23b.html | ICML 2023 | Decentralized Stochastic Gradient Descent (SGD) is an emerging neural network training approach that enables multiple agents to train a model collaboratively and simultaneously. Rather than using a central parameter server to collect gradients from all the agents, each agent keeps a copy of the model parameters and communicates with a small number of other agents to exchange model updates. Their communication, governed by the communication topology and gossip weight matrices, facilitates the exchange of model updates. The state-of-the-art approach uses the dynamic one-peer exponential-2 topology, achieving faster training times and improved scalability than the ring, grid, torus, and hypercube topologies. However, this approach requires a power-of-2 number of agents, which is impractical at scale. In this paper, we remove this restriction and propose Decentralized SGD with Communication-optimal Exact Consensus Algorithm (DSGD-CECA), which works for any number of agents while still achieving state-of-the-art properties. In particular, DSGD-CECA incurs a unit per-iteration communication overhead and an $\tilde{O}(n^3)$ transient iteration complexity. Our proof is based on newly discovered properties of gossip weight matrices and a novel approach to combine them with DSGD’s convergence analysis. Numerical experiments show the efficiency of DSGD-CECA. |
https://proceedings.mlr.press/v202/ding23c.html | https://proceedings.mlr.press/v202/ding23c/ding23c.pdf | https://openreview.net/forum?id=Hsfchgv3WW | Open-Vocabulary Universal Image Segmentation with MaskCLIP | https://proceedings.mlr.press/v202/ding23c.html | Zheng Ding, Jieke Wang, Zhuowen Tu | https://proceedings.mlr.press/v202/ding23c.html | ICML 2023 | In this paper, we tackle an emerging computer vision task, open-vocabulary universal image segmentation, that aims to perform semantic/instance/panoptic segmentation (background semantic labeling + foreground instance segmentation) for arbitrary categories of text-based descriptions in inference time. We first build a baseline method by directly adopting pre-trained CLIP models without finetuning or distillation. We then develop MaskCLIP, a Transformer-based approach with a MaskCLIP Visual Encoder, which is an encoder-only module that seamlessly integrates mask tokens with a pre-trained ViT CLIP model for semantic/instance segmentation and class prediction. MaskCLIP learns to efficiently and effectively utilize pre-trained partial/dense CLIP features within the MaskCLIP Visual Encoder that avoids the time-consuming student-teacher training process. MaskCLIP outperforms previous methods for semantic/instance/panoptic segmentation on ADE20K and PASCAL datasets. We show qualitative illustrations for MaskCLIP with online custom categories. Project website: https://maskclip.github.io. |
https://proceedings.mlr.press/v202/ding23d.html | https://proceedings.mlr.press/v202/ding23d/ding23d.pdf | https://openreview.net/forum?id=fXBjFPL5HD | Entity Divider with Language Grounding in Multi-Agent Reinforcement Learning | https://proceedings.mlr.press/v202/ding23d.html | Ziluo Ding, Wanpeng Zhang, Junpeng Yue, Xiangjun Wang, Tiejun Huang, Zongqing Lu | https://proceedings.mlr.press/v202/ding23d.html | ICML 2023 | We investigate the use of natural language to drive the generalization of policies in multi-agent settings. Unlike single-agent settings, the generalization of policies should also consider the influence of other agents. Besides, with the increasing number of entities in multi-agent settings, more agent-entity interactions are needed for language grounding, and the enormous search space could impede the learning process. Moreover, given a simple general instruction, e.g., beating all enemies, agents are required to decompose it into multiple subgoals and figure out the right one to focus on. Inspired by previous work, we try to address these issues at the entity level and propose a novel framework for language grounding in multi-agent reinforcement learning, entity divider (EnDi). EnDi enables agents to independently learn subgoal division at the entity level and act in the environment based on the associated entities. The subgoal division is regularized by agent modeling to avoid subgoal conflicts and promote coordinated strategies. Empirically, EnDi demonstrates the strong generalization ability to unseen games with new dynamics and expresses the superiority over existing methods. The code is available at https://github.com/PKU-RL/EnDi. |
https://proceedings.mlr.press/v202/dinh23a.html | https://proceedings.mlr.press/v202/dinh23a/dinh23a.pdf | https://openreview.net/forum?id=2q1Whv1kXL | PixelAsParam: A Gradient View on Diffusion Sampling with Guidance | https://proceedings.mlr.press/v202/dinh23a.html | Anh-Dung Dinh, Daochang Liu, Chang Xu | https://proceedings.mlr.press/v202/dinh23a.html | ICML 2023 | Diffusion models recently achieved state-of-the-art in image generation. They mainly utilize the denoising framework, which leverages the Langevin dynamics process for image sampling. Recently, the guidance method has modified this process to add conditional information to achieve a controllable generator. However, the current guidance on denoising processes suffers from the trade-off between diversity, image quality, and conditional information. In this work, we propose to view this guidance sampling process from a gradient view, where image pixels are treated as parameters being optimized, and each mathematical term in the sampling process represents one update direction. This perspective reveals more insights into the conflict problems between updated directions on the pixels, which cause the trade-off as mentioned previously. We investigate the conflict problems and propose to solve them by a simple projection method. The experimental results evidently improve over different baselines on datasets with various resolutions. |
https://proceedings.mlr.press/v202/doikov23a.html | https://proceedings.mlr.press/v202/doikov23a/doikov23a.pdf | https://openreview.net/forum?id=Hk2fFm7W8c | Second-Order Optimization with Lazy Hessians | https://proceedings.mlr.press/v202/doikov23a.html | Nikita Doikov, El Mahdi Chayti, Martin Jaggi | https://proceedings.mlr.press/v202/doikov23a.html | ICML 2023 | We analyze Newton’s method with lazy Hessian updates for solving general possibly non-convex optimization problems. We propose to reuse a previously seen Hessian for several iterations while computing new gradients at each step of the method. This significantly reduces the overall arithmetic complexity of second-order optimization schemes. By using the cubic regularization technique, we establish fast global convergence of our method to a second-order stationary point, while the Hessian does not need to be updated each iteration. For convex problems, we justify global and local superlinear rates for lazy Newton steps with quadratic regularization, which is easier to compute. The optimal frequency for updating the Hessian is once every $d$ iterations, where $d$ is the dimension of the problem. This provably improves the total arithmetic complexity of second-order algorithms by a factor $\sqrt{d}$. |
https://proceedings.mlr.press/v202/doikov23b.html | https://proceedings.mlr.press/v202/doikov23b/doikov23b.pdf | https://openreview.net/forum?id=gJboa2IOua | Polynomial Preconditioning for Gradient Methods | https://proceedings.mlr.press/v202/doikov23b.html | Nikita Doikov, Anton Rodomanov | https://proceedings.mlr.press/v202/doikov23b.html | ICML 2023 | We study first-order methods with preconditioning for solving structured convex optimization problems. We propose a new family of preconditioners generated by the symmetric polynomials. They provide the first-order optimization methods with a provable improvement of the condition number, cutting the gaps between highest eigenvalues, without explicit knowledge of the actual spectrum. We give a stochastic interpretation of this preconditioning in terms of the coordinate volume sampling and compare it with other classical approaches, including the Chebyshev polynomials. We show how to incorporate a polynomial preconditioning into the Gradient and Fast Gradient Methods and establish their better global complexity bounds. Finally, we propose a simple adaptive search procedure that automatically ensures the best polynomial preconditioning for the Gradient Method, minimizing the objective along a low-dimensional Krylov subspace. Numerical experiments confirm the efficiency of our preconditioning strategies for solving various machine learning problems. |
https://proceedings.mlr.press/v202/dominguez-olmedo23a.html | https://proceedings.mlr.press/v202/dominguez-olmedo23a/dominguez-olmedo23a.pdf | https://openreview.net/forum?id=XAGr6u76Lu | On Data Manifolds Entailed by Structural Causal Models | https://proceedings.mlr.press/v202/dominguez-olmedo23a.html | Ricardo Dominguez-Olmedo, Amir-Hossein Karimi, Georgios Arvanitidis, Bernhard Schölkopf | https://proceedings.mlr.press/v202/dominguez-olmedo23a.html | ICML 2023 | The geometric structure of data is an important inductive bias in machine learning. In this work, we characterize the data manifolds entailed by structural causal models. The strengths of the proposed framework are twofold: firstly, the geometric structure of the data manifolds is causally informed, and secondly, it enables causal reasoning about the data manifolds in an interventional and a counterfactual sense. We showcase the versatility of the proposed framework by applying it to the generation of causally-grounded counterfactual explanations for machine learning classifiers, measuring distances along the data manifold in a differential geometric-principled manner. |
https://proceedings.mlr.press/v202/dong23a.html | https://proceedings.mlr.press/v202/dong23a/dong23a.pdf | https://openreview.net/forum?id=wO6ExWRO3c | Towards Understanding and Reducing Graph Structural Noise for GNNs | https://proceedings.mlr.press/v202/dong23a.html | Mingze Dong, Yuval Kluger | https://proceedings.mlr.press/v202/dong23a.html | ICML 2023 | Graph neural networks (GNNs) have emerged as a powerful paradigm to learn from relational data mostly through applying the message passing mechanism. However, this approach may exhibit suboptimal performance when applied to graphs possessing various structural issues. In this work, we focus on understanding and alleviating the effect of graph structural noise on GNN performance. To evaluate the graph structural noise in real data, we propose edge signal-to-noise ratio (ESNR), a novel metric evaluating overall edge noise level with respect to data features or labels based on random matrix theory. We have found striking concordance between the proposed ESNR metric and the GNN performance in various simulated and real data. To reduce the effect of the noise, we propose GPS (Graph Propensity Score) graph rewiring, which estimates the edge likelihood for rewiring data graphs based on self-supervised link prediction. We provide a theoretical guarantee for GPS graph rewiring and demonstrate its efficacy by comprehensive benchmarks. |
https://proceedings.mlr.press/v202/dong23b.html | https://proceedings.mlr.press/v202/dong23b/dong23b.pdf | https://openreview.net/forum?id=5VdcSxrlTK | SpeedDETR: Speed-aware Transformers for End-to-end Object Detection | https://proceedings.mlr.press/v202/dong23b.html | Peiyan Dong, Zhenglun Kong, Xin Meng, Peng Zhang, Hao Tang, Yanzhi Wang, Chih-Hsien Chou | https://proceedings.mlr.press/v202/dong23b.html | ICML 2023 | Vision Transformers (ViTs) have continuously achieved new milestones in object detection. However, the considerable computation and memory burden compromise their efficiency and generalization of deployment on resource-constrained devices. Besides, efficient transformer-based detectors designed by existing works can hardly achieve a realistic speedup, especially on multi-core processors (e.g., GPUs). The main issue is that the current literature solely concentrates on building algorithms with minimal computation, overlooking that the practical latency can also be affected by the memory access cost and the degree of parallelism. Therefore, we propose SpeedDETR, a novel speed-aware transformer for end-to-end object detectors, achieving high-speed inference on multiple devices. Specifically, we design a latency prediction model which can directly and accurately estimate the network latency by analyzing network properties, hardware memory access pattern, and degree of parallelism. Following the effective local-to-global visual modeling process and the guidance of the latency prediction model, we build our hardware-oriented architecture design and develop a new family of SpeedDETR. Experiments on the MS COCO dataset show SpeedDETR outperforms current DETR-based methods on Tesla V100. Acceptable inference speed can even be achieved on edge GPUs. |
https://proceedings.mlr.press/v202/dong23c.html | https://proceedings.mlr.press/v202/dong23c/dong23c.pdf | https://openreview.net/forum?id=ikE60aXe8M | Understand and Modularize Generator Optimization in ELECTRA-style Pretraining | https://proceedings.mlr.press/v202/dong23c.html | Chengyu Dong, Liyuan Liu, Hao Cheng, Jingbo Shang, Jianfeng Gao, Xiaodong Liu | https://proceedings.mlr.press/v202/dong23c.html | ICML 2023 | Despite the effectiveness of ELECTRA-style pre-training, its performance is dependent on the careful selection of the model size for the auxiliary generator, leading to high trial-and-error costs. In this paper, we present the first systematic study of this problem. Our theoretical investigation highlights the importance of controlling the generator capacity in ELECTRA-style training. Meanwhile, we find that it is not handled properly in the original ELECTRA design, leading to the sensitivity issue. Specifically, since adaptive optimizers like Adam will cripple the weighting of individual losses in the joint optimization, the original design fails to control the generator training effectively. To regain control over the generator, we modularize the generator optimization by decoupling the generator optimizer and discriminator optimizer completely, instead of simply relying on the weighted objective combination. Our simple technique reduces the sensitivity of ELECTRA training significantly and obtains a considerable performance gain compared to the original design. |
https://proceedings.mlr.press/v202/dong23d.html | https://proceedings.mlr.press/v202/dong23d/dong23d.pdf | https://openreview.net/forum?id=PJzjHAnoVp | Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation | https://proceedings.mlr.press/v202/dong23d.html | Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han | https://proceedings.mlr.press/v202/dong23d.html | ICML 2023 | Generating unlabeled data has been recently shown to help address the few-shot hypothesis adaptation (FHA) problem, where we aim to train a classifier for the target domain with a few labeled target-domain data and a well-trained source-domain classifier (i.e., a source hypothesis), for the additional information of the highly-compatible unlabeled data. However, the generated data of the existing methods are extremely similar or even the same. The strong dependency among the generated data will lead the learning to fail. In this paper, we propose a diversity-enhancing generative network (DEG-Net) for the FHA problem, which can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC). Specifically, DEG-Net will generate data via minimizing the HSIC value (i.e., maximizing the independence) among the semantic features of the generated data. By DEG-Net, the generated unlabeled data are more diverse and more effective for addressing the FHA problem. Experimental results show that the DEG-Net outperforms existing FHA baselines and further verifies that generating diverse data plays an important role in addressing the FHA problem. |
https://proceedings.mlr.press/v202/dong23e.html | https://proceedings.mlr.press/v202/dong23e/dong23e.pdf | https://openreview.net/forum?id=Yzfg7JhPhp | PASTA: Pessimistic Assortment Optimization | https://proceedings.mlr.press/v202/dong23e.html | Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X Fang, Vahid Tarokh | https://proceedings.mlr.press/v202/dong23e.html | ICML 2023 | We consider a fundamental class of assortment optimization problems in an offline data-driven setting. The firm does not know the underlying customer choice model but has access to an offline dataset consisting of the historically offered assortment set, customer choice, and revenue. The objective is to use the offline dataset to find an optimal assortment. Due to the combinatorial nature of assortment optimization, the problem of insufficient data coverage is likely to occur in the offline dataset. Therefore, designing a provably efficient offline learning algorithm becomes a significant challenge. To this end, based on the principle of pessimism, we propose a novel algorithm called Pessimistic ASsortment opTimizAtion (PASTA for short), which can correctly identify the optimal assortment by only requiring the offline data to cover the optimal assortment under general settings. In particular, we establish the first regret bound for the offline assortment optimization problem under the celebrated multinomial logit model (MNL). We also propose an efficient computational procedure to solve our pessimistic assortment optimization problem. Our numerical studies demonstrate the superiority of the proposed method over the existing baseline method. |
https://proceedings.mlr.press/v202/dong23f.html | https://proceedings.mlr.press/v202/dong23f/dong23f.pdf | https://openreview.net/forum?id=5KX2NKgFjD | Adaptively Weighted Data Augmentation Consistency Regularization for Robust Optimization under Concept Shift | https://proceedings.mlr.press/v202/dong23f.html | Yijun Dong, Yuege Xie, Rachel Ward | https://proceedings.mlr.press/v202/dong23f.html | ICML 2023 | Concept shift is a prevailing problem in natural tasks like medical image segmentation where samples usually come from different subpopulations with variant correlations between features and labels. One common type of concept shift in medical image segmentation is the "information imbalance" between label-sparse samples with few (if any) segmentation labels and label-dense samples with plentiful labeled pixels. Existing distributionally robust algorithms have focused on adaptively truncating/down-weighting the "less informative" (i.e., label-sparse in our context) samples. To exploit data features of label-sparse samples more efficiently, we propose an adaptively weighted online optimization algorithm — AdaWAC — to incorporate data augmentation consistency regularization in sample reweighting. Our method introduces a set of trainable weights to balance the supervised loss and unsupervised consistency regularization of each sample separately. At the saddle point of the underlying objective, the weights assign label-dense samples to the supervised loss and label-sparse samples to the unsupervised consistency regularization. We provide a convergence guarantee by recasting the optimization as online mirror descent on a saddle point problem. Our empirical results demonstrate that AdaWAC not only enhances the segmentation performance and sample efficiency but also improves the robustness to concept shift on various medical image segmentation tasks with different UNet-style backbones. |
https://proceedings.mlr.press/v202/dong23g.html | https://proceedings.mlr.press/v202/dong23g/dong23g.pdf | https://openreview.net/forum?id=T6kFiVUOn2 | Does Sparsity Help in Learning Misspecified Linear Bandits? | https://proceedings.mlr.press/v202/dong23g.html | Jialin Dong, Lin Yang | https://proceedings.mlr.press/v202/dong23g.html | ICML 2023 | Recently, the study of linear misspecified bandits has generated intriguing implications of the hardness of learning in bandits and reinforcement learning (RL). In particular, Du et al. (2020) shows that even if a learner is given linear features in $\mathbb{R}^d$ that approximate the rewards in a bandit or RL with a uniform error of $\varepsilon$, searching for an $O(\varepsilon)$-optimal action requires pulling at least $\Omega(\exp(d))$ queries. Furthermore, Lattimore et al. (2020) show that a degraded $O(\varepsilon\sqrt{d})$-optimal solution can be learned within $\operatorname{poly}(d/\varepsilon)$ queries. Yet it is unknown whether a structural assumption on the ground-truth parameter, such as sparsity, could break the $\varepsilon\sqrt{d}$ barrier. In this paper, we address this question by showing that algorithms can obtain $O(\varepsilon)$-optimal actions by querying $\tilde{O}(\exp(m\varepsilon))$ actions, where $m$ is the sparsity parameter, removing the $\exp(d)$-dependence. We further show (with an information-theoretical lower bound) that this is the best possible if one demands an error $m^{\delta}\varepsilon$ for $0<\delta<1$. We further show that $\operatorname{poly}(m/\varepsilon)$ bounds are possible when the linear features are “good”. These results provide a nearly complete picture of how sparsity can help in misspecified bandit learning and provide a deeper understanding of when linear features are “useful” for bandit and reinforcement learning with misspecification. |
https://proceedings.mlr.press/v202/dong23h.html | https://proceedings.mlr.press/v202/dong23h/dong23h.pdf | https://openreview.net/forum?id=jeHP6aBCBu | Symmetry-Aware Robot Design with Structured Subgroups | https://proceedings.mlr.press/v202/dong23h.html | Heng Dong, Junyu Zhang, Tonghan Wang, Chongjie Zhang | https://proceedings.mlr.press/v202/dong23h.html | ICML 2023 | Robot design aims at learning to create robots that can be easily controlled and perform tasks efficiently. Previous works on robot design have proven their ability to generate robots for various tasks. However, these works searched for robots directly in the vast design space and ignored common structures, resulting in abnormal robots and poor performance. To tackle this problem, we propose a Symmetry-Aware Robot Design (SARD) framework that exploits the structure of the design space by incorporating symmetry searching into the robot design process. Specifically, we represent symmetries with the subgroups of the dihedral group and search for the optimal symmetry in structured subgroups. Then robots are designed under the searched symmetry. In this way, SARD can design efficient symmetric robots while covering the original design space, which is theoretically analyzed. We further empirically evaluate SARD on various tasks, and the results show its superior efficiency and generalizability. |
https://proceedings.mlr.press/v202/dorfman23a.html | https://proceedings.mlr.press/v202/dorfman23a/dorfman23a.pdf | https://openreview.net/forum?id=VxKr51JjWC | DoCoFL: Downlink Compression for Cross-Device Federated Learning | https://proceedings.mlr.press/v202/dorfman23a.html | Ron Dorfman, Shay Vargaftik, Yaniv Ben-Itzhak, Kfir Yehuda Levy | https://proceedings.mlr.press/v202/dorfman23a.html | ICML 2023 | Many compression techniques have been proposed to reduce the communication overhead of Federated Learning training procedures. However, these are typically designed for compressing model updates, which are expected to decay throughout training. As a result, such methods are inapplicable to downlink (i.e., from the parameter server to clients) compression in the cross-device setting, where heterogeneous clients may appear only once during training and thus must download the model parameters. Accordingly, we propose DoCoFL – a new framework for downlink compression in the cross-device setting. Importantly, DoCoFL can be seamlessly combined with many uplink compression schemes, rendering it suitable for bi-directional compression. Through extensive evaluation, we show that DoCoFL offers significant bi-directional bandwidth reduction while achieving competitive accuracy to that of a baseline without any compression. |
https://proceedings.mlr.press/v202/dorrell23a.html | https://proceedings.mlr.press/v202/dorrell23a/dorrell23a.pdf | https://openreview.net/forum?id=757L5dtuah | Meta-Learning the Inductive Bias of Simple Neural Circuits | https://proceedings.mlr.press/v202/dorrell23a.html | Will Dorrell, Maria Yuffa, Peter E. Latham | https://proceedings.mlr.press/v202/dorrell23a.html | ICML 2023 | Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam’s razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals’ neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating our intended use: understanding the role of circuit features through the resulting inductive bias. |
https://proceedings.mlr.press/v202/doshi23a.html | https://proceedings.mlr.press/v202/doshi23a/doshi23a.pdf | https://openreview.net/forum?id=450iImFM4U | Self-Repellent Random Walks on General Graphs - Achieving Minimal Sampling Variance via Nonlinear Markov Chains | https://proceedings.mlr.press/v202/doshi23a.html | Vishwaraj Doshi, Jie Hu, Do Young Eun | https://proceedings.mlr.press/v202/doshi23a.html | ICML 2023 | We consider random walks on discrete state spaces, such as general undirected graphs, where the random walkers are designed to approximate a target quantity over the network topology via sampling and neighborhood exploration in the form of Markov chain Monte Carlo (MCMC) procedures. Given any Markov chain corresponding to a target probability distribution, we design a self-repellent random walk (SRRW) which is less likely to transition to nodes that were highly visited in the past, and more likely to transition to seldom visited nodes. For a class of SRRWs parameterized by a positive real $\alpha$, we prove that the empirical distribution of the process converges almost surely to the target (stationary) distribution of the underlying Markov chain kernel. We then provide a central limit theorem and derive the exact form of the arising asymptotic covariance matrix, which allows us to show that the SRRW with a stronger repellence (larger $\alpha$) always achieves a smaller asymptotic covariance, in the sense of Loewner ordering of covariance matrices. Especially for SRRW-driven MCMC algorithms, we show that the decrease in the asymptotic sampling variance is of the order $O(1/\alpha)$, eventually going down to zero. Finally, we provide numerical simulations complementary to our theoretical results, also empirically testing a version of SRRW with $\alpha$ increasing in time to combine the benefits of smaller asymptotic variance due to large $\alpha$, with empirically observed faster mixing properties of SRRW with smaller $\alpha$. |
https://proceedings.mlr.press/v202/dowling23a.html | https://proceedings.mlr.press/v202/dowling23a/dowling23a.pdf | https://openreview.net/forum?id=1hWB5XEUMa | Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains | https://proceedings.mlr.press/v202/dowling23a.html | Matthew Dowling, Yuan Zhao, Il Memming Park | https://proceedings.mlr.press/v202/dowling23a.html | ICML 2023 | Latent Gaussian process (GP) models are widely used in neuroscience to uncover hidden state evolutions from sequential observations, mainly in neural activity recordings. While latent GP models provide a principled and powerful solution in theory, the intractable posterior in non-conjugate settings necessitates approximate inference schemes, which may lack scalability. In this work, we propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate computation variational inference (CVI). With cvHM, we are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods. The reparameterization of stationary kernels using Hida-Matérn GPs helps us connect the latent variable models that encode prior assumptions through dynamical systems to those that encode trajectory assumptions through GPs. In contrast to previous work, we use bidirectional information filtering, leading to a more concise implementation. Furthermore, we employ the Whittle approximate likelihood to achieve highly efficient hyperparameter learning. |
https://proceedings.mlr.press/v202/draxler23a.html | https://proceedings.mlr.press/v202/draxler23a/draxler23a.pdf | https://openreview.net/forum?id=xqYFvRanEW | On the Convergence Rate of Gaussianization with Random Rotations | https://proceedings.mlr.press/v202/draxler23a.html | Felix Draxler, Lars Kühmichel, Armand Rousselot, Jens Müller, Christoph Schnoerr, Ullrich Koethe | https://proceedings.mlr.press/v202/draxler23a.html | ICML 2023 | Gaussianization is a simple generative model that can be trained without backpropagation. It has shown compelling performance on low dimensional data. As the dimension increases, however, it has been observed that the convergence speed slows down. We show analytically that the number of required layers scales linearly with the dimension for Gaussian input. We argue that this is because the model is unable to capture dependencies between dimensions. Empirically, we find the same linear increase in cost for arbitrary input $p(x)$, but observe favorable scaling for some distributions. We explore potential speed-ups and formulate challenges for further research. |
https://proceedings.mlr.press/v202/driess23a.html | https://proceedings.mlr.press/v202/driess23a/driess23a.pdf | https://openreview.net/forum?id=VTpHpqM3Cf | PaLM-E: An Embodied Multimodal Language Model | https://proceedings.mlr.press/v202/driess23a.html | Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence | https://proceedings.mlr.press/v202/driess23a.html | ICML 2023 | Large language models excel at a wide range of complex tasks. However, enabling general inference in the real world, e.g. for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Input to our embodied language model are multimodal sentences that interleave visual, continuous state estimation, and textual input encodings. We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks including sequential robotic manipulation planning, visual question answering, and captioning. Our evaluations show that PaLM-E, a single large embodied multimodal model, can address a variety of embodied reasoning tasks, from a variety of observation modalities, on multiple embodiments, and further, exhibits positive transfer: the model benefits from diverse joint training across internet-scale language, vision, and visual-language domains. Our largest model with 562B parameters, in addition to being trained on robotics tasks, is a visual-language generalist with state-of-the-art performance on OK-VQA, and retains generalist language capabilities with increasing scale. |
https://proceedings.mlr.press/v202/du23a.html | https://proceedings.mlr.press/v202/du23a/du23a.pdf | https://openreview.net/forum?id=lAXwXjSYum | Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC | https://proceedings.mlr.press/v202/du23a.html | Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Sussman Grathwohl | https://proceedings.mlr.press/v202/du23a.html | ICML 2023 | Since their introduction, diffusion models have quickly become the prevailing approach to generative modeling in many domains. They can be interpreted as learning the gradients of a time-varying sequence of log-probability density functions. This interpretation has motivated classifier-based and classifier-free guidance as methods for post-hoc control of diffusion models. In this work, we build upon these ideas using the score-based interpretation of diffusion models, and explore alternative ways to condition, modify, and reuse diffusion models for tasks involving compositional generation and guidance. In particular, we investigate why certain types of composition fail using current techniques and present a number of solutions. We conclude that the sampler (not the model) is responsible for this failure and propose new samplers, inspired by MCMC, which enable successful compositional generation. Further, we propose an energy-based parameterization of diffusion models which enables the use of new compositional operators and more sophisticated, Metropolis-corrected samplers. Intriguingly we find these samplers lead to notable improvements in compositional generation across a wide variety of problems such as classifier-guided ImageNet modeling and compositional text-to-image generation. |
https://proceedings.mlr.press/v202/du23b.html | https://proceedings.mlr.press/v202/du23b/du23b.pdf | https://openreview.net/forum?id=EqAsFB28T0 | Multi-task Representation Learning for Pure Exploration in Linear Bandits | https://proceedings.mlr.press/v202/du23b.html | Yihan Du, Longbo Huang, Wen Sun | https://proceedings.mlr.press/v202/du23b.html | ICML 2023 | Despite the recent success of representation learning in sequential decision making, the study of the pure exploration scenario (i.e., identify the best option and minimize the sample complexity) is still limited. In this paper, we study multi-task representation learning for best arm identification in linear bandit (RepBAI-LB) and best policy identification in contextual linear bandit (RepBPI-CLB), two popular pure exploration settings with wide applications, e.g., clinical trials and web content optimization. In these two problems, all tasks share a common low-dimensional linear representation, and our goal is to leverage this feature to accelerate the best arm (policy) identification process for all tasks. For these problems, we design computationally and sample efficient algorithms DouExpDes and C-DouExpDes, which perform double experimental designs to plan optimal sample allocations for learning the global representation. We show that by learning the common representation among tasks, our sample complexity is significantly better than that of the naive approach which solves tasks independently. To the best of our knowledge, this is the first work to demonstrate the benefits of representation learning for multi-task pure exploration. |
https://proceedings.mlr.press/v202/du23c.html | https://proceedings.mlr.press/v202/du23c/du23c.pdf | https://openreview.net/forum?id=cOngPVufCF | Nonparametric Generative Modeling with Conditional Sliced-Wasserstein Flows | https://proceedings.mlr.press/v202/du23c.html | Chao Du, Tianbo Li, Tianyu Pang, Shuicheng Yan, Min Lin | https://proceedings.mlr.press/v202/du23c.html | ICML 2023 | Sliced-Wasserstein Flow (SWF) is a promising approach to nonparametric generative modeling but has not been widely adopted due to its suboptimal generative quality and lack of conditional modeling capabilities. In this work, we make two major contributions to bridging this gap. First, based on a pleasant observation that (under certain conditions) the SWF of joint distributions coincides with those of conditional distributions, we propose Conditional Sliced-Wasserstein Flow (CSWF), a simple yet effective extension of SWF that enables nonparametric conditional modeling. Second, we introduce appropriate inductive biases of images into SWF with two techniques inspired by local connectivity and multiscale representation in vision research, which greatly improve the efficiency and quality of modeling images. With all the improvements, we achieve generative performance comparable with many deep parametric generative models on both conditional and unconditional tasks in a purely nonparametric fashion, demonstrating its great potential. |
https://proceedings.mlr.press/v202/du23d.html | https://proceedings.mlr.press/v202/du23d/du23d.pdf | https://openreview.net/forum?id=s7me1XxUqd | Subsample Ridge Ensembles: Equivalences and Generalized Cross-Validation | https://proceedings.mlr.press/v202/du23d.html | Jin-Hong Du, Pratik Patil, Arun K. Kuchibhotla | https://proceedings.mlr.press/v202/du23d.html | ICML 2023 | We study subsampling-based ridge ensembles in the proportional asymptotics regime, where the feature size grows proportionally with the sample size such that their ratio converges to a constant. By analyzing the squared prediction risk of ridge ensembles as a function of the explicit penalty $\lambda$ and the limiting subsample aspect ratio $\phi_s$ (the ratio of the feature size to the subsample size), we characterize contours in the $(\lambda, \phi_s)$-plane at any achievable risk. As a consequence, we prove that the risk of the optimal full ridgeless ensemble (fitted on all possible subsamples) matches that of the optimal ridge predictor. In addition, we prove strong uniform consistency of generalized cross-validation (GCV) over the subsample sizes for estimating the prediction risk of ridge ensembles. This allows for GCV-based tuning of full ridgeless ensembles without sample splitting and yields a predictor whose risk matches optimal ridge risk. |
https://proceedings.mlr.press/v202/du23e.html | https://proceedings.mlr.press/v202/du23e/du23e.pdf | https://openreview.net/forum?id=HNbwCYFOZM | On Uni-Modal Feature Learning in Supervised Multi-Modal Learning | https://proceedings.mlr.press/v202/du23e.html | Chenzhuang Du, Jiaye Teng, Tingle Li, Yichen Liu, Tianyuan Yuan, Yue Wang, Yang Yuan, Hang Zhao | https://proceedings.mlr.press/v202/du23e.html | ICML 2023 | We abstract the features (i.e. learned representations) of multi-modal data into 1) uni-modal features, which can be learned from uni-modal training, and 2) paired features, which can only be learned from cross-modal interactions. Multi-modal models are expected to benefit from cross-modal interactions on the basis of ensuring uni-modal feature learning. However, recent supervised multi-modal late-fusion training approaches still suffer from insufficient learning of uni-modal features on each modality. We prove that this phenomenon does hurt the model’s generalization ability. To this end, we propose to choose a targeted late-fusion learning method for the given supervised multi-modal task from Uni-Modal Ensemble (UME) and the proposed Uni-Modal Teacher (UMT), according to the distribution of uni-modal and paired features. We demonstrate that, under a simple guiding strategy, we can achieve comparable results to other complex late-fusion or intermediate-fusion methods on various multi-modal datasets, including VGG-Sound, Kinetics-400, UCF101, and ModelNet40. |
https://proceedings.mlr.press/v202/du23f.html | https://proceedings.mlr.press/v202/du23f/du23f.pdf | https://openreview.net/forum?id=63704LH4v5 | Guiding Pretraining in Reinforcement Learning with Large Language Models | https://proceedings.mlr.press/v202/du23f.html | Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, Jacob Andreas | https://proceedings.mlr.press/v202/du23f.html | ICML 2023 | Reinforcement learning algorithms typically struggle in the absence of a dense, well-shaped reward function. Intrinsically motivated exploration methods address this limitation by rewarding agents for visiting novel states or transitions, but these methods offer limited benefits in large environments where most discovered novelty is irrelevant for downstream tasks. We describe a method that uses background knowledge from text corpora to shape exploration. This method, called ELLM (Exploring with LLMs) rewards an agent for achieving goals suggested by a language model prompted with a description of the agent’s current state. By leveraging large-scale language model pretraining, ELLM guides agents toward human-meaningful and plausibly useful behaviors without requiring a human in the loop. We evaluate ELLM in the Crafter game environment and the Housekeep robotic simulator, showing that ELLM-trained agents have better coverage of common-sense behaviors during pretraining and usually match or improve performance on a range of downstream tasks. |
https://proceedings.mlr.press/v202/du23g.html | https://proceedings.mlr.press/v202/du23g/du23g.pdf | https://openreview.net/forum?id=NwICIHHpKf | A Flexible Diffusion Model | https://proceedings.mlr.press/v202/du23g.html | Weitao Du, He Zhang, Tao Yang, Yuanqi Du | https://proceedings.mlr.press/v202/du23g.html | ICML 2023 | Denoising diffusion (score-based) generative models have become a popular choice for modeling complex data. Recently, a deep connection between forward-backward stochastic differential equations (SDEs) and diffusion-based models has been established, leading to the development of new SDE variants such as sub-VP and critically-damped Langevin. Despite the empirical success of some hand-crafted forward SDEs, many potentially promising forward SDEs remain unexplored. In this work, we propose a general framework for parameterizing diffusion models, particularly the spatial part of forward SDEs, by leveraging the symplectic and Riemannian geometry of the data manifold. We introduce a systematic formalism with theoretical guarantees and connect it with previous diffusion models. Finally, we demonstrate the theoretical advantages of our method from a variational optimization perspective. We present numerical experiments on synthetic datasets, MNIST and CIFAR10 to validate the effectiveness of our framework. |
https://proceedings.mlr.press/v202/duan23a.html | https://proceedings.mlr.press/v202/duan23a/duan23a.pdf | https://openreview.net/forum?id=RGiPlFCQeK | Fast Excess Risk Rates via Offset Rademacher Complexity | https://proceedings.mlr.press/v202/duan23a.html | Chenguang Duan, Yuling Jiao, Lican Kang, Xiliang Lu, Jerry Zhijian Yang | https://proceedings.mlr.press/v202/duan23a.html | ICML 2023 | Based on the offset Rademacher complexity, this work outlines a systematic framework for deriving sharp excess risk bounds in statistical learning without the Bernstein condition. In addition to recovering fast rates in a unified way for some parametric and nonparametric supervised learning models with minimum identifiability assumptions, we also obtain new and improved results for LAD (sparse) linear regression and deep logistic regression with deep ReLU neural networks, respectively. |
https://proceedings.mlr.press/v202/duan23b.html | https://proceedings.mlr.press/v202/duan23b/duan23b.pdf | https://openreview.net/forum?id=ZMDv1Mo89E | Are Diffusion Models Vulnerable to Membership Inference Attacks? | https://proceedings.mlr.press/v202/duan23b.html | Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu | https://proceedings.mlr.press/v202/duan23b.html | ICML 2023 | Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between synthetic samples and member samples). To address this gap, we propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep. SecMI follows the common overfitting assumption in MIA where member samples normally have smaller estimation errors compared with hold-out samples. We consider both the standard diffusion models, e.g., DDPM, and the text-to-image diffusion models, e.g., Latent Diffusion Models and Stable Diffusion. Experimental results demonstrate that our methods precisely infer membership with high confidence in both scenarios across multiple datasets. Code is available at https://github.com/jinhaoduan/SecMI. |
https://proceedings.mlr.press/v202/duan23c.html | https://proceedings.mlr.press/v202/duan23c/duan23c.pdf | https://openreview.net/forum?id=UTKOQSp41y | Bayesian Progressive Deep Topic Model with Knowledge Informed Textual Data Coarsening Process | https://proceedings.mlr.press/v202/duan23c.html | Zhibin Duan, Xinyang Liu, Yudi Su, Yishi Xu, Bo Chen, Mingyuan Zhou | https://proceedings.mlr.press/v202/duan23c.html | ICML 2023 | Deep topic models have shown an impressive ability to extract multi-layer document latent representations and discover hierarchical, semantically meaningful topics. However, most deep topic models are limited to the single-step generative process, despite the fact that the progressive generative process has achieved impressive performance in modeling image data. To this end, in this paper, we propose a novel progressive deep topic model that consists of a knowledge-informed textual data coarsening process and a corresponding progressive generative model. The former is used to build multi-level observations ranging from concrete to abstract, while the latter is used to generate more concrete observations gradually. Additionally, we incorporate a graph-enhanced decoder to capture the semantic relationships among words at different levels of observation. Furthermore, we perform a theoretical analysis of the proposed model based on the principle of information theory and show how it can alleviate the well-known "latent variable collapse" problem. Finally, extensive experiments demonstrate that our proposed model effectively improves the ability of deep topic models, resulting in higher-quality latent document representations and topics. |
https://proceedings.mlr.press/v202/duan23d.html | https://proceedings.mlr.press/v202/duan23d/duan23d.pdf | https://openreview.net/forum?id=MLrquPs6OI | Are Equivariant Equilibrium Approximators Beneficial? | https://proceedings.mlr.press/v202/duan23d.html | Zhijian Duan, Yunxuan Ma, Xiaotie Deng | https://proceedings.mlr.press/v202/duan23d.html | ICML 2023 | Recently, remarkable progress has been made by approximating Nash equilibrium (NE), correlated equilibrium (CE), and coarse correlated equilibrium (CCE) through function approximation that trains a neural network to predict equilibria from game representations. Furthermore, equivariant architectures are widely adopted in designing such equilibrium approximators in normal-form games. In this paper, we theoretically characterize the benefits and limitations of equivariant equilibrium approximators. For the benefits, we show that they enjoy better generalizability than general ones and can achieve better approximations when the payoff distribution is permutation-invariant. For the limitations, we discuss their drawbacks in terms of equilibrium selection and social welfare. Together, our results help to understand the role of equivariance in equilibrium approximators. |
https://proceedings.mlr.press/v202/dubois23a.html | https://proceedings.mlr.press/v202/dubois23a/dubois23a.pdf | https://openreview.net/forum?id=dEjB1SLDnt | Evaluating Self-Supervised Learning via Risk Decomposition | https://proceedings.mlr.press/v202/dubois23a.html | Yann Dubois, Tatsunori Hashimoto, Percy Liang | https://proceedings.mlr.press/v202/dubois23a.html | ICML 2023 | Self-supervised learning (SSL) is typically evaluated using a single metric (linear probing on ImageNet), which neither provides insight into tradeoffs between models nor highlights how to improve them. To address this, we propose an SSL risk decomposition, which generalizes the classical approximation-estimation decomposition. Our decomposition consists of four error terms: approximation, representation usability, probe generalization, and encoder generalization. We provide efficient estimators for each term and use them to analyze the effect of 30 design choices on 169 SSL vision models evaluated on ImageNet. Our analysis gives valuable insights for designing and using SSL models. For example, it highlights the main source of errors and shows how to improve SSL in specific settings (full- vs few-shot) by trading off error components. |
https://proceedings.mlr.press/v202/duetting23a.html | https://proceedings.mlr.press/v202/duetting23a/duetting23a.pdf | https://openreview.net/forum?id=Bj76bauv1Q | Fully Dynamic Submodular Maximization over Matroids | https://proceedings.mlr.press/v202/duetting23a.html | Paul Duetting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, Morteza Zadimoghaddam | https://proceedings.mlr.press/v202/duetting23a.html | ICML 2023 | Maximizing monotone submodular functions under a matroid constraint is a classic algorithmic problem with multiple applications in data mining and machine learning. We study this classic problem in the fully dynamic setting, where elements can be both inserted and deleted in real-time. Our main result is a randomized algorithm that maintains an efficient data structure with an $\tilde{O}(k^2)$ amortized update time (in the number of additions and deletions) and yields a $4$-approximate solution, where $k$ is the rank of the matroid. |
https://proceedings.mlr.press/v202/duetting23b.html | https://proceedings.mlr.press/v202/duetting23b/duetting23b.pdf | https://openreview.net/forum?id=yv8GUQREda | Optimal No-Regret Learning for One-Sided Lipschitz Functions | https://proceedings.mlr.press/v202/duetting23b.html | Paul Duetting, Guru Guruganesh, Jon Schneider, Joshua Ruizhi Wang | https://proceedings.mlr.press/v202/duetting23b.html | ICML 2023 | Inspired by applications in pricing and contract design, we study the maximization of one-sided Lipschitz functions, which only provide the (weaker) guarantee that they do not grow too quickly in one direction. We show that it is possible to learn a maximizer for such a function while incurring $O(\log \log T)$ total regret (with a universal constant independent of the number of discontinuities / complexity of the function). This regret bound is asymptotically optimal in $T$ due to a lower bound of Kleinberg and Leighton. By applying this algorithm, we show that one can sell digital goods to multiple buyers and learn the optimal linear contract in the principal-agent setting while incurring at most $O(\log \log T)$ regret. |
https://proceedings.mlr.press/v202/dufumier23a.html | https://proceedings.mlr.press/v202/dufumier23a/dufumier23a.pdf | https://openreview.net/forum?id=Y8lH4BemX8 | Integrating Prior Knowledge in Contrastive Learning with Kernel | https://proceedings.mlr.press/v202/dufumier23a.html | Benoit Dufumier, Carlo Alberto Barbano, Robin Louiset, Edouard Duchesnay, Pietro Gori | https://proceedings.mlr.press/v202/dufumier23a.html | ICML 2023 | Data augmentation is a crucial component in unsupervised contrastive learning (CL). It determines how positive samples are defined and, ultimately, the quality of the learned representation. In this work, we open the door to new perspectives for CL by integrating prior knowledge, given either by generative models - viewed as prior representations - or weak attributes in the positive and negative sampling. To this end, we use kernel theory to propose a novel loss, called decoupled uniformity, that i) allows the integration of prior knowledge and ii) removes the positive-negative coupling in the original InfoNCE loss. We draw a connection between contrastive learning and the conditional mean embedding theory to derive tight bounds on the downstream classification loss. In an unsupervised setting, we empirically demonstrate that CL benefits from generative models to improve its representation both on natural and medical images. In a weakly supervised scenario, our framework outperforms other unconditional and conditional CL approaches. |
https://proceedings.mlr.press/v202/dugan23a.html | https://proceedings.mlr.press/v202/dugan23a/dugan23a.pdf | https://openreview.net/forum?id=6rqa493Sxf | Q-Flow: Generative Modeling for Differential Equations of Open Quantum Dynamics with Normalizing Flows | https://proceedings.mlr.press/v202/dugan23a.html | Owen M Dugan, Peter Y. Lu, Rumen Dangovski, Di Luo, Marin Soljacic | https://proceedings.mlr.press/v202/dugan23a.html | ICML 2023 | Studying the dynamics of open quantum systems can enable breakthroughs both in fundamental physics and applications to quantum engineering and quantum computation. Since the density matrix $\rho$, which is the fundamental description for the dynamics of such systems, is high-dimensional, customized deep generative neural networks have been instrumental in modeling $\rho$. However, the complex-valued nature and normalization constraints of $\rho$, as well as its complicated dynamics, prohibit a seamless connection between open quantum systems and the recent advances in deep generative modeling. Here we lift that limitation by utilizing a reformulation of open quantum system dynamics to a partial differential equation (PDE) for a corresponding probability distribution $Q$, the Husimi Q function. Thus, we model the Q function seamlessly with off-the-shelf deep generative models such as normalizing flows. Additionally, we develop novel methods for learning normalizing flow evolution governed by high-dimensional PDEs based on the Euler method and the application of the time-dependent variational principle. We name the resulting approach Q-Flow and demonstrate the scalability and efficiency of Q-Flow on open quantum system simulations, including the dissipative harmonic oscillator and the dissipative bosonic model. Q-Flow is superior to conventional PDE solvers and state-of-the-art physics-informed neural network solvers, especially in high-dimensional systems. |
https://proceedings.mlr.press/v202/duong23a.html | https://proceedings.mlr.press/v202/duong23a/duong23a.pdf | https://openreview.net/forum?id=cEWB5hABV5 | Adaptive Whitening in Neural Populations with Gain-modulating Interneurons | https://proceedings.mlr.press/v202/duong23a.html | Lyndon Duong, David Lipshutz, David Heeger, Dmitri Chklovskii, Eero P Simoncelli | https://proceedings.mlr.press/v202/duong23a.html | ICML 2023 | Statistical whitening transformations play a fundamental role in many computational systems, and may also play an important role in biological sensory systems. Existing neural circuit models of adaptive whitening operate by modifying synaptic interactions; however, such modifications would seem both too slow and insufficiently reversible. Motivated by the extensive neuroscience literature on gain modulation, we propose an alternative model that adaptively whitens its responses by modulating the gains of individual neurons. Starting from a novel whitening objective, we derive an online algorithm that whitens its outputs by adjusting the marginal variances of an overcomplete set of projections. We map the algorithm onto a recurrent neural network with fixed synaptic weights and gain-modulating interneurons. We demonstrate numerically that sign-constraining the gains improves robustness of the network to ill-conditioned inputs, and a generalization of the circuit achieves a form of local whitening in convolutional populations, such as those found throughout the visual or auditory systems. |
https://proceedings.mlr.press/v202/dupuis23a.html | https://proceedings.mlr.press/v202/dupuis23a/dupuis23a.pdf | https://openreview.net/forum?id=uR9GFnJ4IL | Generalization Bounds using Data-Dependent Fractal Dimensions | https://proceedings.mlr.press/v202/dupuis23a.html | Benjamin Dupuis, George Deligiannidis, Umut Simsekli | https://proceedings.mlr.press/v202/dupuis23a.html | ICML 2023 | Providing generalization guarantees for modern neural networks has been a crucial task in statistical learning. Recently, several studies have attempted to analyze the generalization error in such settings by using tools from fractal geometry. While these works have successfully introduced new mathematical tools to apprehend generalization, they heavily rely on a Lipschitz continuity assumption, which in general does not hold for neural networks and might make the bounds vacuous. In this work, we address this issue and prove fractal geometry-based generalization bounds without requiring any Lipschitz assumption. To achieve this goal, we build up on a classical covering argument in learning theory and introduce a data-dependent fractal dimension. Despite introducing a significant amount of technical complications, this new notion lets us control the generalization error (over either fixed or random hypothesis spaces) along with certain mutual information (MI) terms. To provide a clearer interpretation to the newly introduced MI terms, as a next step, we introduce a notion of ‘geometric stability’ and link our bounds to the prior art. Finally, we make a rigorous connection between the proposed data-dependent dimension and topological data analysis tools, which then enables us to compute the dimension in a numerically efficient way. We support our theory with experiments conducted on various settings. |
https://proceedings.mlr.press/v202/dushatskiy23a.html | https://proceedings.mlr.press/v202/dushatskiy23a/dushatskiy23a.pdf | https://openreview.net/forum?id=79sTydcg7o | Multi-Objective Population Based Training | https://proceedings.mlr.press/v202/dushatskiy23a.html | Arkadiy Dushatskiy, Alexander Chebykin, Tanja Alderliesten, Peter Bosman | https://proceedings.mlr.press/v202/dushatskiy23a.html | ICML 2023 | Population Based Training (PBT) is an efficient hyperparameter optimization algorithm. PBT is a single-objective algorithm, but many real-world hyperparameter optimization problems involve two or more conflicting objectives. In this work, we therefore introduce a multi-objective version of PBT, MO-PBT. Our experiments on diverse multi-objective hyperparameter optimization problems (Precision/Recall, Accuracy/Fairness, Accuracy/Adversarial Robustness) show that MO-PBT outperforms random search, single-objective PBT, and the state-of-the-art multi-objective hyperparameter optimization algorithm MO-ASHA. |
https://proceedings.mlr.press/v202/dutordoir23a.html | https://proceedings.mlr.press/v202/dutordoir23a/dutordoir23a.pdf | https://openreview.net/forum?id=tV7GSY5GYG | Neural Diffusion Processes | https://proceedings.mlr.press/v202/dutordoir23a.html | Vincent Dutordoir, Alan Saul, Zoubin Ghahramani, Fergus Simpson | https://proceedings.mlr.press/v202/dutordoir23a.html | ICML 2023 | Neural network approaches for meta-learning distributions over functions have desirable properties such as increased flexibility and a reduced complexity of inference. Building on the successes of denoising diffusion models for generative modelling, we propose Neural Diffusion Processes (NDPs), a novel approach that learns to sample from a rich distribution over functions through its finite marginals. By introducing a custom attention block we are able to incorporate properties of stochastic processes, such as exchangeability, directly into the NDP’s architecture. We empirically show that NDPs can capture functional distributions close to the true Bayesian posterior, demonstrating that they can successfully emulate the behaviour of Gaussian processes and surpass the performance of neural processes. NDPs enable a variety of downstream tasks, including regression, implicit hyperparameter marginalisation, non-Gaussian posterior prediction and global optimisation. |
https://proceedings.mlr.press/v202/duval23a.html | https://proceedings.mlr.press/v202/duval23a/duval23a.pdf | https://openreview.net/forum?id=HRDRZNxQXc | FAENet: Frame Averaging Equivariant GNN for Materials Modeling | https://proceedings.mlr.press/v202/duval23a.html | Alexandre Agm Duval, Victor Schmidt, Alex Hernández-Garcı́a, Santiago Miret, Fragkiskos D. Malliaros, Yoshua Bengio, David Rolnick | https://proceedings.mlr.press/v202/duval23a.html | ICML 2023 | Applications of machine learning techniques for materials modeling typically involve functions that are known to be equivariant or invariant to specific symmetries. While graph neural networks (GNNs) have proven successful in such applications, conventional GNN approaches that enforce symmetries via the model architecture often reduce expressivity, scalability or comprehensibility. In this paper, we introduce (1) a flexible, model-agnostic framework based on stochastic frame averaging that enforces E(3) equivariance or invariance, without any architectural constraints; (2) FAENet: a simple, fast and expressive GNN that leverages stochastic frame averaging to process geometric information without constraints. We prove the validity of our method theoretically and demonstrate its superior accuracy and computational scalability in materials modeling on the OC20 dataset (S2EF, IS2RE) as well as common molecular modeling tasks (QM9, QM7-X). |
https://proceedings.mlr.press/v202/santos23a.html | https://proceedings.mlr.press/v202/santos23a/santos23a.pdf | https://openreview.net/forum?id=lpc5vlfxp8 | Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces | https://proceedings.mlr.press/v202/santos23a.html | Javier E. Santos, Zachary R. Fox, Nicholas Lubbers, Yen Ting Lin | https://proceedings.mlr.press/v202/santos23a.html | ICML 2023 | Typical generative diffusion models rely on a Gaussian diffusion process for training the backward transformations, which can then be used to generate samples from Gaussian noise. However, real world data often takes place in discrete-state spaces, including many scientific applications. Here, we develop a theoretical formulation for arbitrary discrete-state Markov processes in the forward diffusion process using exact (as opposed to variational) analysis. We relate the theory to the existing continuous-state Gaussian diffusion as well as other approaches to discrete diffusion, and identify the corresponding reverse-time stochastic process and score function in the continuous-time setting, and the reverse-time mapping in the discrete-time setting. As an example of this framework, we introduce “Blackout Diffusion”, which learns to produce samples from an empty image instead of from noise. Numerical experiments on the CIFAR-10, Binarized MNIST, and CelebA datasets confirm the feasibility of our approach. Generalizing from specific (Gaussian) forward processes to discrete-state processes without a variational approximation sheds light on how to interpret diffusion models, which we discuss. |
https://proceedings.mlr.press/v202/eiben23a.html | https://proceedings.mlr.press/v202/eiben23a/eiben23a.pdf | https://openreview.net/forum?id=1Ntj4pRk7E | The Computational Complexity of Concise Hypersphere Classification | https://proceedings.mlr.press/v202/eiben23a.html | Eduard Eiben, Robert Ganian, Iyad A. Kanj, Sebastian Ordyniak, Stefan Szeider | https://proceedings.mlr.press/v202/eiben23a.html | ICML 2023 | Hypersphere classification is a classical and foundational method that can provide easy-to-process explanations for the classification of real-valued as well as binary data. However, obtaining an (ideally concise) explanation via hypersphere classification is much more difficult when dealing with binary data as opposed to real-valued data. In this paper, we perform the first complexity-theoretic study of the hypersphere classification problem for binary data. We use the fine-grained parameterized complexity paradigm to analyze the impact of structural properties that may be present in the input data as well as potential conciseness constraints. Our results include not only stronger lower bounds but also a number of new fixed-parameter algorithms for hypersphere classification of binary data, which can find an exact and concise explanation when one exists. |
https://proceedings.mlr.press/v202/eijkelboom23a.html | https://proceedings.mlr.press/v202/eijkelboom23a/eijkelboom23a.pdf | https://openreview.net/forum?id=hF65aKF8Bf | E$(n)$ Equivariant Message Passing Simplicial Networks | https://proceedings.mlr.press/v202/eijkelboom23a.html | Floor Eijkelboom, Rob Hesselink, Erik J Bekkers | https://proceedings.mlr.press/v202/eijkelboom23a.html | ICML 2023 | This paper presents $\mathrm{E}(n)$ Equivariant Message Passing Simplicial Networks (EMPSNs), a novel approach to learning on geometric graphs and point clouds that is equivariant to rotations, translations, and reflections. EMPSNs can learn high-dimensional simplex features in graphs (e.g. triangles), and use the increase of geometric information of higher-dimensional simplices in an $\mathrm{E}(n)$ equivariant fashion. EMPSNs simultaneously generalize $\mathrm{E}(n)$ Equivariant Graph Neural Networks to a topologically more elaborate counterpart and provide an approach for including geometric information in Message Passing Simplicial Networks, thereby serving as a proof of concept for combining geometric and topological information in graph learning. The results indicate that EMPSNs can leverage the benefits of both approaches, leading to a general increase in performance when compared to either method individually, being on par with state-of-the-art approaches for learning on geometric graphs. Moreover, the results suggest that incorporating geometric information serves as an effective measure against over-smoothing in message passing networks, especially when operating on high-dimensional simplicial structures. |
https://proceedings.mlr.press/v202/eilat23a.html | https://proceedings.mlr.press/v202/eilat23a/eilat23a.pdf | https://openreview.net/forum?id=k24luy3Azi | Performative Recommendation: Diversifying Content via Strategic Incentives | https://proceedings.mlr.press/v202/eilat23a.html | Itay Eilat, Nir Rosenfeld | https://proceedings.mlr.press/v202/eilat23a.html | ICML 2023 | The primary goal in recommendation is to suggest relevant content to users, but optimizing for accuracy often results in recommendations that lack diversity. To remedy this, conventional approaches such as re-ranking improve diversity by presenting more diverse items. Here we argue that to promote inherent and prolonged diversity, the system must encourage its creation. Towards this, we harness the performative nature of recommendation, and show how learning can incentivize strategic content creators to create diverse content. Our approach relies on a novel form of regularization that anticipates strategic changes to content, and penalizes for content homogeneity. We provide analytic and empirical results that demonstrate when and how diversity can be incentivized, and experimentally demonstrate the utility of our approach on synthetic and semi-synthetic data. |
https://proceedings.mlr.press/v202/eimer23a.html | https://proceedings.mlr.press/v202/eimer23a/eimer23a.pdf | https://openreview.net/forum?id=0Vm8Ghcxmp | Hyperparameters in Reinforcement Learning and How To Tune Them | https://proceedings.mlr.press/v202/eimer23a.html | Theresa Eimer, Marius Lindauer, Roberta Raileanu | https://proceedings.mlr.press/v202/eimer23a.html | ICML 2023 | In order to improve reproducibility, deep reinforcement learning (RL) has been adopting better scientific practices such as standardized evaluation metrics and reporting. However, the process of hyperparameter optimization still varies widely across papers, which makes it challenging to compare RL algorithms fairly. In this paper, we show that hyperparameter choices in RL can significantly affect the agent’s final performance and sample efficiency, and that the hyperparameter landscape can strongly depend on the tuning seed which may lead to overfitting. We therefore propose adopting established best practices from AutoML, such as the separation of tuning and testing seeds, as well as principled hyperparameter optimization (HPO) across a broad search space. We support this by comparing multiple state-of-the-art HPO tools on a range of RL algorithms and environments to their hand-tuned counterparts, demonstrating that HPO approaches often have higher performance and lower compute overhead. As a result of our findings, we recommend a set of best practices for the RL community, which should result in stronger empirical results with fewer computational costs, better reproducibility, and thus faster progress. In order to encourage the adoption of these practices, we provide plug-and-play implementations of the tuning algorithms used in this paper at https://github.com/facebookresearch/how-to-autorl. |
https://proceedings.mlr.press/v202/el-halabi23a.html | https://proceedings.mlr.press/v202/el-halabi23a/el-halabi23a.pdf | https://openreview.net/forum?id=KrsaROSs8b | Fairness in Streaming Submodular Maximization over a Matroid Constraint | https://proceedings.mlr.press/v202/el-halabi23a.html | Marwa El Halabi, Federico Fusco, Ashkan Norouzi-Fard, Jakab Tardos, Jakub Tarnawski | https://proceedings.mlr.press/v202/el-halabi23a.html | ICML 2023 | Streaming submodular maximization is a natural model for the task of selecting a representative subset from a large-scale dataset. If datapoints have sensitive attributes such as gender or race, it becomes important to enforce fairness to avoid bias and discrimination. This has spurred significant interest in developing fair machine learning algorithms. Recently, such algorithms have been developed for monotone submodular maximization under a cardinality constraint. In this paper, we study the natural generalization of this problem to a matroid constraint. We give streaming algorithms as well as impossibility results that provide trade-offs between efficiency, quality and fairness. We validate our findings empirically on a range of well-known real-world applications: exemplar-based clustering, movie recommendation, and maximum coverage in social networks. |
https://proceedings.mlr.press/v202/el-halabi23b.html | https://proceedings.mlr.press/v202/el-halabi23b/el-halabi23b.pdf | https://openreview.net/forum?id=e1lKKjkNMj | Difference of submodular minimization via DC programming | https://proceedings.mlr.press/v202/el-halabi23b.html | Marwa El Halabi, George Orfanides, Tim Hoheisel | https://proceedings.mlr.press/v202/el-halabi23b.html | ICML 2023 | Minimizing the difference of two submodular (DS) functions is a problem that naturally occurs in various machine learning problems. Although it is well known that a DS problem can be equivalently formulated as the minimization of the difference of two convex (DC) functions, existing algorithms do not fully exploit this connection. A classical algorithm for DC problems is called the DC algorithm (DCA). We introduce variants of DCA and its complete form (CDCA) that we apply to the DC program corresponding to DS minimization. We extend existing convergence properties of DCA, and connect them to convergence properties on the DS problem. Our results on DCA match the theoretical guarantees satisfied by existing DS algorithms, while providing a more complete characterization of convergence properties. In the case of CDCA, we obtain a stronger local minimality guarantee. Our numerical results show that our proposed algorithms outperform existing baselines on two applications: speech corpus selection and feature selection. |
https://proceedings.mlr.press/v202/eliasof23a.html | https://proceedings.mlr.press/v202/eliasof23a/eliasof23a.pdf | https://openreview.net/forum?id=1Nx2n1lk5T | Graph Positional Encoding via Random Feature Propagation | https://proceedings.mlr.press/v202/eliasof23a.html | Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron | https://proceedings.mlr.press/v202/eliasof23a.html | ICML 2023 | Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two augmentation schemes. Here we propose a novel family of positional encoding schemes which draws a link between the above two approaches and improves over both. The new approach, named Random Feature Propagation (RFP), is inspired by the power iteration method and its generalizations. It concatenates several intermediate steps of an iterative algorithm for computing the dominant eigenvectors of a propagation matrix, starting from random node features. Notably, these propagation steps are based on graph-dependent propagation operators that can be either predefined or learned. We explore the theoretical and empirical benefits of RFP. First, we provide theoretical justifications for using random features, for incorporating early propagation steps, and for using multiple random initializations. Then, we empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks. |
https://proceedings.mlr.press/v202/eliasof23b.html | https://proceedings.mlr.press/v202/eliasof23b/eliasof23b.pdf | https://openreview.net/forum?id=5jFy5MQvUj | Improving Graph Neural Networks with Learnable Propagation Operators | https://proceedings.mlr.press/v202/eliasof23b.html | Moshe Eliasof, Lars Ruthotto, Eran Treister | https://proceedings.mlr.press/v202/eliasof23b.html | ICML 2023 | Graph Neural Networks (GNNs) are limited in their propagation operators. In many cases, these operators often contain non-negative elements only and are shared across channels, limiting the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, limiting their depth. On the other hand, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge these gaps by incorporating trainable channel-wise weighting factors $\omega$ to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method is called $\omega$GNN, and is easy to implement. We study two variants: $\omega$GCN and $\omega$GAT. For $\omega$GCN, we theoretically analyse its behaviour and the impact of $\omega$ on the obtained node features. Our experiments confirm these findings, demonstrating and explaining how both variants do not over-smooth. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our $\omega$GCN and $\omega$GAT perform on par with state-of-the-art methods. |
https://proceedings.mlr.press/v202/elimelech23a.html | https://proceedings.mlr.press/v202/elimelech23a/elimelech23a.pdf | https://openreview.net/forum?id=0NsrPtxeou | Phase Transitions in the Detection of Correlated Databases | https://proceedings.mlr.press/v202/elimelech23a.html | Dor Elimelech, Wasim Huleihel | https://proceedings.mlr.press/v202/elimelech23a.html | ICML 2023 | We study the problem of detecting the correlation between two Gaussian databases $\mathsf{X}\in\mathbb{R}^{n\times d}$ and $\mathsf{Y}\in\mathbb{R}^{n\times d}$, each composed of $n$ users with $d$ features. This problem is relevant in the analysis of social media, computational biology, etc. We formulate this as a hypothesis testing problem: under the null hypothesis, these two databases are statistically independent. Under the alternative, however, there exists an unknown permutation $\sigma$ over the set of $n$ users (or, row permutation), such that $\mathsf{X}$ is $\rho$-correlated with $\mathsf{Y}^\sigma$, a permuted version of $\mathsf{Y}$. We determine sharp thresholds at which optimal testing exhibits a phase transition, depending on the asymptotic regime of $n$ and $d$. Specifically, we prove that if $\rho^2d\to0$, as $d\to\infty$, then weak detection (performing slightly better than random guessing) is statistically impossible, irrespective of the value of $n$. This complements the performance of a simple test that thresholds the sum of all entries of $\mathsf{X}^T\mathsf{Y}$. Furthermore, when $d$ is fixed, we prove that strong detection (vanishing error probability) is impossible for any $\rho<\rho^\star$, where $\rho^\star$ is an explicit function of $d$, while weak detection is again impossible as long as $\rho^2d=o(1)$, as $n\to\infty$. These results close significant gaps in recent related studies. |
https://proceedings.mlr.press/v202/elkin23a.html | https://proceedings.mlr.press/v202/elkin23a/elkin23a.pdf | https://openreview.net/forum?id=8O0oxJmj0N | A new near-linear time algorithm for k-nearest neighbor search using a compressed cover tree | https://proceedings.mlr.press/v202/elkin23a.html | Yury Elkin, Vitaliy Kurlin | https://proceedings.mlr.press/v202/elkin23a.html | ICML 2023 | Given a reference set R of n points and a query set Q of m points in a metric space, this paper studies the important problem of finding the k-nearest neighbors of every point q of Q in the set R in near-linear time. In a paper at ICML 2006, Beygelzimer, Kakade, and Langford introduced a cover tree and attempted to prove that this tree can be built in O(n log n) time while the nearest neighbor search can be done in O(n log m) time with a hidden dimensionality factor. In 2015, Section 5.3 of Curtin’s PhD thesis pointed out that the proof of the latter claim can have a serious gap in time complexity estimation. A paper at TopoInVis 2022 reported explicit counterexamples for a key step in the proofs of both claims. These past obstacles are overcome by a simpler compressed cover tree on the reference set R. The first new algorithm constructs a compressed cover tree in O(n log n) time. The second new algorithm finds all k-nearest neighbors of all points from Q using a compressed cover tree in time O(m(k+log n)log k) with a hidden dimensionality factor depending on the point distributions of the sets R, Q but not on their sizes. |
https://proceedings.mlr.press/v202/endo23a.html | https://proceedings.mlr.press/v202/endo23a/endo23a.pdf | https://openreview.net/forum?id=VTM0Bq7CWW | Motion Question Answering via Modular Motion Programs | https://proceedings.mlr.press/v202/endo23a.html | Mark Endo, Joy Hsu, Jiaman Li, Jiajun Wu | https://proceedings.mlr.press/v202/endo23a.html | ICML 2023 | In order to build artificial intelligence systems that can perceive and reason with human behavior in the real world, we must first design models that conduct complex spatio-temporal reasoning over motion sequences. Moving towards this goal, we propose the HumanMotionQA task to evaluate complex, multi-step reasoning abilities of models on long-form human motion sequences. We generate a dataset of question-answer pairs that require detecting motor cues in small portions of motion sequences, reasoning temporally about when events occur, and querying specific motion attributes. In addition, we propose NSPose, a neuro-symbolic method for this task that uses symbolic reasoning and a modular design to ground motion through learning motion concepts, attribute neural operators, and temporal relations. We demonstrate the suitability of NSPose for the HumanMotionQA task, outperforming all baseline methods. |
https://proceedings.mlr.press/v202/enguehard23a.html | https://proceedings.mlr.press/v202/enguehard23a/enguehard23a.pdf | https://openreview.net/forum?id=WpeZu6WzTB | Learning Perturbations to Explain Time Series Predictions | https://proceedings.mlr.press/v202/enguehard23a.html | Joseph Enguehard | https://proceedings.mlr.press/v202/enguehard23a.html | ICML 2023 | Explaining predictions based on multivariate time series data carries the additional difficulty of handling not only multiple features, but also time dependencies. It matters not only what happened, but also when, and the same feature could have a very different impact on a prediction depending on this time information. Previous work has used perturbation-based saliency methods to tackle this issue, perturbing an input using a trainable mask to discover which features at which times are driving the predictions. However these methods introduce fixed perturbations, inspired from similar methods on static data, while there seems to be little motivation to do so on temporal data. In this work, we aim to explain predictions by learning not only masks, but also associated perturbations. We empirically show that learning these perturbations significantly improves the quality of these explanations on time series data. |
https://proceedings.mlr.press/v202/erez23a.html | https://proceedings.mlr.press/v202/erez23a/erez23a.pdf | https://openreview.net/forum?id=ILMHlUn4k6 | Regret Minimization and Convergence to Equilibria in General-sum Markov Games | https://proceedings.mlr.press/v202/erez23a.html | Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, Yishay Mansour | https://proceedings.mlr.press/v202/erez23a.html | ICML 2023 | An abundance of recent impossibility results establish that regret minimization in Markov games with adversarial opponents is both statistically and computationally intractable. Nevertheless, none of these results preclude the possibility of regret minimization under the assumption that all parties adopt the same learning procedure. In this work, we present the first (to our knowledge) algorithm for learning in general-sum Markov games that provides sublinear regret guarantees when executed by all agents. The bounds we obtain are for $\textit{swap regret}$, and thus, along the way, imply convergence to a $\textit{correlated}$ equilibrium. Our algorithm is decentralized, computationally efficient, and does not require any communication between agents. Our key observation is that online learning via policy optimization in Markov games essentially reduces to a form of $\textit{weighted}$ regret minimization, with $\textit{unknown}$ weights determined by the path length of the agents’ policy sequence. Consequently, controlling the path length leads to weighted regret objectives for which sufficiently adaptive algorithms provide sublinear regret guarantees. |
https://proceedings.mlr.press/v202/esposito23a.html | https://proceedings.mlr.press/v202/esposito23a/esposito23a.pdf | https://openreview.net/forum?id=jggvWZLEZa | Delayed Bandits: When Do Intermediate Observations Help? | https://proceedings.mlr.press/v202/esposito23a.html | Emmanuel Esposito, Saeed Masoudian, Hao Qiu, Dirk Van Der Hoeven, Nicolò Cesa-Bianchi, Yevgeny Seldin | https://proceedings.mlr.press/v202/esposito23a.html | ICML 2023 | We study a $K$-armed bandit with delayed feedback and intermediate observations. We consider a model, where intermediate observations have a form of a finite state, which is observed immediately after taking an action, whereas the loss is observed after an adversarially chosen delay. We show that the regime of the mapping of states to losses determines the complexity of the problem, irrespective of whether the mapping of actions to states is stochastic or adversarial. If the mapping of states to losses is adversarial, then the regret rate is of order $\sqrt{(K+d)T}$ (within log factors), where $T$ is the time horizon and $d$ is a fixed delay. This matches the regret rate of a $K$-armed bandit with delayed feedback and without intermediate observations, implying that intermediate observations are not helpful. However, if the mapping of states to losses is stochastic, we show that the regret grows at a rate of $\sqrt{\bigl(K+\min\{|\mathcal{S}|,d\}\bigr)T}$ (within log factors), implying that if the number $|\mathcal{S}|$ of states is smaller than the delay, then intermediate observations help. We also provide refined high-probability regret upper bounds for non-uniform delays, together with experimental validation of our algorithms. |
https://proceedings.mlr.press/v202/esteves23a.html | https://proceedings.mlr.press/v202/esteves23a/esteves23a.pdf | https://openreview.net/forum?id=HiKPaeowPB | Scaling Spherical CNNs | https://proceedings.mlr.press/v202/esteves23a.html | Carlos Esteves, Jean-Jacques Slotine, Ameesh Makadia | https://proceedings.mlr.press/v202/esteves23a.html | ICML 2023 | Spherical CNNs generalize CNNs to functions on the sphere, by using spherical convolutions as the main linear operation. The most accurate and efficient way to compute spherical convolutions is in the spectral domain (via the convolution theorem), which is still costlier than the usual planar convolutions. For this reason, applications of spherical CNNs have so far been limited to small problems that can be approached with low model capacity. In this work, we show how spherical CNNs can be scaled for much larger problems. To achieve this, we make critical improvements including novel variants of common model components, an implementation of core operations to exploit hardware accelerator characteristics, and application-specific input representations that exploit the properties of our model. Experiments show our larger spherical CNNs reach state-of-the-art on several targets of the QM9 molecular benchmark, which was previously dominated by equivariant graph neural networks, and achieve competitive performance on multiple weather forecasting tasks. Our code is available at https://github.com/google-research/spherical-cnn. |
https://proceedings.mlr.press/v202/even23a.html | https://proceedings.mlr.press/v202/even23a/even23a.pdf | https://openreview.net/forum?id=Wc7XppSfRo | Stochastic Gradient Descent under Markovian Sampling Schemes | https://proceedings.mlr.press/v202/even23a.html | Mathieu Even | https://proceedings.mlr.press/v202/even23a.html | ICML 2023 | We study a variation of vanilla stochastic gradient descent where the optimizer only has access to a Markovian sampling scheme. These schemes encompass applications that range from decentralized optimization with a random walker (token algorithms), to RL and online system identification problems. We focus on obtaining rates of convergence under the least restrictive assumptions possible on the underlying Markov chain and on the functions optimized. We first unveil the theoretical lower bound for methods that sample stochastic gradients along the path of a Markov chain, revealing a dependency on the hitting time of the underlying Markov chain. We then study Markov chain SGD (MC-SGD) under much milder regularity assumptions than prior works. We finally introduce MC-SAG, an alternative to MC-SGD with variance reduction, that only depends on the hitting time of the Markov chain, therefore obtaining a communication-efficient token algorithm. |
https://proceedings.mlr.press/v202/evron23a.html | https://proceedings.mlr.press/v202/evron23a/evron23a.pdf | https://openreview.net/forum?id=kkpIrMu3Vf | Continual Learning in Linear Classification on Separable Data | https://proceedings.mlr.press/v202/evron23a.html | Itay Evron, Edward Moroshko, Gon Buzaglo, Maroun Khriesh, Badea Marjieh, Nathan Srebro, Daniel Soudry | https://proceedings.mlr.press/v202/evron23a.html | ICML 2023 | We analyze continual learning on a sequence of separable linear classification tasks with binary labels. We show theoretically that learning with weak regularization reduces to solving a sequential max-margin problem, corresponding to a special case of the Projection Onto Convex Sets (POCS) framework. We then develop upper bounds on the forgetting and other quantities of interest under various settings with recurring tasks, including cyclic and random orderings of tasks. We discuss several practical implications to popular training practices like regularization scheduling and weighting. We point out several theoretical differences between our continual classification setting and a recently studied continual regression setting. |
https://proceedings.mlr.press/v202/eysenbach23a.html | https://proceedings.mlr.press/v202/eysenbach23a/eysenbach23a.pdf | https://openreview.net/forum?id=XXC601YWgq | A Connection between One-Step RL and Critic Regularization in Reinforcement Learning | https://proceedings.mlr.press/v202/eysenbach23a.html | Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov | https://proceedings.mlr.press/v202/eysenbach23a.html | ICML 2023 | As with any machine learning problem with limited data, effective offline RL algorithms require careful regularization to avoid overfitting. One class of methods, known as one-step RL, perform just one step of policy improvement. These methods, which include advantage-weighted regression and conditional behavioral cloning, are thus simple and stable, but can have limited asymptotic performance. A second class of methods, known as critic regularization, perform many steps of policy improvement with a regularized objective. These methods typically require more compute but have appealing lower-bound guarantees. In this paper, we draw a connection between these methods: applying a multi-step critic regularization method with a regularization coefficient of 1 yields the same policy as one-step RL. While our theoretical results require assumptions (e.g., deterministic dynamics), our experiments nevertheless show that our analysis makes accurate, testable predictions about practical offline RL methods (CQL and one-step RL) with commonly-used hyperparameters. |
https://proceedings.mlr.press/v202/faber23a.html | https://proceedings.mlr.press/v202/faber23a/faber23a.pdf | https://openreview.net/forum?id=mZUEThXS1s | Neural Status Registers | https://proceedings.mlr.press/v202/faber23a.html | Lukas Faber, Roger Wattenhofer | https://proceedings.mlr.press/v202/faber23a.html | ICML 2023 | We study the problem of learning comparisons between numbers with neural networks. Despite comparisons being a seemingly simple problem, we find that both general-purpose models such as multilayer perceptrons (MLPs) as well as arithmetic architectures such as the Neural Arithmetic Logic Unit (NALU) struggle with learning comparisons. Neither architecture can extrapolate to much larger numbers than those seen in the training set. We propose a novel differentiable architecture, the Neural Status Register (NSR) to solve this problem. We experimentally validate the NSR in various settings. We can combine the NSR with other neural models to solve interesting problems such as piecewise-defined arithmetic, comparison of digit images, recurrent problems, or finding shortest paths in graphs. The NSR outperforms all baseline architectures, especially when it comes to extrapolating to larger numbers. |
https://proceedings.mlr.press/v202/fahrbach23a.html | https://proceedings.mlr.press/v202/fahrbach23a/fahrbach23a.pdf | https://openreview.net/forum?id=mSofpvUxCL | Learning Rate Schedules in the Presence of Distribution Shift | https://proceedings.mlr.press/v202/fahrbach23a.html | Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah | https://proceedings.mlr.press/v202/fahrbach23a.html | ICML 2023 | We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution. We fully characterize the optimal learning rate schedule for online linear regression via a novel analysis with stochastic differential equations. For general convex loss functions, we propose new learning rate schedules that are robust to distribution shift, and give upper and lower bounds for the regret that only differ by constants. For non-convex loss functions, we define a notion of regret based on the gradient norm of the estimated models and propose a learning schedule that minimizes an upper bound on the total expected regret. Intuitively, one expects changing loss landscapes to require more exploration, and we confirm that optimal learning rate schedules typically have higher learning rates in the presence of distribution shift. Finally, we provide experiments that illustrate these learning rate schedules and their regret. |
https://proceedings.mlr.press/v202/faletto23a.html | https://proceedings.mlr.press/v202/faletto23a/faletto23a.pdf | https://openreview.net/forum?id=uSJP34JCTu | Predicting Rare Events by Shrinking Towards Proportional Odds | https://proceedings.mlr.press/v202/faletto23a.html | Gregory Faletto, Jacob Bien | https://proceedings.mlr.press/v202/faletto23a.html | ICML 2023 | Training classifiers is difficult with severe class imbalance, but many rare events are the culmination of a sequence with much more common intermediate outcomes. For example, in online marketing a user first sees an ad, then may click on it, and finally may make a purchase; estimating the probability of purchases is difficult because of their rarity. We show both theoretically and through data experiments that the more abundant data in earlier steps may be leveraged to improve estimation of probabilities of rare events. We present PRESTO, a relaxation of the proportional odds model for ordinal regression. Instead of estimating weights for one separating hyperplane that is shifted by separate intercepts for each of the estimated Bayes decision boundaries between adjacent pairs of categorical responses, we estimate separate weights for each of these transitions. We impose an L1 penalty on the differences between weights for the same feature in adjacent weight vectors in order to shrink towards the proportional odds model. We prove that PRESTO consistently estimates the decision boundary weights under a sparsity assumption. Synthetic and real data experiments show that our method can estimate rare probabilities in this setting better than both logistic regression on the rare category, which fails to borrow strength from more abundant categories, and the proportional odds model, which is too inflexible. |
https://proceedings.mlr.press/v202/fan23a.html | https://proceedings.mlr.press/v202/fan23a/fan23a.pdf | https://openreview.net/forum?id=o3cxCfmovG | Free-Form Variational Inference for Gaussian Process State-Space Models | https://proceedings.mlr.press/v202/fan23a.html | Xuhui Fan, Edwin V. Bonilla, Terence O’Kane, Scott A Sisson | https://proceedings.mlr.press/v202/fan23a.html | ICML 2023 | Gaussian process state-space models (GPSSMs) provide a principled and flexible approach to modeling the dynamics of a latent state, which is observed at discrete-time points via a likelihood model. However, inference in GPSSMs is computationally and statistically challenging due to the large number of latent variables in the model and the strong temporal dependencies between them. In this paper, we propose a new method for inference in Bayesian GPSSMs, which overcomes the drawbacks of previous approaches, namely over-simplified assumptions, and high computational requirements. Our method is based on free-form variational inference via stochastic gradient Hamiltonian Monte Carlo within the inducing-variable formalism. Furthermore, by exploiting our proposed variational distribution, we provide a collapsed extension of our method where the inducing variables are marginalized analytically. We also showcase results when combining our framework with particle MCMC methods. We show that, on six real-world datasets, our approach can learn transition dynamics and latent states more accurately than competing methods. |
https://proceedings.mlr.press/v202/fan23b.html | https://proceedings.mlr.press/v202/fan23b/fan23b.pdf | https://openreview.net/forum?id=3SiELE1Wzl | Optimizing DDPM Sampling with Shortcut Fine-Tuning | https://proceedings.mlr.press/v202/fan23b.html | Ying Fan, Kangwook Lee | https://proceedings.mlr.press/v202/fan23b.html | ICML 2023 | In this study, we propose Shortcut Fine-Tuning (SFT), a new approach for addressing the challenge of fast sampling of pretrained Denoising Diffusion Probabilistic Models (DDPMs). SFT advocates for the fine-tuning of DDPM samplers through the direct minimization of Integral Probability Metrics (IPM), instead of learning the backward diffusion process. This enables samplers to discover an alternative and more efficient sampling shortcut, deviating from the backward diffusion process. Inspired by a control perspective, we propose a new algorithm SFT-PG: Shortcut Fine-Tuning with Policy Gradient, and prove that under certain assumptions, gradient descent of diffusion models with respect to IPM is equivalent to performing policy gradient. To our best knowledge, this is the first attempt to utilize reinforcement learning (RL) methods to train diffusion models. Through empirical evaluation, we demonstrate that our fine-tuning method can further enhance existing fast DDPM samplers, resulting in sample quality comparable to or even surpassing that of the full-step model across various datasets. |
https://proceedings.mlr.press/v202/fan23c.html | https://proceedings.mlr.press/v202/fan23c/fan23c.pdf | https://openreview.net/forum?id=Im0XEixDmR | LSDS++ : Dual Sampling for Accelerated k-means++ | https://proceedings.mlr.press/v202/fan23c.html | Chenglin Fan, Ping Li, Xiaoyun Li | https://proceedings.mlr.press/v202/fan23c.html | ICML 2023 | k-means clustering is an important problem in machine learning and statistics. The k-means++ initialization algorithm has driven new acceleration strategies and theoretical analysis for solving the k-means clustering problem. The state-of-the-art variant, called LocalSearch++, adds extra local search steps upon k-means++ to achieve constant approximation error in expectation. In this paper, we propose a new variant named LSDS++, which improves the sampling efficiency of LocalSearch++ via a strategy called dual sampling. By defining a new capture graph based on the concept of coreset, we show that the proposed LSDS++ is able to achieve the same expected constant error with reduced complexity. Experiments are conducted to justify the benefit of LSDS++ in practice. |
https://proceedings.mlr.press/v202/fan23d.html | https://proceedings.mlr.press/v202/fan23d/fan23d.pdf | https://openreview.net/forum?id=Mha86sOok1 | Smart Initial Basis Selection for Linear Programs | https://proceedings.mlr.press/v202/fan23d.html | Zhenan Fan, Xinglu Wang, Oleksandr Yakovenko, Abdullah Ali Sivas, Owen Ren, Yong Zhang, Zirui Zhou | https://proceedings.mlr.press/v202/fan23d.html | ICML 2023 | The simplex method, introduced by Dantzig more than half a century ago, is still to date one of the most efficient methods for solving large-scale linear programming (LP) problems. While the simplex method is known to have the finite termination property under mild assumptions, the number of iterations until optimality largely depends on the choice of initial basis. Existing strategies for selecting an advanced initial basis are mostly rule-based. These rules usually require extensive expert knowledge and empirical study to develop. Yet, many of them fail to exhibit consistent improvement, even for LP problems that arise in a single application scenario. In this paper, we propose a learning-based approach for initial basis selection. We employ graph neural networks as a building block and develop a model that attempts to capture the relationship between LP problems and their optimal bases. In addition, during the inference phase, we supplement the learning-based prediction with linear algebra tricks to ensure the validity of the generated initial basis. We validate the effectiveness of our proposed strategy by extensively testing it with state-of-the-art simplex solvers, including the open-source solver HiGHS and the commercial solver OptVerse. Through these rigorous experiments, we demonstrate that our strategy achieves substantial speedup and consistently outperforms existing rule-based methods. Furthermore, we extend the proposed approach to generating restricted master problems for column generation methods and present encouraging numerical results. |
https://proceedings.mlr.press/v202/fanaskov23a.html | https://proceedings.mlr.press/v202/fanaskov23a/fanaskov23a.pdf | https://openreview.net/forum?id=glID3Vsmc0 | General Covariance Data Augmentation for Neural PDE Solvers | https://proceedings.mlr.press/v202/fanaskov23a.html | Vladimir Fanaskov, Tianchi Yu, Alexander Rudikov, Ivan Oseledets | https://proceedings.mlr.press/v202/fanaskov23a.html | ICML 2023 | The growing body of research shows how to replace classical partial differential equation (PDE) integrators with neural networks. The popular strategy is to generate the input-output pairs with a PDE solver, train the neural network in the regression setting, and use the trained model as a cheap surrogate for the solver. The bottleneck in this scheme is the number of expensive queries of a PDE solver needed to generate the dataset. To alleviate the problem, we propose a computationally cheap augmentation strategy based on general covariance and simple random coordinate transformations. Our approach relies on the fact that physical laws are independent of the coordinate choice, so the change in the coordinate system preserves the type of a parametric PDE and only changes the PDE’s data (e.g., initial conditions, diffusion coefficient). For the neural networks and partial differential equations we tried, the proposed augmentation improves test error by 23% on average. The worst observed result is a 17% increase in test error for the multilayer perceptron, and the best case is an 80% decrease for the dilated residual network. |
https://proceedings.mlr.press/v202/fandina23a.html | https://proceedings.mlr.press/v202/fandina23a/fandina23a.pdf | https://openreview.net/forum?id=XnV8dbrGI4 | The Fast Johnson-Lindenstrauss Transform Is Even Faster | https://proceedings.mlr.press/v202/fandina23a.html | Ora Nova Fandina, Mikael Møller Høgsgaard, Kasper Green Larsen | https://proceedings.mlr.press/v202/fandina23a.html | ICML 2023 | The Johnson-Lindenstrauss lemma (Johnson & Lindenstrauss, 1984) is a cornerstone result in dimensionality reduction, stating it is possible to embed a set of $n$ points in $d$-dimensional Euclidean space into optimal $k=O(\varepsilon^{-2} \ln n)$ dimensions, while preserving all pairwise distances to within a factor $(1 \pm \varepsilon)$. The seminal Fast Johnson-Lindenstrauss (Fast JL) transform by Ailon and Chazelle (SICOMP’09) supports computing the embedding of a data point in $O(d \ln d +k \ln^2 n)$ time, where the $d \ln d$ term comes from multiplication with a $d \times d$ Hadamard matrix and the $k \ln^2 n$ term comes from multiplication with a sparse $k \times d$ matrix. Despite the Fast JL transform being more than a decade old, it is one of the fastest dimensionality reduction techniques for many tradeoffs between $\varepsilon, d$ and $n$. In this work, we give a surprising new analysis of the Fast JL transform, showing that the $k \ln^2 n$ term in the embedding time can be improved to $(k \ln^2 n)/\alpha$ for an $\alpha = \Omega(\min\{\varepsilon^{-1}\ln(1/\varepsilon), \ln n\})$. The improvement follows by using an even sparser matrix. We complement our improved analysis with a lower bound showing that our new analysis is in fact tight. |
https://proceedings.mlr.press/v202/fang23a.html | https://proceedings.mlr.press/v202/fang23a/fang23a.pdf | https://openreview.net/forum?id=bLhaIGkEqc | Regression with Label Permutation in Generalized Linear Model | https://proceedings.mlr.press/v202/fang23a.html | Guanhua Fang, Ping Li | https://proceedings.mlr.press/v202/fang23a.html | ICML 2023 | The assumption that response and predictor belong to the same statistical unit may be violated in practice. Unbiased estimation and recovery of the true label ordering based on unlabeled data are challenging tasks and have attracted increasing attention in the recent literature. In this paper, we present a relatively complete analysis of the label permutation problem for the generalized linear model with multivariate responses. The theory is established under different scenarios: with knowledge of the true parameters, with partial knowledge of the underlying label permutation matrix, and without any knowledge. Our results remove the stringent conditions required by the current literature and are further extended to the missing observation setting, which has never been considered in the field of the label permutation problem. On the computational side, we propose two methods, the "maximum likelihood estimation" algorithm and the "two-step estimation" algorithm, to accommodate different settings. When the proportion of permuted labels is moderate, both methods work effectively. Multiple numerical experiments are provided and corroborate our theoretical findings. |
https://proceedings.mlr.press/v202/farhadkhani23a.html | https://proceedings.mlr.press/v202/farhadkhani23a/farhadkhani23a.pdf | https://openreview.net/forum?id=BkVWMrgb7K | Robust Collaborative Learning with Linear Gradient Overhead | https://proceedings.mlr.press/v202/farhadkhani23a.html | Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, John Stephan | https://proceedings.mlr.press/v202/farhadkhani23a.html | ICML 2023 | Collaborative learning algorithms, such as distributed SGD (or D-SGD), are prone to faulty machines that may deviate from their prescribed algorithm because of software or hardware bugs, poisoned data or malicious behaviors. While many solutions have been proposed to enhance the robustness of D-SGD to such machines, previous works either resort to strong assumptions (trusted server, homogeneous data, specific noise model) or impose a gradient computational cost that is several orders of magnitude higher than that of D-SGD. We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight. Essentially, MoNNA uses Polyak’s momentum of local gradients for local updates and nearest-neighbor averaging (NNA) for global mixing, respectively. While MoNNA is rather simple to implement, its analysis has been more challenging and relies on two key elements that may be of independent interest. Specifically, we introduce the mixing criterion of $(\alpha, \lambda)$-reduction to analyze the non-linear mixing of non-faulty machines, and present a way to control the tension between the momentum and the model drifts. We validate our theory by experiments on image classification and make our code available at https://github.com/LPD-EPFL/robust-collaborative-learning. |
https://proceedings.mlr.press/v202/fasina23a.html | https://proceedings.mlr.press/v202/fasina23a/fasina23a.pdf | https://openreview.net/forum?id=SarkXIzrXh | Neural FIM for learning Fisher information metrics from point cloud data | https://proceedings.mlr.press/v202/fasina23a.html | Oluwadamilola Fasina, Guillaume Huguet, Alexander Tong, Yanlei Zhang, Guy Wolf, Maximilian Nickel, Ian Adelstein, Smita Krishnaswamy | https://proceedings.mlr.press/v202/fasina23a.html | ICML 2023 | Although data diffusion embeddings are ubiquitous in unsupervised learning and have proven to be a viable technique for uncovering the underlying intrinsic geometry of data, diffusion embeddings are inherently limited due to their discrete nature. To this end, we propose neural FIM, a method for computing the Fisher information metric (FIM) from point cloud data - allowing for a continuous manifold model for the data. Neural FIM creates an extensible metric space from discrete point cloud data such that information from the metric can inform us of manifold characteristics such as volume and geodesics. We demonstrate Neural FIM’s utility in selecting parameters for the PHATE visualization method, as well as its ability to obtain information pertaining to local volume, illuminating branching points and cluster centers in embeddings of a toy dataset and two single-cell datasets of IPSC reprogramming and PBMCs (immune cells). |
https://proceedings.mlr.press/v202/fatkhullin23a.html | https://proceedings.mlr.press/v202/fatkhullin23a/fatkhullin23a.pdf | https://openreview.net/forum?id=kgxO5itnvU | Stochastic Policy Gradient Methods: Improved Sample Complexity for Fisher-non-degenerate Policies | https://proceedings.mlr.press/v202/fatkhullin23a.html | Ilyas Fatkhullin, Anas Barakat, Anastasia Kireeva, Niao He | https://proceedings.mlr.press/v202/fatkhullin23a.html | ICML 2023 | Recently, the impressive empirical success of policy gradient (PG) methods has catalyzed the development of their theoretical foundations. Despite the huge efforts directed at the design of efficient stochastic PG-type algorithms, the understanding of their convergence to a globally optimal policy is still limited. In this work, we develop improved global convergence guarantees for a general class of Fisher-non-degenerate parameterized policies, which allows us to address the case of continuous state action spaces. First, we propose a Normalized Policy Gradient method with Implicit Gradient Transport (N-PG-IGT) and derive a $\tilde{\mathcal{O}}(\varepsilon^{-2.5})$ sample complexity of this method for finding a global $\varepsilon$-optimal policy. Improving over the previously known $\tilde{\mathcal{O}}(\varepsilon^{-3})$ complexity, this algorithm does not require the use of importance sampling or second-order information and samples only one trajectory per iteration. Second, we further improve this complexity to $\tilde{\mathcal{O}}(\varepsilon^{-2})$ by considering a Hessian-Aided Recursive Policy Gradient ((N)-HARPG) algorithm enhanced with a correction based on a Hessian-vector product. Interestingly, both algorithms are $(i)$ simple and easy to implement: single-loop, do not require large batches of trajectories and sample at most two trajectories per iteration; $(ii)$ computationally and memory efficient: they do not require expensive subroutines at each iteration and can be implemented with memory linear in the dimension of parameters. |
https://proceedings.mlr.press/v202/feldstein23a.html | https://proceedings.mlr.press/v202/feldstein23a/feldstein23a.pdf | https://openreview.net/forum?id=zbYo7Ay4Mt | Parallel Neurosymbolic Integration with Concordia | https://proceedings.mlr.press/v202/feldstein23a.html | Jonathan Feldstein, Modestas Jurčius, Efthymia Tsamoura | https://proceedings.mlr.press/v202/feldstein23a.html | ICML 2023 | Parallel neurosymbolic architectures have been applied effectively in NLP by distilling knowledge from a logic theory into a deep model. However, prior art faces several limitations, including supporting only restricted forms of logic theories and relying on the assumption of independence between the logic and the deep network. We present Concordia, a framework overcoming the limitations of prior art. Concordia is agnostic to both the deep network and the logic theory, offering support for a wide range of probabilistic theories. Our framework can support supervised training of both components and unsupervised training of the neural component. Concordia has been successfully applied to tasks beyond NLP and data classification, improving the accuracy of the state-of-the-art on collective activity detection, entity linking and recommendation tasks. |
https://proceedings.mlr.press/v202/fellows23a.html | https://proceedings.mlr.press/v202/fellows23a/fellows23a.pdf | https://openreview.net/forum?id=kjyXxKw4uI | Why Target Networks Stabilise Temporal Difference Methods | https://proceedings.mlr.press/v202/fellows23a.html | Mattie Fellows, Matthew J. A. Smith, Shimon Whiteson | https://proceedings.mlr.press/v202/fellows23a.html | ICML 2023 | Integral to recent successes in deep reinforcement learning has been a class of temporal difference methods that use infrequently updated target values for policy evaluation in a Markov Decision Process. Yet a complete theoretical explanation for the effectiveness of target networks remains elusive. In this work, we provide an analysis of this popular class of algorithms, to finally answer the question: “why do target networks stabilise TD learning”? To do so, we formalise the notion of a partially fitted policy evaluation method, which describes the use of target networks and bridges the gap between fitted methods and semigradient temporal difference algorithms. Using this framework we are able to uniquely characterise the so-called deadly triad–the use of TD updates with (nonlinear) function approximation and off-policy data–which often leads to nonconvergent algorithms. This insight leads us to conclude that the use of target networks can mitigate the effects of poor conditioning in the Jacobian of the TD update. Instead, we show that under mild regularity conditions and a well tuned target network update frequency, convergence can be guaranteed even in the extremely challenging off-policy sampling and nonlinear function approximation setting. |
https://proceedings.mlr.press/v202/feng23a.html | https://proceedings.mlr.press/v202/feng23a/feng23a.pdf | https://openreview.net/forum?id=Tv0WUyygoe | Weighted Sampling without Replacement for Deep Top-$k$ Classification | https://proceedings.mlr.press/v202/feng23a.html | Dieqiao Feng, Yuanqi Du, Carla P Gomes, Bart Selman | https://proceedings.mlr.press/v202/feng23a.html | ICML 2023 | The top-$k$ classification accuracy is a crucial metric in machine learning and is often used to evaluate the performance of deep neural networks. These networks are typically trained using the cross-entropy loss, which optimizes for top-$1$ classification and is considered optimal in the case of infinite data. However, in real-world scenarios, data is often noisy and limited, leading to the need for more robust losses. In this paper, we propose using the Weighted Sampling Without Replacement (WSWR) method as a learning objective for top-$k$ loss. While traditional methods for evaluating WSWR-based top-$k$ loss are computationally impractical, we show a novel connection between WSWR and Reinforcement Learning (RL) and apply well-established RL algorithms to estimate gradients. We compared our method with recently proposed top-$k$ losses in various regimes of noise and data size for the prevalent use case of $k = 5$. Our experimental results reveal that our method consistently outperforms all other methods on the top-$k$ metric for noisy datasets, has more robustness on extreme testing scenarios, and achieves competitive results on training with limited data. |
https://proceedings.mlr.press/v202/feng23b.html | https://proceedings.mlr.press/v202/feng23b/feng23b.pdf | https://openreview.net/forum?id=rB0VaD44FZ | Improved Online Learning Algorithms for CTR Prediction in Ad Auctions | https://proceedings.mlr.press/v202/feng23b.html | Zhe Feng, Christopher Liaw, Zixin Zhou | https://proceedings.mlr.press/v202/feng23b.html | ICML 2023 | In this work, we investigate the online learning problem of revenue maximization in ad auctions, where the seller needs to learn the click-through rates (CTRs) of each ad candidate and charge the price of the winner in a pay-per-click manner. We focus on two models of the advertisers’ strategic behaviors. First, we assume that the advertiser is completely myopic; i.e. in each round, they aim to maximize their utility only for the current round. In this setting, we develop an online mechanism based on upper-confidence bounds that achieves a tight $O(\sqrt{T})$ regret in the worst-case and negative regret when the values are static across all the auctions and there is a gap between the highest expected value (i.e. value multiplied by their CTR) and the second highest expected value ad. Next, we assume that the advertiser is non-myopic and cares about their long term utility. This setting is much more complex since an advertiser is incentivized to influence the mechanism by bidding strategically in earlier rounds. In this setting, we provide an algorithm to achieve negative regret for the static valuation setting (with a positive gap), which is in sharp contrast with the prior work that shows $O(T^{2/3})$ regret when the valuation is generated by an adversary. |
https://proceedings.mlr.press/v202/feng23c.html | https://proceedings.mlr.press/v202/feng23c/feng23c.pdf | https://openreview.net/forum?id=vH6cWEqceA | Fractional Denoising for 3D Molecular Pre-training | https://proceedings.mlr.press/v202/feng23c.html | Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, Wei-Ying Ma | https://proceedings.mlr.press/v202/feng23c.html | ICML 2023 | Coordinate denoising is a promising 3D molecular pre-training method, which has achieved remarkable performance in various downstream drug discovery tasks. Theoretically, the objective is equivalent to learning the force field, which has been shown to be helpful for downstream tasks. Nevertheless, there are two challenges for coordinate denoising to learn an effective force field, i.e. low coverage samples and isotropic force field. The underlying reason is that molecular distributions assumed by existing denoising methods fail to capture the anisotropic characteristic of molecules. To tackle these challenges, we propose a novel hybrid noise strategy, including noise on both dihedral angles and coordinates. However, denoising such hybrid noise in a traditional way is no longer equivalent to learning the force field. Through theoretical deductions, we find that the problem is caused by the dependency of the input conformation for covariance. To this end, we propose to decouple the two types of noise and design a novel fractional denoising method (Frad), which only denoises the latter coordinate part. In this way, Frad enjoys both the merits of sampling more low-energy structures and the force field equivalence. Extensive experiments show the effectiveness of Frad in molecule representation, with a new state-of-the-art on 9 out of 12 tasks of QM9 and on 7 out of 8 targets of MD17. |
https://proceedings.mlr.press/v202/feng23d.html | https://proceedings.mlr.press/v202/feng23d/feng23d.pdf | https://openreview.net/forum?id=7EvberozFP | Improved Algorithms for White-Box Adversarial Streams | https://proceedings.mlr.press/v202/feng23d.html | Ying Feng, David Woodruff | https://proceedings.mlr.press/v202/feng23d.html | ICML 2023 | We study streaming algorithms in the white-box adversarial stream model, where the internal state of the streaming algorithm is revealed to an adversary who adaptively generates the stream updates, but the algorithm obtains fresh randomness unknown to the adversary at each time step. We incorporate cryptographic assumptions to construct robust algorithms against such adversaries. We propose efficient algorithms for sparse recovery of vectors, low rank recovery of matrices and tensors, as well as low rank plus sparse recovery of matrices, i.e., robust PCA. Unlike deterministic algorithms, our algorithms can report when the input is not sparse or low rank even in the presence of such an adversary. We use these recovery algorithms to improve upon and solve new problems in numerical linear algebra and combinatorial optimization on white-box adversarial streams. For example, we give the first efficient algorithm for outputting a matching in a graph with insertions and deletions to its edges provided the matching size is small, and otherwise we declare the matching size is large. We also improve the approximation versus memory tradeoff of previous work for estimating the number of non-zero elements in a vector and computing the matrix rank. |
https://proceedings.mlr.press/v202/feng23e.html | https://proceedings.mlr.press/v202/feng23e/feng23e.pdf | https://openreview.net/forum?id=KmJo2sqppO | Non-stationary Reinforcement Learning under General Function Approximation | https://proceedings.mlr.press/v202/feng23e.html | Songtao Feng, Ming Yin, Ruiquan Huang, Yu-Xiang Wang, Jing Yang, Yingbin Liang | https://proceedings.mlr.press/v202/feng23e.html | ICML 2023 | General function approximation is a powerful tool to handle large state and action spaces in a broad range of reinforcement learning (RL) scenarios. However, theoretical understanding of non-stationary MDPs with general function approximation is still limited. In this paper, we make the first such attempt. We first propose a new complexity metric called the dynamic Bellman Eluder (DBE) dimension for non-stationary MDPs, which subsumes the majority of existing tractable RL problems in static MDPs as well as non-stationary MDPs. Based on the proposed complexity metric, we propose a novel confidence-set based model-free algorithm called SW-OPEA, which features a sliding window mechanism and a new confidence set design for non-stationary MDPs. We then establish an upper bound on the dynamic regret for the proposed algorithm, and show that SW-OPEA is provably efficient as long as the variation budget is not significantly large. We further demonstrate via examples of non-stationary linear and tabular MDPs that our algorithm performs better in the small variation budget scenario than the existing UCB-type algorithms. To the best of our knowledge, this is the first dynamic regret analysis in non-stationary MDPs with general function approximation. |