title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Demystifying Poisoning Backdoor Attacks from a Statistical Perspective | https://openreview.net/forum?id=BPHcEpGvF8 | https://openreview.net/forum?id=BPHcEpGvF8 | Ganghua Wang,Xun Xian,Ashish Kundu,Jayanth Srinivasa,Xuan Bi,Mingyi Hong,Jie Ding | ICLR 2024,Poster | Backdoor attacks pose a significant security risk to machine learning applications due to their stealthy nature and potentially serious consequences. Such attacks involve embedding triggers within a learning model with the intention of causing malicious behavior when an active trigger is present while maintaining regular functionality without it. This paper derives a fundamental understanding of backdoor attacks that applies to both discriminative and generative models, including diffusion models and large language models. We evaluate the effectiveness of any backdoor attack incorporating a constant trigger, by establishing tight lower and upper boundaries for the performance of the compromised model on both clean and backdoor test data. The developed theory answers a series of fundamental but previously underexplored problems, including (1) what are the determining factors for a backdoor attack's success, (2) what is the direction of the most effective backdoor attack, and (3) when will a human-imperceptible trigger succeed. We demonstrate the theory by conducting experiments using benchmark datasets and state-of-the-art backdoor attack scenarios. Our code is available \href{https://github.com/KeyWgh/DemystifyBackdoor}{here}. | https://openreview.net/pdf/2b97a4885cd767d5e6fad5ceeb1c8e5da20147c4.pdf |
Learning to Make Adherence-aware Advice | https://openreview.net/forum?id=RgELE1dQXx | https://openreview.net/forum?id=RgELE1dQXx | Guanting Chen,Xiaocheng Li,Chunlin Sun,Hanzhao Wang | ICLR 2024,Poster | As artificial intelligence (AI) systems play an increasingly prominent role in human decision-making, challenges surface in the realm of human-AI interactions. One challenge arises from the suboptimal AI policies due to the inadequate consideration of humans disregarding AI recommendations, as well as the need for AI to provide advice selectively when it is most pertinent. This paper presents a sequential decision-making model that (i) takes into account the human's adherence level (the probability that the human follows/rejects machine advice) and (ii) incorporates a defer option so that the machine can temporarily refrain from making advice. We provide learning algorithms that learn the optimal advice policy and make advice only at critical time stamps. Compared to problem-agnostic reinforcement learning algorithms, our specialized learning algorithms not only enjoy better theoretical convergence properties but also show strong empirical performance. | https://openreview.net/pdf/23fc1fd51c338383a74e3c5989a6dcd7a273a1c0.pdf |
Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs | https://openreview.net/forum?id=uvFhCUPjtI | https://openreview.net/forum?id=uvFhCUPjtI | Anson Bastos,Kuldeep Singh,Abhishek Nadgeri,Manish Singh,Toyotaro Suzumura | ICLR 2024,Poster | We present the Evolving Graph Fourier Transform (EFT), the first invertible spectral transform that captures evolving representations on temporal graphs. We motivate our work by the inadequacy of existing methods for capturing the evolving graph spectra, which are also computationally expensive due to the temporal aspect along with the graph vertex domain. We view the problem as an optimization over the Laplacian of the continuous time dynamic graph. Additionally, we propose pseudo-spectrum relaxations that decompose the transformation process, making it highly computationally efficient. The EFT method adeptly captures the evolving graph's structural and positional properties, making it effective for downstream tasks on evolving graphs. Hence, as a reference implementation, we develop a simple neural model induced with EFT for capturing evolving graph spectra. We empirically validate our theoretical findings on a number of large-scale and standard temporal graph benchmarks and demonstrate that our model achieves state-of-the-art performance. | https://openreview.net/pdf/5386623c7dfc54fe602556b341b906eb0ec58d06.pdf |
Cycle Consistency Driven Object Discovery | https://openreview.net/forum?id=f1xnBr4WD6 | https://openreview.net/forum?id=f1xnBr4WD6 | Aniket Rajiv Didolkar,Anirudh Goyal,Yoshua Bengio | ICLR 2024,Poster | Developing deep learning models that effectively learn object-centric representations, akin to human cognition, remains a challenging task. Existing approaches facilitate object discovery by representing objects as fixed-size vectors, called ``slots'' or ``object files''. While these approaches have shown promise in certain scenarios, they still exhibit certain limitations. First, they rely on architectural priors which can be unreliable and usually require meticulous engineering to identify the correct objects. Second, there has been a notable gap in investigating the practical utility of these representations in downstream tasks. To address the first limitation, we introduce a method that explicitly optimizes the constraint that each object in a scene should be associated with a distinct slot. We formalize this constraint by introducing consistency objectives which are cyclic in nature. By integrating these consistency objectives into various existing slot-based object-centric methods, we showcase substantial improvements in object-discovery performance. These enhancements consistently hold true across both synthetic and real-world scenes, underscoring the effectiveness and adaptability of the proposed approach. To tackle the second limitation, we apply the learned object-centric representations from the proposed method to two downstream reinforcement learning tasks, demonstrating considerable performance enhancements compared to conventional slot-based and monolithic representation learning methods. Our results suggest that the proposed approach not only improves object discovery, but also provides richer features for downstream tasks. | https://openreview.net/pdf/18ace982ecbf580ad919f876edd9b3a6e1652550.pdf |
Sufficient conditions for offline reactivation in recurrent neural networks | https://openreview.net/forum?id=RVrINT6MT7 | https://openreview.net/forum?id=RVrINT6MT7 | Nanda H Krishna,Colin Bredenberg,Daniel Levenstein,Blake Aaron Richards,Guillaume Lajoie | ICLR 2024,Poster | During periods of quiescence, such as sleep, neural activity in many brain circuits resembles that observed during periods of task engagement. However, the precise conditions under which task-optimized networks can autonomously reactivate the same network states responsible for online behavior is poorly understood. In this study, we develop a mathematical framework that outlines sufficient conditions for the emergence of neural reactivation in circuits that encode features of smoothly varying stimuli. We demonstrate mathematically that noisy recurrent networks optimized to track environmental state variables using change-based sensory information naturally develop denoising dynamics, which, in the absence of input, cause the network to revisit state configurations observed during periods of online activity. We validate our findings using numerical experiments on two canonical neuroscience tasks: spatial position estimation based on self-motion cues, and head direction estimation based on angular velocity cues. Overall, our work provides theoretical support for modeling offline reactivation as an emergent consequence of task optimization in noisy neural circuits. | https://openreview.net/pdf/70a1d8b98f869d61b787855ba26e719054884ab3.pdf |
Forward Learning of Graph Neural Networks | https://openreview.net/forum?id=Abr7dU98ME | https://openreview.net/forum?id=Abr7dU98ME | Namyong Park,Xing Wang,Antoine Simoulin,Shuai Yang,Grey Yang,Ryan A. Rossi,Puja Trivedi,Nesreen K. Ahmed | ICLR 2024,Poster | Graph neural networks (GNNs) have achieved remarkable success across a wide range of applications, such as recommendation, drug discovery, and question answering. Behind the success of GNNs lies the backpropagation (BP) algorithm, which is the de facto standard for training deep neural networks (NNs). However, despite its effectiveness, BP imposes several constraints, which are not only biologically implausible, but also limit the scalability, parallelism, and flexibility in learning NNs. Examples of such constraints include storage of neural activities computed in the forward pass for use in the subsequent backward pass, and the dependence of parameter updates on non-local signals. To address these limitations, the forward-forward algorithm (FF) was recently proposed as an alternative to BP in the image classification domain, which trains NNs by performing two forward passes over positive and negative data. Inspired by this advance, we propose ForwardGNN in this work, a new forward learning procedure for GNNs, which avoids the constraints imposed by BP via an effective layer-wise local forward training. ForwardGNN extends the original FF to deal with graph data and GNNs, and makes it possible to operate without generating negative inputs (hence no longer forward-forward). Further, ForwardGNN enables each layer to learn from both the bottom-up and top-down signals without relying on the backpropagation of errors. Extensive experiments on real-world datasets show the effectiveness and generality of the proposed forward graph learning framework. We release our code at https://github.com/facebookresearch/forwardgnn. | https://openreview.net/pdf/78ce77aec3cb18418df9c216e801999677415163.pdf |
Curriculum reinforcement learning for quantum architecture search under hardware errors | https://openreview.net/forum?id=rINBD8jPoP | https://openreview.net/forum?id=rINBD8jPoP | Yash J. Patel,Akash Kundu,Mateusz Ostaszewski,Xavier Bonet-Monroig,Vedran Dunjko,Onur Danaci | ICLR 2024,Poster | The key challenge in the noisy intermediate-scale quantum era is finding useful circuits compatible with current device limitations. Variational quantum algorithms (VQAs) offer a potential solution by fixing the circuit architecture and optimizing individual gate parameters in an external loop. However, parameter optimization can become intractable, and the overall performance of the algorithm depends heavily on the initially chosen circuit architecture. Several quantum architecture search (QAS) algorithms have been developed to design useful circuit architectures automatically. In the case of parameter optimization alone, noise effects have been observed to dramatically influence the performance of the optimizer and final outcomes, which is a key line of study. However, the effects of noise on the architecture search, which could be just as critical, are poorly understood. This work addresses this gap by introducing a curriculum-based reinforcement learning QAS (CRLQAS) algorithm designed to tackle challenges in realistic VQA deployment. The algorithm incorporates (i) a 3D architecture encoding and restrictions on environment dynamics to explore the search space of possible circuits efficiently, (ii) an episode halting scheme to steer the agent to find shorter circuits, and (iii) a novel variant of simultaneous perturbation stochastic approximation as an optimizer for faster convergence. To facilitate studies, we developed an optimized simulator for our algorithm, significantly improving computational efficiency in simulating noisy quantum circuits by employing the Pauli-transfer matrix formalism in the Pauli-Liouville basis. Numerical experiments focusing on quantum chemistry tasks demonstrate that CRLQAS outperforms existing QAS algorithms across several metrics in both noiseless and noisy environments. | https://openreview.net/pdf/e6b75e0b94d3c31e7f0e5d1c7d11b4ccb1aca361.pdf |
Does CLIP’s generalization performance mainly stem from high train-test similarity? | https://openreview.net/forum?id=tnBaiidobu | https://openreview.net/forum?id=tnBaiidobu | Prasanna Mayilvahanan,Thaddäus Wiedemer,Evgenia Rusak,Matthias Bethge,Wieland Brendel | ICLR 2024,Poster | Foundation models like CLIP are trained on hundreds of millions of samples and effortlessly generalize to new tasks and inputs. Out of the box, CLIP shows stellar zero-shot and few-shot capabilities on a wide range of out-of-distribution (OOD) benchmarks, which prior works attribute mainly to today's large and comprehensive training dataset (like LAION). However, it is questionable how meaningful terms like out-of-distribution generalization are for CLIP as it seems likely that web-scale datasets like LAION simply contain many samples that are similar to common OOD benchmarks originally designed for ImageNet. To test this hypothesis, we retrain CLIP on pruned LAION splits that replicate ImageNet’s train-test similarity with respect to common OOD benchmarks. While we observe a performance drop on some benchmarks, surprisingly, CLIP’s overall performance remains high. This shows that high train-test similarity is insufficient to explain CLIP’s OOD performance, and other properties of the training data must drive CLIP to learn more generalizable representations. Additionally, by pruning data points that are dissimilar to the OOD benchmarks, we uncover a 100M split of LAION (¼ of its original size) on which CLIP can be trained to match its original OOD performance. | https://openreview.net/pdf/914ba616ab5450c89e489fa002bc6f6587152c84.pdf |
Unified Projection-Free Algorithms for Adversarial DR-Submodular Optimization | https://openreview.net/forum?id=H4A9e8HvIn | https://openreview.net/forum?id=H4A9e8HvIn | Mohammad Pedramfar,Yididiya Y. Nadew,Christopher John Quinn,Vaneet Aggarwal | ICLR 2024,Poster | This paper introduces unified projection-free Frank-Wolfe type algorithms for adversarial continuous DR-submodular optimization, spanning scenarios such as full information and (semi-)bandit feedback, monotone and non-monotone functions, different constraints, and types of stochastic queries. For every problem considered in the non-monotone setting, the proposed algorithms are either the first with proven sub-linear $\alpha$-regret bounds or have better $\alpha$-regret bounds than the state of the art, where $\alpha$ is a corresponding approximation bound in the offline setting. In the monotone setting, the proposed approach gives state-of-the-art sub-linear $\alpha$-regret bounds among projection-free algorithms in 7 of the 8 considered cases while matching the result of the remaining case. Additionally, this paper addresses semi-bandit and bandit feedback for adversarial DR-submodular optimization, advancing the understanding of this optimization area. | https://openreview.net/pdf/c8701ec65ea9e2a9ef05a94e26ebcad8d0d3e58f.pdf |
When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method | https://openreview.net/forum?id=5HCnKDeTws | https://openreview.net/forum?id=5HCnKDeTws | Biao Zhang,Zhongtao Liu,Colin Cherry,Orhan Firat | ICLR 2024,Poster | While large language models (LLMs) often adopt finetuning to unlock their capabilities for downstream applications, our understanding on the inductive biases (especially the scaling properties) of different finetuning methods is still limited. To fill this gap, we conduct systematic experiments studying whether and how different scaling factors, including LLM model size, pretraining data size, new finetuning parameter size and finetuning data size, affect the finetuning performance. We consider two types of finetuning – full-model tuning (FMT) and parameter efficient tuning (PET, including prompt tuning and LoRA), and explore their scaling behaviors in the data-limited regime where the LLM model size substantially outweighs the finetuning data size. Based on two sets of pretrained bilingual LLMs from 1B to 16B and experiments on bilingual machine translation and multilingual summarization benchmarks, we find that 1) LLM finetuning follows a powerbased multiplicative joint scaling law between finetuning data size and each other scaling factor; 2) LLM finetuning benefits more from LLM model scaling than pretraining data scaling, and PET parameter scaling is generally ineffective; and 3) the optimal finetuning method is highly task- and finetuning data-dependent. We hope our findings could shed light on understanding, selecting and developing LLM finetuning methods. | https://openreview.net/pdf/c50285d47fae2fec5f5aa87de6d9e6a921de02b9.pdf |
Learning to design protein-protein interactions with enhanced generalization | https://openreview.net/forum?id=xcMmebCT7s | https://openreview.net/forum?id=xcMmebCT7s | Anton Bushuiev,Roman Bushuiev,Petr Kouba,Anatolii Filkin,Marketa Gabrielova,Michal Gabriel,Jiri Sedlar,Tomas Pluskal,Jiri Damborsky,Stanislav Mazurenko,Josef Sivic | ICLR 2024,Poster | Discovering mutations enhancing protein-protein interactions (PPIs) is critical for advancing biomedical research and developing improved therapeutics. While machine learning approaches have substantially advanced the field, they often struggle to generalize beyond training data in practical scenarios. The contributions of this work are three-fold. First, we construct PPIRef, the largest and non-redundant dataset of 3D protein-protein interactions, enabling effective large-scale learning. Second, we leverage the PPIRef dataset to pre-train PPIformer, a new SE(3)-equivariant model generalizing across diverse protein-binder variants. We fine-tune PPIformer to predict effects of mutations on protein-protein interactions via a thermodynamically motivated adjustment of the pre-training loss function. Finally, we demonstrate the enhanced generalization of our new PPIformer approach by outperforming other state-of-the-art methods on new, non-leaking splits of standard labeled PPI mutational data and independent case studies optimizing a human antibody against SARS-CoV-2 and increasing the thrombolytic activity of staphylokinase. | https://openreview.net/pdf/03bb8ce604b1ff6fb6f9d6504d13322265401a20.pdf |
L2MAC: Large Language Model Automatic Computer for Extensive Code Generation | https://openreview.net/forum?id=EhrzQwsV4K | https://openreview.net/forum?id=EhrzQwsV4K | Samuel Holt,Max Ruiz Luyten,Mihaela van der Schaar | ICLR 2024,Poster | Transformer-based large language models (LLMs) are constrained by the fixed context window of the underlying transformer architecture, hindering their ability to produce long and coherent outputs. Memory-augmented LLMs are a promising solution, but current approaches cannot handle long output generation tasks since they (1) only focus on reading memory and reduce its evolution to the concatenation of new memories or (2) use very specialized memories that cannot adapt to other domains. This paper presents L2MAC, the first practical LLM-based general-purpose stored-program automatic computer (von Neumann architecture) framework, an LLM-based multi-agent system, for long and consistent output generation. Its memory has two components: the instruction registry, which is populated with a prompt program to solve the user-given task, and a file store, which will contain the final and intermediate outputs. Each instruction in turn is executed by a separate LLM agent, whose context is managed by a control unit capable of precise memory reading and writing to ensure effective interaction with the entire file store. These components enable L2MAC to generate extensive outputs, bypassing the constraints of the finite context window while producing outputs that fulfill a complex user-specified task. We empirically demonstrate that L2MAC achieves state-of-the-art performance in generating large codebases for system design tasks, significantly outperforming other coding methods in implementing the detailed user-specified task; we show that L2MAC works for general-purpose extensive text-based tasks, such as writing an entire book; and we provide valuable insights into L2MAC's performance improvement over existing methods. | https://openreview.net/pdf/85b60ddd90be827351cfe094c4e8fbcd35267ffe.pdf |
BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models | https://openreview.net/forum?id=c93SBwz1Ma | https://openreview.net/forum?id=c93SBwz1Ma | Zhen Xiang,Fengqing Jiang,Zidi Xiong,Bhaskar Ramasubramanian,Radha Poovendran,Bo Li | ICLR 2024,Poster | Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of backdoor attacks, wherein the model will output unintended malicious content under specific backdoor-triggered conditions during inference. Traditional methods for launching backdoor attacks involve either contaminating the training dataset with backdoored instances or directly manipulating the model parameters during deployment. However, these approaches are not practical for commercial LLMs that typically operate via API access. In this paper, we propose BadChain, the first backdoor attack against LLMs employing COT prompting, which does not require access to the training dataset or model parameters and imposes low computational overhead. BadChain leverages the inherent reasoning capabilities of LLMs by inserting a backdoor reasoning step into the sequence of reasoning steps of the model output, thereby altering the final response when a backdoor trigger is embedded in the query prompt. In particular, a subset of demonstrations will be manipulated to incorporate a backdoor reasoning step in COT prompting. Consequently, given any query prompt containing the backdoor trigger, the LLM will be misled to output unintended content. Empirically, we show the effectiveness of BadChain for two COT strategies across four LLMs (Llama2, GPT-3.5, PaLM2, and GPT-4) and six complex benchmark tasks encompassing arithmetic, commonsense, and symbolic reasoning. We show that the baseline backdoor attacks designed for simpler tasks such as semantic classification will fail on these complicated tasks. In addition, our findings reveal that LLMs endowed with stronger reasoning capabilities exhibit higher susceptibility to BadChain, exemplified by a high average attack success rate of 97.0\% across the six benchmark tasks on GPT-4. We also demonstrate the interpretability of BadChain by showing that the relationship between the trigger and the backdoor reasoning step can be well-explained based on the output of the backdoored model. Finally, we propose two defenses based on shuffling and demonstrate their overall ineffectiveness against BadChain. Therefore, BadChain remains a severe threat to LLMs, underscoring the urgency for the development of robust and effective future defenses. | https://openreview.net/pdf/f55f665827c60d9ab1815886945cb4b0fcd9b12b.pdf |
NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks | https://openreview.net/forum?id=samyfu6G93 | https://openreview.net/forum?id=samyfu6G93 | Wenxi Wang,Yang Hu,Mohit Tiwari,Sarfraz Khurshid,Kenneth McMillan,Risto Miikkulainen | ICLR 2024,Poster | Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security. Mainstream modern SAT solvers are based on the Conflict-Driven Clause Learning (CDCL) algorithm. Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs). However, so far this approach has either not made solving more effective, or required substantial GPU resources for frequent online model inferences. Aiming to make GNN improvements practical, this paper proposes an approach called NeuroBack, which builds on two insights: (1) predicting the phases (i.e., values) of variables appearing in the majority (or even all) of the satisfying assignments is essential for CDCL SAT solving, and (2) it is sufficient to query the neural model only once for the predictions before SAT solving starts. Once trained, the offline model inference allows NeuroBack to execute exclusively on the CPU, removing its reliance on GPU resources. To train NeuroBack, a new dataset called DataBack containing 120,286 data samples is created. Finally, NeuroBack is implemented as an enhancement to a state-of-the-art SAT solver called Kissat. As a result, it allowed Kissat to solve 5.2% more problems on the recent SAT competition problem set, SATCOMP-2022. NeuroBack therefore shows how machine learning can be harnessed to improve SAT solving in an effective and practical manner. | https://openreview.net/pdf/68668727b395677105cd2bfd38dc554dc5b67212.pdf |
Group Preference Optimization: Few-Shot Alignment of Large Language Models | https://openreview.net/forum?id=DpFeMH4l8Q | https://openreview.net/forum?id=DpFeMH4l8Q | Siyan Zhao,John Dang,Aditya Grover | ICLR 2024,Poster | Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups. Existing alignment algorithms can be expensive to apply to each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases. We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to the preferences of individual groups in a few-shot manner. In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations. For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs of varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences and fewer training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods. | https://openreview.net/pdf/e743356473984605880070de9402840ffe780599.pdf |
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training | https://openreview.net/forum?id=w3YZ9MSlBu | https://openreview.net/forum?id=w3YZ9MSlBu | Yizhi LI,Ruibin Yuan,Ge Zhang,Yinghao Ma,Xingran Chen,Hanzhi Yin,Chenghao Xiao,Chenghua Lin,Anton Ragni,Emmanouil Benetos,Norbert Gyenge,Roger Dannenberg,Ruibo Liu,Wenhu Chen,Gus Xia,Yemin Shi,Wenhao Huang,Zili Wang,Yike Guo,Jie Fu | ICLR 2024,Poster | Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is partially due to the distinctive challenges associated with modelling musical knowledge, particularly tonal and pitched characteristics of music. To address this research gap, we propose an acoustic **M**usic und**ER**standing model with large-scale self-supervised **T**raining (**MERT**), which incorporates teacher models to provide pseudo labels in the masked language modelling (MLM) style acoustic pre-training. In our exploration, we identified an effective combination of teacher models, which outperforms conventional speech and audio approaches in terms of performance. This combination includes an acoustic teacher based on Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical teacher based on the Constant-Q Transform (CQT). Furthermore, we explore a wide range of settings to overcome the instability in acoustic language model pre-training, which allows our designed paradigm to scale from 95M to 330M parameters. Experimental results indicate that our model can generalise and perform well on 14 music understanding tasks and attain state-of-the-art (SOTA) overall scores. | https://openreview.net/pdf/b0691ff4b8bef0f41861ce3bd2ea50491707b93c.pdf |
Variance-aware Regret Bounds for Stochastic Contextual Dueling Bandits | https://openreview.net/forum?id=rDH7dIFn20 | https://openreview.net/forum?id=rDH7dIFn20 | Qiwei Di,Tao Jin,Yue Wu,Heyang Zhao,Farzad Farnoud,Quanquan Gu | ICLR 2024,Poster | Dueling bandits is a prominent framework for decision-making involving preferential feedback, a valuable feature that fits various applications involving human interaction, such as ranking, information retrieval, and recommendation systems. While substantial efforts have been made to minimize the cumulative regret in dueling bandits, a notable gap in the current research is the absence of regret bounds that account for the inherent uncertainty in pairwise comparisons between the dueling arms. Intuitively, greater uncertainty suggests a higher level of difficulty in the problem. To bridge this gap, this paper studies the problem of contextual dueling bandits, where the binary comparison of dueling arms is generated from a generalized linear model (GLM). We propose a new SupLinUCB-type algorithm that enjoys computational efficiency and a variance-aware regret bound $\tilde O\big(d\sqrt{\sum_{t=1}^T\sigma_t^2} + d\big)$, where $\sigma_t$ is the variance of the pairwise comparison at round $t$, $d$ is the dimension of the context vectors, and $T$ is the time horizon. Our regret bound naturally aligns with the intuitive expectation — in scenarios where the comparison is deterministic, the algorithm only suffers from an $\tilde O(d)$ regret. We perform empirical experiments on synthetic data to confirm the advantage of our method over previous variance-agnostic algorithms. | https://openreview.net/pdf/4c1a08aad3975e9de1adbba054eea8e3f1287418.pdf |
A Discretization Framework for Robust Contextual Stochastic Optimization | https://openreview.net/forum?id=ueTdErd5Ib | https://openreview.net/forum?id=ueTdErd5Ib | Rares C Cristian,Georgia Perakis | ICLR 2024,Poster | We study contextual stochastic optimization problems. Optimization problems have uncertain parameters stemming from unknown, context-dependent, distributions. Due to the inherent uncertainty in these problems, one is often interested not only in minimizing expected cost, but also to be robust and protect against worst case scenarios. We propose a novel method that combines the learning stage with knowledge of the downstream optimization task. The method prescribes decisions which aim to maximize the likelihood that the cost is below a (user-controlled) threshold. The key idea is (1) to discretize the feasible region into subsets so that the uncertain objective function can be well approximated deterministically within each subset, and (2) devise a secondary optimization problem to prescribe decisions by integrating the individual approximations determined in step (1). We provide theoretical guarantees bounding the underlying regret of decisions proposed by our method. In addition, experimental results demonstrate that our approach is competitive in terms of average regret and yields more robust solutions than other methods proposed in the literature, including up to 20 times lower worst-case cost on a real-world electricity generation problem. | https://openreview.net/pdf/2097641b7d16f85a6d284604d4e690fc32f2e79e.pdf |
Risk Bounds of Accelerated SGD for Overparameterized Linear Regression | https://openreview.net/forum?id=AcoXPIPh4A | https://openreview.net/forum?id=AcoXPIPh4A | Xuheng Li,Yihe Deng,Jingfeng Wu,Dongruo Zhou,Quanquan Gu | ICLR 2024,Poster | Accelerated stochastic gradient descent (ASGD) is a workhorse in deep learning and often achieves better generalization performance than SGD. However, existing optimization theory can only explain the faster convergence of ASGD, but cannot explain its better generalization. In this paper, we study the generalization of ASGD for overparameterized linear regression, which is possibly the simplest setting of learning with overparameterization. We establish an instance-dependent excess risk bound for ASGD within each eigen-subspace of the data covariance matrix. Our analysis shows that (i) ASGD outperforms SGD in the subspace of small eigenvalues, exhibiting a faster rate of exponential decay for bias error, while in the subspace of large eigenvalues, its bias error decays slower than SGD; and (ii) the variance error of ASGD is always larger than that of SGD. Our result suggests that ASGD can outperform SGD when the difference between the initialization and the true weight vector is mostly confined to the subspace of small eigenvalues. Additionally, when our analysis is specialized to linear regression in the strongly convex setting, it yields a tighter bound for bias error than the best-known result. | https://openreview.net/pdf/c07f79889c3181ba8d25abe3732708eda102cceb.pdf |
Task structure and nonlinearity jointly determine learned representational geometry | https://openreview.net/forum?id=k9t8dQ30kU | https://openreview.net/forum?id=k9t8dQ30kU | Matteo Alleman,Jack Lindsey,Stefano Fusi | ICLR 2024,Poster | The utility of a learned neural representation depends on how well its geometry supports performance in downstream tasks. This geometry depends on the structure of the inputs, the structure of the target outputs, and on the architecture of the network. By studying the learning dynamics of networks with one hidden layer, we discovered that the network's activation function has an unexpectedly strong impact on the representational geometry: Tanh networks tend to learn representations that reflect the structure of the target outputs, while ReLU networks retain more information about the structure of the raw inputs. This difference is consistently observed across a broad class of parameterized tasks in which we modulated the degree of alignment between the geometry of the task inputs and that of the task labels. We analyzed the learning dynamics in weight space and show how the differences between the networks with Tanh and ReLU nonlinearities arise from the asymmetric saturation of ReLU, which leads feature neurons to specialize for different regions of input space. Feature neurons in Tanh networks, by contrast, tend to inherit the task label structure. Consequently, when the target outputs are low dimensional, Tanh networks generate neural representations that are more disentangled than those obtained with a ReLU nonlinearity. Our findings shed light on the interplay between input-output geometry, nonlinearity, and learned representations in neural networks. | https://openreview.net/pdf/b8a9d014f343b3669b6457839a9e6aa1801d1174.pdf |
Llemma: An Open Language Model for Mathematics | https://openreview.net/forum?id=4WnqRR915j | https://openreview.net/forum?id=4WnqRR915j | Zhangir Azerbayev,Hailey Schoelkopf,Keiran Paster,Marco Dos Santos,Stephen Marcus McAleer,Albert Q. Jiang,Jia Deng,Stella Biderman,Sean Welleck | ICLR 2024,Poster | We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known openly released models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments. | https://openreview.net/pdf/be75740b06066f002fed0867925b737ec1f8757f.pdf |
Directly Fine-Tuning Diffusion Models on Differentiable Rewards | https://openreview.net/forum?id=1vmSEVL19f | https://openreview.net/forum?id=1vmSEVL19f | Kevin Clark,Paul Vicol,Kevin Swersky,David J. Fleet | ICLR 2024,Poster | We present Direct Reward Fine-Tuning (DRaFT), a simple and effective method for fine-tuning diffusion models to maximize differentiable reward functions, such as scores from human preference models. We first show that it is possible to backpropagate the reward function gradient through the full sampling procedure, and that doing so achieves strong performance on a variety of rewards, outperforming reinforcement learning-based approaches. We then propose more efficient variants of DRaFT: DRaFT-K, which truncates backpropagation to only the last K steps of sampling, and DRaFT-LV, which obtains lower-variance gradient estimates for the case when K=1. We show that our methods work well for a variety of reward functions and can be used to substantially improve the aesthetic quality of images generated by Stable Diffusion 1.4. Finally, we draw connections between our approach and prior work, providing a unifying perspective on the design space of gradient-based fine-tuning algorithms. | https://openreview.net/pdf/4f6a4d187763cd647549b92a33f6f1c84f23bdec.pdf |
Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift | https://openreview.net/forum?id=eoTCKKOgIs | https://openreview.net/forum?id=eoTCKKOgIs | Jiawei Ge,Shange Tang,Jianqing Fan,Cong Ma,Chi Jin | ICLR 2024,Poster | A key challenge of modern machine learning systems is to achieve Out-of-Distribution (OOD) generalization---generalizing to target data whose distribution differs from that of source data. Despite its significant importance, the fundamental question of ``what are the most effective algorithms for OOD generalization'' remains open even under the standard setting of covariate shift.
This paper addresses this fundamental question by proving that, surprisingly, classical Maximum Likelihood Estimation (MLE) purely using source data (without any modification) achieves the *minimax* optimality for covariate shift under the *well-specified* setting. That is, *no* algorithm performs better than MLE in this setting (up to a constant factor), justifying that MLE is all you need.
Our result holds for a very rich class of parametric models, and does not require any boundedness condition on the density ratio. We illustrate the wide applicability of our framework by instantiating it to three concrete examples---linear regression, logistic regression, and phase retrieval. This paper further complements the study by proving that, under the *misspecified setting*, MLE is no longer the optimal choice, whereas the Maximum Weighted Likelihood Estimator (MWLE) emerges as minimax optimal in certain scenarios. | https://openreview.net/pdf/a7c9e2c8b894e55b7cdff72359975a63bd0b405a.pdf |
Differentiable Learning of Generalized Structured Matrices for Efficient Deep Neural Networks | https://openreview.net/forum?id=pAVJKp3Dvn | https://openreview.net/forum?id=pAVJKp3Dvn | Changwoo Lee,Hun-Seok Kim | ICLR 2024,Poster | This paper investigates efficient deep neural networks (DNNs) to replace dense unstructured weight matrices with structured ones that possess desired properties. The challenge arises because the optimal weight matrix structure in popular neural network models is obscure in most cases and may vary from layer to layer even in the same network. Prior structured matrices proposed for efficient DNNs were mostly hand-crafted without a generalized framework to systematically learn them. To address this issue, we propose a generalized and differentiable framework to learn efficient structures of weight matrices by gradient descent. We first define a new class of structured matrices that covers a wide range of structured matrices in the literature by adjusting the structural parameters. Then, the frequency-domain differentiable parameterization scheme based on the Gaussian-Dirichlet kernel is adopted to learn the structural parameters by proximal gradient descent. On the image and language tasks, our method learns efficient DNNs with structured matrices, achieving lower complexity and/or higher performance than prior approaches that employ low-rank, block-sparse, or block-low-rank matrices. | https://openreview.net/pdf/f49f086736d3cb7e71497a7fe09351bb020f7175.pdf |
A Flexible Generative Model for Heterogeneous Tabular EHR with Missing Modality | https://openreview.net/forum?id=W2tCmRrj7H | https://openreview.net/forum?id=W2tCmRrj7H | Huan He,William hao,Yuanzhe Xi,Yong Chen,Bradley Malin,Joyce Ho | ICLR 2024,Poster | Realistic synthetic electronic health records (EHRs) can be leveraged to accelerate methodological developments for research purposes while mitigating privacy concerns associated with data sharing. However, the training of Generative Adversarial Networks remains challenging, often resulting in issues like mode collapse. While diffusion models have demonstrated progress in generating quality synthetic samples for tabular EHRs given ample denoising steps, their performance wanes when confronted with missing modalities in heterogeneous tabular EHR data. For example, some EHRs contain solely static measurements, some contain only temporal measurements, and others contain a blend of both data types. To bridge this gap, we introduce FLEXGEN-EHR, a versatile diffusion model tailored for heterogeneous tabular EHRs, equipped with the capability of handling missing modalities in an integrative learning framework. We define an optimal transport module to align and accentuate the common feature space of heterogeneous EHRs. We empirically show that our model consistently outperforms existing state-of-the-art synthetic EHR generation methods both in fidelity by up to 3.10% and utility by up to 7.16%. Additionally, we show that our method can be successfully used in privacy-sensitive settings, where the original patient-level data cannot be shared. | https://openreview.net/pdf/4a599e95342021d11914b38bfcbd9f85362f685b.pdf |
Designing Skill-Compatible AI: Methodologies and Frameworks in Chess | https://openreview.net/forum?id=79rfgv3jw4 | https://openreview.net/forum?id=79rfgv3jw4 | Karim Hamade,Reid McIlroy-Young,Siddhartha Sen,Jon Kleinberg,Ashton Anderson | ICLR 2024,Poster | Powerful artificial intelligence systems are often used in settings where they must interact with agents that are computationally much weaker, for example when they work alongside humans or operate in complex environments where some tasks are handled by algorithms, heuristics, or other entities of varying computational power. For AI agents to successfully interact in these settings, however, achieving superhuman performance alone is not sufficient; they also need to account for suboptimal actions or idiosyncratic style from their less-skilled counterparts. We propose a formal evaluation framework for assessing the compatibility of near-optimal AI with interaction partners who may have much lower levels of skill; we use popular collaborative chess variants as model systems to study and develop AI agents that can successfully interact with lower-skill entities. Traditional chess engines designed to output near-optimal moves prove to be inadequate partners when paired with engines of various lower skill levels in this domain, as they are not designed to consider the presence of other agents. We contribute three methodologies to explicitly create skill-compatible AI agents in complex decision-making settings, and two chess game frameworks designed to foster collaboration between powerful AI agents and less-skilled partners. On these frameworks, our agents outperform state-of-the-art chess AI (based on AlphaZero) despite being weaker in conventional chess, demonstrating that skill-compatibility is a tangible trait that is qualitatively and measurably distinct from raw performance. Our evaluations further explore and clarify the mechanisms by which our agents achieve skill-compatibility. | https://openreview.net/pdf/8b21e7de733143ca1fd7e2af2fc6d9be736bd73b.pdf |
Tree Search-Based Policy Optimization under Stochastic Execution Delay | https://openreview.net/forum?id=RaqZX9LSGA | https://openreview.net/forum?id=RaqZX9LSGA | David Valensi,Esther Derman,Shie Mannor,Gal Dalal | ICLR 2024,Poster | The standard formulation of Markov decision processes (MDPs) assumes that the agent's decisions are executed immediately.
However, in numerous realistic applications such as robotics or healthcare, actions are performed with a delay whose value can even be stochastic. In this work, we introduce stochastic delayed execution MDPs, a new formalism addressing random delays without resorting to state augmentation. We show that given observed delay values, it is sufficient to perform a policy search in the class of Markov policies in order to reach optimal performance, thus extending the deterministic fixed delay case. Armed with this insight, we devise DEZ, a model-based algorithm that optimizes over the class of Markov policies. DEZ leverages Monte-Carlo tree search similar to its non-delayed variant EfficientZero to accurately infer future states from the action queue. Thus, it handles delayed execution while preserving the sample efficiency of EfficientZero. Through empirical analysis, we stress that none of the prior benchmarks consistently outperforms others across different delays. We demonstrate that our algorithm surpasses all benchmark methods in Atari games when dealing with constant or stochastic delays. The code is available at \url{https://github.com/davidva1/Delayed-EZ}. | https://openreview.net/pdf/8830f6b3cafc9913288b14d81260c6f589144619.pdf |
Context-Aware Meta-Learning | https://openreview.net/forum?id=lJYAkDVnRU | https://openreview.net/forum?id=lJYAkDVnRU | Christopher Fifty,Dennis Duan,Ronald Guenther Junkins,Ehsan Amid,Jure Leskovec,Christopher Re,Sebastian Thrun | ICLR 2024,Poster | Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning. However, visual models trained to detect new objects during inference have been unable to replicate this ability, and instead either perform poorly or require meta-training and/or fine-tuning on similar objects. In this work, we propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning. Our approach leverages a frozen pre-trained feature extractor, and analogous to in-context learning, recasts meta-learning as sequence modeling over datapoints with known labels and a test datapoint with an unknown label. On 8 out of 11 meta-learning benchmarks, our approach---without meta-training or fine-tuning---exceeds or matches the state-of-the-art algorithm, P>M>F, which is meta-trained on these benchmarks. | https://openreview.net/pdf/2b890b5667cfd1af349d0022584e53a3221e408c.pdf |
Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain | https://openreview.net/forum?id=caW7LdAALh | https://openreview.net/forum?id=caW7LdAALh | Marcus J. Min,Yangruibo Ding,Luca Buratti,Saurabh Pujar,Gail Kaiser,Suman Jana,Baishakhi Ray | ICLR 2024,Poster | Code Large Language Models (Code LLMs) are being increasingly employed in real-life applications, so evaluating them is critical. While the conventional accuracy evaluates the performance of Code LLMs on a set of individual tasks, their self-consistency across different tasks is overlooked. Intuitively, a trustworthy model should be self-consistent when generating natural language specifications for its own code and generating code for its own specifications. Failure to preserve self-consistency reveals a lack of understanding of the shared semantics underlying natural language and programming language, and therefore undermines the trustworthiness of a model. In this paper, we first formally define the self-consistency of Code LLMs and then design a framework, IdentityChain, which effectively and efficiently evaluates the self-consistency and conventional accuracy of a model at the same time. We study eleven Code LLMs and show that they fail to preserve self-consistency, which is indeed a distinct aspect from conventional accuracy. Furthermore, we show that IdentityChain can be used as a model debugging tool to expose weaknesses of Code LLMs by demonstrating three major weaknesses that we identify in current models using IdentityChain. Our code is available at https://github.com/marcusm117/IdentityChain. | https://openreview.net/pdf/cc6882135f5b0d5720cfc7705e5db5b102b72cbf.pdf |
Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting | https://openreview.net/forum?id=fjpfCOV4ru | https://openreview.net/forum?id=fjpfCOV4ru | Aleksei Ustimenko,Aleksandr Beznosikov | ICLR 2024,Poster | In this work, we consider a rather general and broad class of Markov chains, Ito chains, that look like the Euler-Maruyama discretization of some Stochastic Differential Equation. The chain we study is a unified framework for theoretical analysis. It comes with almost arbitrary isotropic and state-dependent noise instead of the normal and state-independent noise assumed in most related papers. Moreover, in our chain the drift and diffusion coefficients can be inexact in order to cover a wide range of applications such as Stochastic Gradient Langevin Dynamics, sampling, Stochastic Gradient Descent, or Stochastic Gradient Boosting. We prove a bound in $\mathcal{W}_{2}$-distance between the laws of our Ito chain and the corresponding differential equation. These results improve or cover most of the known estimates, and for some particular cases, our analysis is the first. | https://openreview.net/pdf/6096e5c08f636d6144a7e20b80523e0168fe0a4a.pdf |
Modeling Boundedly Rational Agents with Latent Inference Budgets | https://openreview.net/forum?id=W3VsHuga3j | https://openreview.net/forum?id=W3VsHuga3j | Athul Paul Jacob,Abhishek Gupta,Jacob Andreas | ICLR 2024,Poster | We study the problem of modeling a population of agents pursuing unknown goals subject to unknown computational constraints. In standard models of bounded rationality, sub-optimal decision-making is simulated by adding homoscedastic noise to optimal decisions rather than actually simulating constrained inference. In this work, we introduce a latent inference budget model (L-IBM) that models these constraints explicitly, via a latent variable (inferred jointly with a model of agents’ goals) that controls the runtime of an iterative inference algorithm. L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors. In three modeling tasks—inferring navigation goals from routes, inferring communicative intents from human utterances, and predicting next moves in human chess games—we show that L-IBMs match or outperforms Boltzmann models of decision-making under uncertainty. Moreover, the inferred inference budgets are themselves meaningful, efficient to compute, and correlated with measures of player skill, partner skill and task difficulty. | https://openreview.net/pdf/b6fcabc7a4521fa9b8ff72f6d09fbafc1e118951.pdf |
The Effectiveness of Random Forgetting for Robust Generalization | https://openreview.net/forum?id=MEGQGNUfPx | https://openreview.net/forum?id=MEGQGNUfPx | Vijaya Raghavan T Ramkumar,Bahram Zonooz,Elahe Arani | ICLR 2024,Poster | Deep neural networks are susceptible to adversarial attacks, which can compromise their performance and accuracy. Adversarial Training (AT) has emerged as a popular approach for protecting neural networks against such attacks. However, a key challenge of AT is robust overfitting, where the network's robust performance on test data deteriorates with further training, thus hindering generalization. Motivated by the concept of active forgetting in the brain, we introduce a novel learning paradigm called "Forget to Mitigate Overfitting (FOMO)". FOMO alternates between the forgetting phase, which randomly forgets a subset of weights and regulates the model's information through weight reinitialization, and the relearning phase, which emphasizes learning generalizable features. Our experiments on benchmark datasets and adversarial attacks show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy while improving the state-of-the-art robustness. Furthermore, FOMO provides a better trade-off between the standard and robust accuracy outperforming baseline adversarial methods. Finally, our framework is robust to AutoAttacks and increases generalization in many real-world scenarios. | https://openreview.net/pdf/23ffd81fb8a0b8a1c30e255410b712c66490ed39.pdf |
Diffeomorphic Mesh Deformation via Efficient Optimal Transport for Cortical Surface Reconstruction | https://openreview.net/forum?id=gxhRR8vUQb | https://openreview.net/forum?id=gxhRR8vUQb | Thanh Tung Le,Khai Nguyen,shanlin sun,Kun Han,Nhat Ho,Xiaohui Xie | ICLR 2024,Poster | Mesh deformation plays a pivotal role in many 3D vision tasks including dynamic simulations, rendering, and reconstruction. However, defining an efficient discrepancy between predicted and target meshes remains an open problem. A prevalent approach in current deep learning is the set-based approach which measures the discrepancy between two surfaces by comparing two randomly sampled point-clouds from the two meshes with Chamfer pseudo-distance. Nevertheless, the set-based approach still has limitations such as lacking a theoretical guarantee for choosing the number of points in sampled point-clouds, and the pseudo-metricity and the quadratic complexity of the Chamfer divergence. To address these issues, we propose a novel metric for learning mesh deformation. The metric is defined by sliced Wasserstein distance on meshes represented as probability measures that generalize the set-based approach. By leveraging probability measure space, we gain flexibility in encoding meshes using diverse forms of probability measures, such as continuous, empirical, and discrete measures via \textit{varifold} representation. After having encoded probability measures, we can compare meshes by using the sliced Wasserstein distance which is an effective optimal transport distance with linear computational complexity and can provide a fast statistical rate for approximating the surface of meshes. To this end, we employ a neural ordinary differential equation (ODE) to deform the input surface into the target shape by modeling the trajectories of the points on the surface.
Our experiments on cortical surface reconstruction demonstrate that our approach surpasses other competing methods in multiple datasets and metrics. | https://openreview.net/pdf/57b2a95817608783ca78a3f2e620841dc4facc1e.pdf |
Lie Group Decompositions for Equivariant Neural Networks | https://openreview.net/forum?id=p34fRKp8qA | https://openreview.net/forum?id=p34fRKp8qA | Mircea Mironenco,Patrick Forré | ICLR 2024,Poster | Invariance and equivariance to geometrical transformations have proven to be very useful inductive biases when training (convolutional) neural network models, especially in the low-data regime.
Much work has focused on the case where the symmetry group employed is compact or abelian, or both.
Recent work has explored enlarging the class of transformations used to the case of Lie groups, principally through the use of their Lie algebra, as well as the group exponential and logarithm maps.
The applicability of such methods to larger transformation groups is limited by the fact that depending on the group of interest $G$, the exponential map may not be surjective.
Further limitations are encountered when $G$ is neither compact nor abelian.
Using the structure and geometry of Lie groups and their homogeneous spaces, we present a framework by which it is possible to work with such groups primarily focusing on the Lie groups $G = \textnormal{GL}^{+}(n, \mathbb{R})$ and $G = \textnormal{SL}(n, \mathbb{R})$, as well as their representation as affine transformations $\mathbb{R}^{n} \rtimes G$.
Invariant integration as well as a global parametrization is realized by decomposing the "larger" groups into subgroups and submanifolds which can be handled individually.
Under this framework, we show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations.
We evaluate the robustness and out-of-distribution generalisation capability of our model on the standard affine-invariant benchmark classification task, where we outperform all previous equivariant models as well as all Capsule Network proposals. | https://openreview.net/pdf/a8637c53d3fd6b8302d16ec775feadb732c3d649.pdf |
Efficient Heterogeneous Meta-Learning via Channel Shuffling Modulation | https://openreview.net/forum?id=QiJuMJl0QS | https://openreview.net/forum?id=QiJuMJl0QS | Minh Hoang,Carl Kingsford | ICLR 2024,Poster | We tackle the problem of meta-learning across heterogeneous tasks. This problem seeks to extract and generalize transferable meta-knowledge through streaming task sets from a multi-modal task distribution. The extracted meta-knowledge can be used to create predictors for new tasks using a small number of labeled samples. Most meta-learning methods assume a homogeneous task distribution, thus limiting their generalization capacity when handling multi-modal task distributions. Recent work has shown that the generalization of meta-learning depends on the similarity of tasks in the training distribution, and this has led to many clustering approaches that aim to detect homogeneous clusters of tasks. However, these methods suffer from a significant increase in parameter complexity. To overcome this weakness, we propose a new heterogeneous meta-learning strategy that efficiently captures the multi-modality of the task distribution via modulating the routing between convolution channels in the network, instead of directly modulating the network weights. This new mechanism can be cast as a permutation learning problem. We further introduce a novel neural permutation layer based on the classical Benes routing network, which has sub-quadratic parameter complexity in the total number of channels, as compared to the quadratic complexity of the state-of-the-art Gumbel-Sinkhorn layer. We demonstrate our approach on various multi-modal meta-learning benchmarks, showing that our framework outperforms previous methods in both generalization accuracy and convergence speed. | https://openreview.net/pdf/2372b26cc6c55bb1b97f26c7f0b9cc2e897ede18.pdf |
To Grok or not to Grok: Disentangling Generalization and Memorization on Corrupted Algorithmic Datasets | https://openreview.net/forum?id=UHjE5v5MB7 | https://openreview.net/forum?id=UHjE5v5MB7 | Darshil Doshi,Aritra Das,Tianyu He,Andrey Gromov | ICLR 2024,Poster | Robust generalization is a major challenge in deep learning, particularly when the number of trainable parameters is very large. In general, it is very difficult to know if the network has memorized a particular set of examples or understood the underlying rule (or both). Motivated by this challenge, we study an interpretable model where generalizing representations are understood analytically, and are easily distinguishable from the memorizing ones. Namely, we consider multi-layer perceptron (MLP) and Transformer architectures trained on modular arithmetic tasks, where ($\xi \cdot 100\\%$) of labels are corrupted (*i.e.* some results of the modular operations in the training set are incorrect). We show that (i) it is possible for the network to memorize the corrupted labels *and* achieve $100\\%$ generalization at the same time; (ii) the memorizing neurons can be identified and pruned, lowering the accuracy on corrupted data and improving the accuracy on uncorrupted data; (iii) regularization methods such as weight decay, dropout and BatchNorm force the network to ignore the corrupted data during optimization, and achieve $100\\%$ accuracy on the uncorrupted dataset; and (iv) the effect of these regularization methods is ("mechanistically") interpretable: weight decay and dropout force all the neurons to learn generalizing representations, while BatchNorm de-amplifies the output of memorizing neurons and amplifies the output of the generalizing ones. 
Finally, we show that in the presence of regularization, the training dynamics involves two consecutive stages: first, the network undergoes *grokking* dynamics reaching high train *and* test accuracy; second, it unlearns the memorizing representations, where the train accuracy suddenly jumps from $100\\%$ to $100 (1-\xi)\\%$. | https://openreview.net/pdf/b5a7d2b34d54912009b544ef9c33ac0e3d8050a2.pdf |
VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections | https://openreview.net/forum?id=SUUrkC3STJ | https://openreview.net/forum?id=SUUrkC3STJ | Dongqi Fu,Zhigang Hua,Yan Xie,Jin Fang,Si Zhang,Kaan Sancak,Hao Wu,Andrey Malevich,Jingrui He,Bo Long | ICLR 2024,Poster | Graph transformer has been proven as an effective graph learning method for its adoption of attention mechanism that is capable of capturing expressive representations from complex topological and feature information of graphs. Graph transformer conventionally performs dense attention (or global attention) for every pair of nodes to learn node representation vectors, resulting in quadratic computational costs that are unaffordable for large-scale graph data. Therefore, mini-batch training for graph transformers is a promising direction, but limited samples in each mini-batch cannot support effective dense attention to encode informative representations. Facing this bottleneck, (1) we start by assigning each node a token list that is sampled by personalized PageRank (PPR) and then apply standard multi-head self-attention only on this list to compute its node representations. This PPR tokenization method decouples model training from complex graph topological information and makes heavy feature engineering offline and independent, such that mini-batch training of graph transformers is possible by loading each node's token list in batches. We further prove this PPR tokenization is viable as a graph convolution network with a fixed polynomial filter and jumping knowledge. However, only using personalized PageRank may limit information carried by a token list, which could not support different graph inductive biases for model training.
To this end, (2) we rewire graphs by introducing multiple types of virtual connections through structure- and content-based super nodes that enable PPR tokenization to encode local and global contexts, long-range interaction, and heterophilous information into each node's token list, and then formalize our $\underline{\textbf{V}}$irtual $\underline{\textbf{C}}$onnection $\underline{\textbf{R}}$anking based $\underline{\textbf{Graph}}$ Trans$\underline{\textbf{former}}$ (VCR-Graphormer). Overall, VCR-Graphormer needs $O(m + k\log k)$ complexity for graph tokenization as compared to $O(n^{3})$ of previous works. The [code](https://github.com/DongqiFu/VCR-Graphormer) is provided. | https://openreview.net/pdf/5914f5217eeed4b267eb84117a931ad9f7dbd355.pdf |
Optimistic Bayesian Optimization with Unknown Constraints | https://openreview.net/forum?id=D4NJFfrqoq | https://openreview.net/forum?id=D4NJFfrqoq | Quoc Phong Nguyen,Wan Theng Ruth Chew,Le Song,Bryan Kian Hsiang Low,Patrick Jaillet | ICLR 2024,Poster | Though some research efforts have been dedicated to constrained Bayesian optimization (BO), there remains a notable absence of a principled approach with a theoretical performance guarantee in the decoupled setting. Such a setting involves independent evaluations of the objective function and constraints at different inputs, and is hence a relaxation of the commonly-studied coupled setting where functions must be evaluated together. As a result, the decoupled setting requires an adaptive selection between evaluating either the objective function or a constraint, in addition to selecting an input (in the coupled setting). This paper presents a novel constrained BO algorithm with a provable performance guarantee that can address the above relaxed setting. Specifically, it considers the fundamental trade-off between exploration and exploitation in constrained BO, and, interestingly, affords a noteworthy connection to active learning. The performance of our proposed algorithms is also empirically evaluated using several synthetic and real-world optimization problems. | https://openreview.net/pdf/b9f8e9fc8f8a6b5a79bf0426ca34210e1c6552ff.pdf |
DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness | https://openreview.net/forum?id=m7aPLHwsLr | https://openreview.net/forum?id=m7aPLHwsLr | Shoumik Saha,Wenxiao Wang,Yigitcan Kaya,Soheil Feizi,Tudor Dumitras | ICLR 2024,Poster | Machine Learning (ML) models have been utilized for malware detection for over two decades. Consequently, this ignited an ongoing arms race between malware authors and antivirus systems, compelling researchers to propose defenses for malware-detection models against evasion attacks. However, most if not all existing defenses against evasion attacks suffer from sizable performance degradation and/or can defend against only specific attacks, which makes them less practical in real-world settings. In this work, we develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the *de-randomized smoothing* technique for the domain of malware detection. Specifically, we propose a *window ablation* scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables. After showing how DRSM is theoretically robust against attacks with contiguous adversarial bytes, we verify its performance and certified robustness experimentally, where we observe only marginal accuracy drops as the cost of robustness. To our knowledge, we are the first to offer certified robustness in the realm of static detection of malware executables. More surprisingly, through evaluating DRSM against $9$ empirical attacks of different types, we observe that the proposed defense is empirically robust to some extent against a diverse set of attacks, some of which even fall out of the scope of its original threat model. In addition, we collected $15.5K$ recent benign raw executables from diverse sources, which will be made public as a dataset called PACE (Publicly Accessible Collection(s) of Executables) to alleviate the scarcity of publicly available benign datasets for studying malware detection and provide future research with more representative data of the time. Our code and dataset are available at https://github.com/ShoumikSaha/DRSM | https://openreview.net/pdf/f16a6785a007c48f8e9c460b978613135d30e8c3.pdf |
On the Variance of Neural Network Training with respect to Test Sets and Distributions | https://openreview.net/forum?id=pEGSdJu52I | https://openreview.net/forum?id=pEGSdJu52I | Keller Jordan | ICLR 2024,Poster | Neural network trainings are stochastic, causing the performance of trained networks to vary across repeated runs of training. We contribute the following results towards understanding this variation. (1) Despite having significant variance on their test-sets, we demonstrate that standard CIFAR-10 and ImageNet trainings have little variance in their performance on the test-distributions from which their test-sets are sampled. (2) We introduce the independent errors assumption and show that it suffices to recover the structure and variance of the empirical accuracy distribution across repeated runs of training. (3) We prove that test-set variance is unavoidable given the observation that ensembles of identically trained networks are calibrated (Jiang et al., 2021), and demonstrate that the variance of binary classification trainings closely follows a simple formula based on the error rate and number of test examples. (4) We conduct preliminary studies of data augmentation, learning rate, finetuning instability and distribution-shift through the lens of variance between runs. | https://openreview.net/pdf/2de59bd3955e067d8a7c6a4eee59896af773422d.pdf |
Large Language Models to Enhance Bayesian Optimization | https://openreview.net/forum?id=OOxotBmGol | https://openreview.net/forum?id=OOxotBmGol | Tennison Liu,Nicolás Astorga,Nabeel Seedat,Mihaela van der Schaar | ICLR 2024,Poster | Bayesian optimization (BO) is a powerful approach for optimizing complex and expensive-to-evaluate black-box functions. Its importance is underscored in many applications, notably including hyperparameter tuning, but its efficacy depends on efficiently balancing exploration and exploitation. While there has been substantial progress in BO methods, striking this balance remains a delicate process. In this light, we present \texttt{LLAMBO}, a novel approach that integrates the capabilities of Large Language Models (LLM) within BO. At a high level, we frame the BO problem in natural language, enabling LLMs to iteratively \emph{propose} and \emph{evaluate} promising solutions conditioned on historical evaluations. More specifically, we explore how combining contextual understanding, few-shot learning proficiency, and domain knowledge of LLMs can improve model-based BO. Our findings illustrate that \texttt{LLAMBO} is effective at zero-shot warmstarting, and enhances surrogate modeling and candidate sampling, especially in the early stages of search when observations are sparse. Our approach is performed in context and does not require LLM finetuning. Additionally, it is modular by design, allowing individual components to be integrated into existing BO frameworks, or function cohesively as an end-to-end method. We empirically validate \texttt{LLAMBO}'s efficacy on the problem of hyperparameter tuning, highlighting strong empirical performance across a range of diverse benchmarks, proprietary, and synthetic tasks. | https://openreview.net/pdf/b22b6ea9b84dd474cf4871acc78a784c45bac294.pdf |
Towards Identifiable Unsupervised Domain Translation: A Diversified Distribution Matching Approach | https://openreview.net/forum?id=55uj7mU7Cv | https://openreview.net/forum?id=55uj7mU7Cv | Sagar Shrestha,Xiao Fu | ICLR 2024,Poster | Unsupervised domain translation (UDT) aims to find functions that convert samples from one domain (e.g., sketches) to another domain (e.g., photos) without changing the high-level semantic meaning (also referred to as "content"). The translation functions are often sought by probability distribution matching of the transformed source domain and target domain. CycleGAN stands as arguably the most representative approach among this line of work. However, it was noticed in the literature that CycleGAN and variants could fail to identify the desired translation functions and produce content-misaligned translations. This limitation arises due to the presence of multiple translation functions---referred to as "measure-preserving automorphisms" (MPAs)---in the solution space of the learning criteria. Despite awareness of such identifiability issues, solutions have remained elusive. This study delves into the core identifiability inquiry and introduces an MPA elimination theory. Our analysis shows that an MPA is unlikely to exist if multiple pairs of diverse cross-domain conditional distributions are matched by the learning function. Our theory leads to a UDT learner using distribution matching over auxiliary variable-induced subsets of the domains---rather than over the entire data domains as in the classical approaches. The proposed framework is the first to rigorously establish translation identifiability under reasonable UDT settings, to the best of our knowledge. Experiments corroborate our theoretical claims. | https://openreview.net/pdf/310f0697e921696dbb723b2f938345c38befe7d6.pdf |
SineNet: Learning Temporal Dynamics in Time-Dependent Partial Differential Equations | https://openreview.net/forum?id=LSYhE2hLWG | https://openreview.net/forum?id=LSYhE2hLWG | Xuan Zhang,Jacob Helwig,Yuchao Lin,Yaochen Xie,Cong Fu,Stephan Wojtowytsch,Shuiwang Ji | ICLR 2024,Poster | We consider using deep neural networks to solve time-dependent partial differential equations (PDEs), where multi-scale processing is crucial for modeling complex, time-evolving dynamics. While the U-Net architecture with skip connections is commonly used by prior studies to enable multi-scale processing, our analysis shows that the need for features to evolve across layers results in temporally misaligned features in skip connections, which limits the model’s performance. To address this limitation, we propose SineNet, consisting of multiple sequentially connected U-shaped network blocks, referred to as waves. In SineNet, high-resolution features are evolved progressively through multiple stages, thereby reducing the amount of misalignment within each stage. We furthermore analyze the role of skip connections in enabling both parallel and sequential processing of multi-scale information. Our method is rigorously tested on multiple PDE datasets, including the Navier-Stokes equations and shallow water equations, showcasing the advantages of our proposed approach over conventional U-Nets with a comparable parameter budget. We further demonstrate that increasing the number of waves in SineNet while maintaining the same number of parameters leads to a monotonically improved performance. The results highlight the effectiveness of SineNet and the potential of our approach in advancing the state-of-the-art in neural PDE solver design. Our code is available as part of AIRS (https://github.com/divelab/AIRS). | https://openreview.net/pdf/e09653bff41011adca550b8b7e3e27bd1e59e9f1.pdf |
GNNBoundary: Towards Explaining Graph Neural Networks through the Lens of Decision Boundaries | https://openreview.net/forum?id=WIzzXCVYiH | https://openreview.net/forum?id=WIzzXCVYiH | Xiaoqi Wang,Han Wei Shen | ICLR 2024,Poster | While Graph Neural Networks (GNNs) have achieved remarkable performance on various machine learning tasks on graph data, they also raised questions regarding their transparency and interpretability. Recently, there have been extensive research efforts to explain the decision-making process of GNNs. These efforts often focus on explaining why a certain prediction is made for a particular instance, or what discriminative features the GNNs try to detect for each class. However, to the best of our knowledge, there is no existing study on understanding the decision boundaries of GNNs, even though the decision-making process of GNNs is directly determined by the decision boundaries. To bridge this research gap, we propose a model-level explainability method called GNNBoundary, which attempts to gain deeper insights into the decision boundaries of graph classifiers. Specifically, we first develop an algorithm to identify the pairs of classes whose decision regions are adjacent. For an adjacent class pair, the near-boundary graphs between them are effectively generated by optimizing a novel objective function specifically designed for boundary graph generation. Thus, by analyzing the near-boundary graphs, the important characteristics of decision boundaries can be uncovered. To evaluate the efficacy of GNNBoundary, we conduct experiments on both synthetic and public real-world datasets. The results demonstrate that, via the analysis of faithful near-boundary graphs generated by GNNBoundary, we can thoroughly assess the robustness and generalizability of the explained GNNs. The official implementation can be found at https://github.com/yolandalalala/GNNBoundary. | https://openreview.net/pdf/4c88af58e7fea802a541ca1a850768e448700110.pdf |
Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN | https://openreview.net/forum?id=0jsfesDZDq | https://openreview.net/forum?id=0jsfesDZDq | Biswadeep Chakraborty,Beomseok Kang,Harshit Kumar,Saibal Mukhopadhyay | ICLR 2024,Poster | Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally efficient and brain-inspired machine learning model. The design of sparse RSNNs with fewer neurons and synapses helps reduce the computational complexity of RSNNs. Traditionally, sparse SNNs are obtained by first training a dense and complex SNN for a target task and, next, eliminating neurons with low activity (activity-based pruning) while maintaining task performance. In contrast, this paper presents a task-agnostic methodology for designing sparse RSNNs by pruning an untrained (arbitrarily initialized) large model. We introduce a novel Lyapunov Noise Pruning (LNP) algorithm that uses graph sparsification methods and utilizes Lyapunov exponents to design a stable sparse RSNN from an untrained RSNN. We show that the LNP can leverage diversity in neuronal timescales to design a sparse Heterogeneous RSNN (HRSNN). Further, we show that the same sparse HRSNN model can be trained for different tasks, such as image classification and time-series prediction. The experimental results show that, in spite of being task-agnostic, LNP increases computational efficiency (fewer neurons and synapses) and prediction performance of RSNNs compared to traditional activity-based pruning of trained dense models. | https://openreview.net/pdf/7e55c6eb14311819313269f9b7f133c8cee91597.pdf |
Investigating the Benefits of Projection Head for Representation Learning | https://openreview.net/forum?id=GgEAdqYPNA | https://openreview.net/forum?id=GgEAdqYPNA | Yihao Xue,Eric Gan,Jiayi Ni,Siddharth Joshi,Baharan Mirzasoleiman | ICLR 2024,Poster | An effective technique for obtaining high-quality representations is adding a projection head on top of the encoder during training, then discarding it and using the pre-projection representations. Despite its proven practical effectiveness, the reason behind the success of this technique is poorly understood. The pre-projection representations are not directly optimized by the loss function, raising the question: what makes them better? In this work, we provide a rigorous theoretical answer to this question. We start by examining linear models trained with self-supervised contrastive loss. We reveal that the implicit bias of training algorithms leads to layer-wise progressive feature weighting, where features become increasingly unequal as we go deeper into the layers. Consequently, lower layers tend to have more normalized and less specialized representations. We theoretically characterize scenarios where such representations are more beneficial, highlighting the intricate interplay between data augmentation and input features. Additionally, we demonstrate that introducing non-linearity into the network allows lower layers to learn features that are completely absent in higher layers. Finally, we show how this mechanism improves the robustness in supervised contrastive learning and supervised learning. We empirically validate our results through various experiments on CIFAR-10/100, UrbanCars and shifted versions of ImageNet. We also introduce a potential alternative to projection head, which offers a more interpretable and controllable design. | https://openreview.net/pdf/274d4b0de6e23f1e7b5f267ab4aa084835b1eec5.pdf |
A Variational Perspective on Solving Inverse Problems with Diffusion Models | https://openreview.net/forum?id=1YO4EE3SPB | https://openreview.net/forum?id=1YO4EE3SPB | Morteza Mardani,Jiaming Song,Jan Kautz,Arash Vahdat | ICLR 2024,Poster | Diffusion models have emerged as a key pillar of foundation models in visual domains. One of their critical applications is to universally solve different downstream inverse tasks via a single diffusion prior without re-training for each task. Most inverse tasks can be formulated as inferring a posterior distribution over data (e.g., a full image) given a measurement (e.g., a masked image). This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable. To cope with this challenge, we propose a variational approach that by design seeks to approximate the true posterior distribution. We show that our approach naturally leads to regularization by denoising diffusion process (RED-diff) where denoisers at different timesteps concurrently impose different structural constraints over the image. To gauge the contribution of denoisers from different timesteps, we propose a weighting mechanism based on signal-to-noise-ratio (SNR). Our approach provides a new variational perspective for solving inverse problems with diffusion models, allowing us to formulate sampling as stochastic optimization, where one can simply apply off-the-shelf solvers with lightweight iterates. Our experiments for various linear and nonlinear image restoration tasks demonstrate the strengths of our method compared with state-of-the-art sampling-based diffusion models. The code is available online \footnote{\url{https://github.com/NVlabs/RED-diff}}. | https://openreview.net/pdf/98565461a56247bb80f25ac785745223d4ec97f7.pdf |
Can Large Language Models Infer Causation from Correlation? | https://openreview.net/forum?id=vqIH0ObdqL | https://openreview.net/forum?id=vqIH0ObdqL | Zhijing Jin,Jiarui Liu,Zhiheng LYU,Spencer Poff,Mrinmaya Sachan,Rada Mihalcea,Mona T. Diab,Bernhard Schölkopf | ICLR 2024,Poster | Causal inference is one of the hallmarks of human intelligence. While the field of CausalNLP has attracted much interest in recent years, existing causal inference datasets in NLP primarily rely on discovering causality from empirical knowledge (e.g., commonsense knowledge). In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs). Specifically, we formulate a novel task Corr2Cause, which takes a set of correlational statements and determines the causal relationship between the variables. We curate a large-scale dataset of more than 200K samples, on which we evaluate seventeen existing LLMs. Through our experiments, we identify a key shortcoming of LLMs in terms of their causal inference skills, and show that these models achieve close to random performance on the task. This shortcoming is somewhat mitigated when we try to re-purpose LLMs for this skill via finetuning, but we find that these models still fail to generalize – they can only perform causal inference in in-distribution settings when variable names and textual expressions used in the queries are similar to those in the training set, but fail in out-of-distribution settings generated by perturbing these queries. Corr2Cause is a challenging task for LLMs, and can be helpful in guiding future research on improving LLMs’ pure reasoning skills and generalizability. Our data is at https://huggingface.co/datasets/causalnlp/corr2cause. Our code is at https://github.com/causalNLP/corr2cause. | https://openreview.net/pdf/c32089ff7c53b6e59610da92bace7b326b0a622c.pdf |
Improved statistical and computational complexity of the mean-field Langevin dynamics under structured data | https://openreview.net/forum?id=Of2nEDc4s7 | https://openreview.net/forum?id=Of2nEDc4s7 | Atsushi Nitanda,Kazusato Oko,Taiji Suzuki,Denny Wu | ICLR 2024,Poster | Recent works have shown that neural networks optimized by gradient-based methods can adapt to sparse or low-dimensional target functions through feature learning; an often studied target is the sparse parity function on the unit hypercube. However, such isotropic data setting does not capture the anisotropy and low intrinsic dimensionality exhibited in realistic datasets. In this work, we address this shortcoming by studying how gradient-based feature learning interacts with structured (anisotropic) input data: we consider the classification of $k$-sparse parity on high-dimensional orthotope where the feature coordinates have varying magnitudes, and analyze the learning complexity of the mean-field Langevin dynamics (MFLD), which describes the noisy gradient descent update on two-layer neural network. We show that the statistical complexity (i.e. sample size) and computational complexity (i.e. network width) of MFLD can both be improved when prominent directions of the anisotropic input data align with the support of the target function. Moreover, by employing a coordinate transform determined by the gradient covariance, the width can be made independent of the target degree $k$. Lastly, we demonstrate the benefit of feature learning by establishing a kernel lower bound on the classification error, which applies to neural networks in the lazy regime. | https://openreview.net/pdf/5513243838f990d8d7323b2afa870a627e9c9484.pdf |
Jointly-Learned Exit and Inference for a Dynamic Neural Network | https://openreview.net/forum?id=jX2DT7qDam | https://openreview.net/forum?id=jX2DT7qDam | florence regol,Joud Chataoui,Mark Coates | ICLR 2024,Poster | Large pretrained models, coupled with fine-tuning, are slowly becoming established as the dominant architecture in machine learning. Even though these models offer impressive performance, their practical application is often limited by the prohibitive amount of resources required for $\textit{every}$ inference. Early-exiting dynamic neural networks (EDNN) circumvent this issue by allowing a model to make some of its predictions from intermediate layers (i.e., early-exit). Training an EDNN architecture is challenging as it consists of two intertwined components: the gating mechanism (GM) that controls early-exiting decisions and the intermediate inference modules (IMs) that perform inference from intermediate representations. As a result, most existing approaches rely on thresholding confidence metrics for the gating mechanism and strive to improve the underlying backbone network and the inference modules. Although successful, this approach has two fundamental shortcomings: 1) the GMs and the IMs are decoupled during training, leading to a train-test mismatch; and 2) the thresholding gating mechanism introduces a positive bias into the predictive probabilities, making it difficult to readily extract uncertainty information. We propose a novel architecture that connects these two modules. This leads to significant performance improvements on classification datasets and enables better uncertainty characterization capabilities. | https://openreview.net/pdf/530fc74c8de9fa89f925873092154bf72fa2e92c.pdf |
Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift | https://openreview.net/forum?id=rtl4XnJYBh | https://openreview.net/forum?id=rtl4XnJYBh | Yihao Xue,Siddharth Joshi,Dang Nguyen,Baharan Mirzasoleiman | ICLR 2024,Poster | Recently, multimodal contrastive learning (MMCL) approaches, such as CLIP, have achieved a remarkable success in learning representations that are robust against distribution shift and generalize to new domains. Despite the empirical success, the mechanism behind learning such generalizable representations is not understood. In this work, we rigorously analyze this problem and uncover two mechanisms behind MMCL's robustness: \emph{intra-class contrasting}, which allows the model to learn features with a high variance, and \emph{inter-class feature sharing}, where annotated details in one class help the model learn other classes better. Both mechanisms prevent spurious features that are over-represented in the training data from overshadowing the generalizable core features. This yields superior zero-shot classification accuracy under distribution shift. Furthermore, we theoretically demonstrate the benefits of using rich captions on robustness and explore the effect of annotating different types of details in the captions. We validate our theoretical findings through experiments, including a well-designed synthetic experiment and an experiment involving training CLIP models on MSCOCO/Conceptual Captions and evaluating them on shifted ImageNets. | https://openreview.net/pdf/6a626c46d27f126eacb827909e73e287e464009d.pdf |
SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking | https://openreview.net/forum?id=FJWT0692hw | https://openreview.net/forum?id=FJWT0692hw | Chris Cundy,Stefano Ermon | ICLR 2024,Poster | In many domains, autoregressive models can attain high likelihood on the task of predicting the next observation. However, this maximum-likelihood (MLE) objective does not necessarily match a downstream use-case of autoregressively generating high-quality sequences. The MLE objective weights sequences proportionally to their frequency under the data distribution, with no guidance for the model's behaviour out of distribution (OOD): leading to compounding error during autoregressive generation. In order to address this compounding error problem, we formulate sequence generation as an imitation learning (IL) problem. This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset, including divergences with weight on OOD generated sequences. The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process. This further mitigates the compounding error problem by allowing the model to revert a sampled token if it takes the sequence OOD. Our resulting method, SequenceMatch, can be implemented without adversarial training or major architectural changes. We identify the SequenceMatch-χ2 divergence as a more suitable training objective for autoregressive models which are used for generation. We show that empirically, SequenceMatch training leads to improvements over MLE on text generation with language models and arithmetic | https://openreview.net/pdf/c034cb58427202bff4ad7164db6d76d3e815b9d6.pdf |
Layer-wise linear mode connectivity | https://openreview.net/forum?id=LfmZh91tDI | https://openreview.net/forum?id=LfmZh91tDI | Linara Adilova,Maksym Andriushchenko,Michael Kamp,Asja Fischer,Martin Jaggi | ICLR 2024,Poster | Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models. It is most prominently used in federated learning. If models are averaged at the end of training, this can only lead to a good performing model if the loss surface of interest is very particular, i.e., the loss in the midpoint between the two models needs to be sufficiently low. This is impossible to guarantee for the non-convex losses of state-of-the-art networks. For averaging models trained on vastly different datasets, it was proposed to average only the parameters of particular layers or combinations of layers, resulting in better performing models. To get a better understanding of the effect of layer-wise averaging, we analyse the performance of the models that result from averaging single layers, or groups of layers. Based on our empirical and theoretical investigation, we introduce a novel notion of the layer-wise linear connectivity, and show that deep networks do not have layer-wise barriers between them. | https://openreview.net/pdf/039a426405636e0c6ee6fd6ea0035e1d20b6bc28.pdf |
Understanding Certified Training with Interval Bound Propagation | https://openreview.net/forum?id=h05eQniJsQ | https://openreview.net/forum?id=h05eQniJsQ | Yuhao Mao,Mark Niklas Mueller,Marc Fischer,Martin Vechev | ICLR 2024,Poster | As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a robustness specification. Curiously, training methods based on the imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounds. Still, we lack a theoretical understanding of the mechanisms making IBP so successful. In this work, we investigate these mechanisms by leveraging a novel metric measuring the tightness of IBP bounds. We first show theoretically that, for deep linear models (DLNs), tightness decreases with width and depth at initialization, but improves with IBP training. We, then, derive sufficient and necessary conditions on weight matrices for IBP bounds to become exact and demonstrate that these impose strong regularization, providing an explanation for the observed robustness-accuracy trade-off. Finally, we show how these results on DLNs transfer to ReLU networks, before conducting an extensive empirical study, (i) confirming this transferability and yielding state-of-the-art certified accuracy, (ii) finding that while all IBP-based training methods lead to high tightness, this increase is dominated by the size of the propagated input regions rather than the robustness specification, and finally (iii) observing that non-IBP-based methods do not increase tightness. Together, these results help explain the success of recent certified training methods and may guide the development of new ones. | https://openreview.net/pdf/3368ca8a5f0ec99ee88301ab1de8634647f5ce72.pdf |
Offline RL with Observation Histories: Analyzing and Improving Sample Complexity | https://openreview.net/forum?id=GnOLWS4Llt | https://openreview.net/forum?id=GnOLWS4Llt | Joey Hong,Anca Dragan,Sergey Levine | ICLR 2024,Poster | Offline reinforcement learning (RL) can in principle synthesize more optimal behavior from a dataset consisting only of suboptimal trials. One way that this can happen is by "stitching" together the best parts of otherwise suboptimal trajectories that overlap on similar states, to create new behaviors where each individual state is in-distribution, but the overall returns are higher. However, in many interesting and complex applications, such as autonomous navigation and dialogue systems, the state is partially observed. Even worse, the state representation is unknown or not easy to define. In such cases, policies and value functions are often conditioned on observation histories instead of states. In these cases, it is not clear if the same kind of "stitching" is feasible at the level of observation histories, since two different trajectories would always have different histories, and thus "similar states" that might lead to effective stitching cannot be leveraged. Theoretically, we show that standard offline RL algorithms conditioned on observation histories suffer from poor sample complexity, in accordance with the above intuition. We then identify sufficient conditions under which offline RL can still be efficient -- intuitively, it needs to learn a compact representation of history comprising only features relevant for action selection. We introduce a bisimulation loss that captures the extent to which this happens, and propose that offline RL can explicitly optimize this loss to improve worst-case sample complexity. Empirically, we show that across a variety of tasks either our proposed loss improves performance, or the value of this loss is already minimized as a consequence of standard offline RL, indicating that it correlates well with good performance. | https://openreview.net/pdf/efd5f3cf56edfff0bb4ff14fb974d994bfaa976f.pdf |
Graph-based Virtual Sensing from Sparse and Partial Multivariate Observations | https://openreview.net/forum?id=CAqdG2dy5s | https://openreview.net/forum?id=CAqdG2dy5s | Giovanni De Felice,Andrea Cini,Daniele Zambon,Vladimir Gusev,Cesare Alippi | ICLR 2024,Poster | Virtual sensing techniques allow for inferring signals at new unmonitored locations by exploiting spatio-temporal measurements coming from physical sensors at different locations. However, as the sensor coverage becomes sparse due to costs or other constraints, physical proximity cannot be used to support interpolation. In this paper, we overcome this challenge by leveraging dependencies between the target variable and a set of correlated variables (covariates) that can frequently be associated with each location of interest. From this viewpoint, covariates provide partial observability, and the problem consists of inferring values for unobserved channels by exploiting observations at other locations to learn how such variables can correlate. We introduce a novel graph-based methodology to exploit such relationships and design a graph deep learning architecture, named GgNet, implementing the framework. The proposed approach relies on propagating information over a nested graph structure that is used to learn dependencies between variables as well as locations. GgNet is extensively evaluated under different virtual sensing scenarios, demonstrating higher reconstruction accuracy compared to the state-of-the-art. | https://openreview.net/pdf/1f134ca075964391b30baa22a0bf6a338d01598c.pdf |
NEFTune: Noisy Embeddings Improve Instruction Finetuning | https://openreview.net/forum?id=0bMmZ3fkCk | https://openreview.net/forum?id=0bMmZ3fkCk | Neel Jain,Ping-yeh Chiang,Yuxin Wen,John Kirchenbauer,Hong-Min Chu,Gowthami Somepalli,Brian R. Bartoldson,Bhavya Kailkhura,Avi Schwarzschild,Aniruddha Saha,Micah Goldblum,Jonas Geiping,Tom Goldstein | ICLR 2024,Poster | We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation. NEFTune adds noise to the embedding vectors during training. Standard finetuning of LLaMA-2-7B using Alpaca achieves $29.79$\% on AlpacaEval, which rises to $64.69$\% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a $10$\% improvement, with ShareGPT an $8$\% improvement, and with OpenPlatypus an $8$\% improvement. Even powerful models further refined with RLHF such as LLaMA-2-Chat benefit from additional training with NEFTune. Particularly, we see these improvements on the conversational abilities of the instruction model and not on traditional tasks like those on the OpenLLM Leaderboard, where performance is the same. | https://openreview.net/pdf/ab62341bb5a8f427d4042033bd3b7c9f59652b51.pdf |
An operator preconditioning perspective on training in physics-informed machine learning | https://openreview.net/forum?id=WWlxFtR5sV | https://openreview.net/forum?id=WWlxFtR5sV | Tim De Ryck,Florent Bonnet,Siddhartha Mishra,Emmanuel de Bezenac | ICLR 2024,Poster | In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs). Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator. This operator, in turn, is associated to the Hermitian square of the differential operator of the underlying PDE. If this operator is ill-conditioned, it results in slow or infeasible training. Therefore, preconditioning this operator is crucial. We employ both rigorous mathematical analysis and empirical evaluations to investigate various strategies, explaining how they better condition this critical operator, and consequently improve training. | https://openreview.net/pdf/bb2db7fa354d99feb2b3b02f70bf881aab7de2f6.pdf |
Two-stage LLM Fine-tuning with Less Specialization and More Generalization | https://openreview.net/forum?id=pCEgna6Qco | https://openreview.net/forum?id=pCEgna6Qco | Yihan Wang,Si Si,Daliang Li,Michal Lukasik,Felix Yu,Cho-Jui Hsieh,Inderjit S Dhillon,Sanjiv Kumar | ICLR 2024,Poster | Pretrained large language models (LLMs) are general purpose problem solvers applicable to a diverse set of tasks with prompts. They can be further improved towards a specific task by fine-tuning on a specialized dataset. However, fine-tuning usually makes the model narrowly specialized on this dataset with reduced general in-context learning performances, which is undesirable whenever the fine-tuned model needs to handle additional tasks where no fine-tuning data is available. In this work, we first demonstrate that fine-tuning on a single task indeed decreases LLMs' general in-context learning performance. We discover one important cause of such forgetting, format specialization, where the model overfits to the format of the fine-tuned task. We further show that format specialization happens at the very beginning of fine-tuning. To solve this problem, we propose Prompt Tuning with MOdel Tuning (ProMoT), a simple yet effective two-stage fine-tuning framework that reduces format specialization and improves generalization. ProMoT offloads task-specific format learning into additional and removable parameters by first doing prompt tuning and then fine-tuning the model itself with this soft prompt attached. With experiments on several fine-tuning tasks and 8 in-context evaluation tasks, we show that ProMoT achieves comparable performance on fine-tuned tasks to standard fine-tuning, but with much less loss of in-context learning performances across a broad range of out-of-domain evaluation tasks. More importantly, ProMoT can even enhance generalization on in-context learning tasks that are semantically related to the fine-tuned task, e.g. ProMoT on En-Fr translation significantly improves performance on other language pairs, and ProMoT on NLI improves performance on summarization. Experiments also show that ProMoT can improve the generalization performance of multi-task training. | https://openreview.net/pdf/967d482dc9ddfbf84e0116bc826d1f2205123e77.pdf |
Expressive Losses for Verified Robustness via Convex Combinations | https://openreview.net/forum?id=mzyZ4wzKlM | https://openreview.net/forum?id=mzyZ4wzKlM | Alessandro De Palma,Rudy R Bunel,Krishnamurthy Dj Dvijotham,M. Pawan Kumar,Robert Stanforth,Alessio Lomuscio | ICLR 2024,Poster | In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance. As shown in recent work, better trade-offs between accuracy and robustness can be obtained by carefully coupling adversarial training with over-approximations. We hypothesize that the expressivity of a loss function, which we formalize as the ability to span a range of trade-offs between lower and upper bounds to the worst-case loss through a single parameter (the over-approximation coefficient), is key to attaining state-of-the-art performance. To support our hypothesis, we show that trivial expressive losses, obtained via convex combinations between adversarial attacks and IBP bounds, yield state-of-the-art results across a variety of settings in spite of their conceptual simplicity. We provide a detailed analysis of the relationship between the over-approximation coefficient and performance profiles across different expressive losses, showing that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs. | https://openreview.net/pdf/40ce9bb1de770f815cc5905ce78d2b4957712ed0.pdf |
Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network | https://openreview.net/forum?id=5RielfrDkP | https://openreview.net/forum?id=5RielfrDkP | Tianze Luo,Zhanfeng Mo,Sinno Jialin Pan | ICLR 2024,Poster | Graph Neural Networks are popular tools in graph representation learning that capture the graph structural properties. However, most GNNs employ single-resolution graph feature extraction, thereby failing to capture micro-level local patterns (high resolution) and macro-level graph cluster and community patterns (low resolution) simultaneously. Many multiresolution methods have been developed to capture graph patterns at multiple scales, but most of them depend on predefined and handcrafted multiresolution transforms that remain fixed throughout the training process once formulated. Due to variations in graph instances and distributions, fixed handcrafted transforms can not effectively tailor multiresolution representations to each graph instance. To acquire multiresolution representation suited to different graph instances and distributions, we introduce the Multiresolution Meta-Framelet-based Graph Convolutional Network (MM-FGCN), facilitating comprehensive and adaptive multiresolution analysis across diverse graphs. Extensive experiments demonstrate that our MM-FGCN achieves SOTA performance on various graph learning tasks. | https://openreview.net/pdf/78d38cf91a8f58ca68193e324a079c22002fffa8.pdf |
REFACTOR: Learning to Extract Theorems from Proofs | https://openreview.net/forum?id=fgKjiVrm6u | https://openreview.net/forum?id=fgKjiVrm6u | Jin Peng Zhou,Yuhuai Wu,Qiyang Li,Roger Baker Grosse | ICLR 2024,Poster | Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show on a set of unseen proofs, REFACTOR is able to extract 19.6\% of the theorems that humans would use to write the proofs. When applying the model to the existing Metamath library, REFACTOR extracted 16 new theorems. With newly extracted theorems, we show that the existing proofs in the MetaMath database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of 733.5 times, and help shorten the proof lengths. Lastly, we demonstrate that the prover trained on the new-theorem refactored dataset proves more test theorems and outperforms state-of-the-art baselines by frequently leveraging a diverse set of newly extracted theorems. Code can be found at https://github.com/jinpz/refactor. | https://openreview.net/pdf/e4944ce26133003e30b0fe056311fca837baa9a1.pdf |
Let's do the time-warp-attend: Learning topological invariants of dynamical systems | https://openreview.net/forum?id=Fj7Fzm5lWL | https://openreview.net/forum?id=Fj7Fzm5lWL | Noa Moriel,Matt Ricci,Mor Nitzan | ICLR 2024,Poster | Dynamical systems across the sciences, from electrical circuits to ecological networks, undergo qualitative and often catastrophic changes in behavior, called bifurcations, when their underlying parameters cross a threshold. Existing methods predict oncoming catastrophes in individual systems but are primarily time-series-based and struggle both to categorize qualitative dynamical regimes across diverse systems and to generalize to real data. To address this challenge, we propose a data-driven, physically-informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries based on the extraction of topologically invariant features. We focus on the paradigmatic case of the supercritical Hopf bifurcation, which is used to model periodic dynamics across a wide range of applications. Our convolutional attention method is trained with data augmentations that encourage the learning of topological invariants which can be used to detect bifurcation boundaries in unseen systems and to design models of biological systems like oscillatory gene regulatory networks. We further demonstrate our method's use in analyzing real data by recovering distinct proliferation and differentiation dynamics along the pancreatic endocrinogenesis trajectory in gene expression space based on single-cell data. Our method provides valuable insights into the qualitative, long-term behavior of a wide range of dynamical systems, and can detect bifurcations or catastrophic transitions in large-scale physical and biological systems. | https://openreview.net/pdf/8dc9a0a901c8cc5a3348f7f9bb9b466a836c04e9.pdf |
Sparse MoE with Language Guided Routing for Multilingual Machine Translation | https://openreview.net/forum?id=ySS7hH1smL | https://openreview.net/forum?id=ySS7hH1smL | Xinyu Zhao,Xuxi Chen,Yu Cheng,Tianlong Chen | ICLR 2024,Poster | Sparse Mixture-of-Experts (SMoE) has gained increasing popularity as a promising framework for scaling up multilingual machine translation (MMT) models with negligible extra computational overheads. However, current SMoE solutions neglect the intrinsic structures of the MMT problem: ($a$) $\textit{Linguistics Hierarchy.}$ Languages are naturally grouped according to their lingual properties like genetic families, phonological characteristics, etc; ($b$) $\textit{Language Complexity.}$ The learning difficulties are varied for diverse languages due to their grammar complexity, available resources, etc. Therefore, routing a fixed number of experts (e.g., $1$ or $2$ experts in usual) only at the word level leads to inferior performance. To fill in the missing puzzle, we propose $\textbf{\texttt{Lingual-SMoE}}$ by equipping the SMoE with adaptive and linguistic-guided routing policies. Specifically, it ($1$) extracts language representations to incorporate linguistic knowledge and uses them to allocate experts into different groups; ($2$) determines the number of activated experts for each target language in an adaptive and automatic manner, according to their translation difficulties, which aims to mitigate the potential over-/under-fitting issues of learning simple/challenging translations. Sufficient experimental studies on MMT benchmarks with {$16$, $50$, $100$} language pairs and various network architectures, consistently validate the superior performance of our proposals. For instance, $\texttt{Lingual-SMoE}$ outperforms its dense counterpart by over $5\%$ BLEU scores on $\texttt{OPUS-100}$ dataset. | https://openreview.net/pdf/2f14a8bdcce65ee1907499fef8fcdf05b7ce4f91.pdf |
Detecting Pretraining Data from Large Language Models | https://openreview.net/forum?id=zWqr3MQuNs | https://openreview.net/forum?id=zWqr3MQuNs | Weijia Shi,Anirudh Ajith,Mengzhou Xia,Yangsibo Huang,Daogao Liu,Terra Blevins,Danqi Chen,Luke Zettlemoyer | ICLR 2024,Poster | Although large language models (LLMs) are widely deployed, the data used to train them is rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but certain that it includes potentially problematic text such as copyrighted materials, personally identifiable information, and test data for widely reported reference benchmarks. However, we currently have no way to know which data of these types is included or in what proportions. In this paper, we study the pretraining data detection problem: given a piece of text and black-box access to an LLM without knowing the pretraining data, can we determine if the model was trained on the provided text? To facilitate this study, we introduce a dynamic benchmark WIKIMIA that uses data created before and after model training to support gold truth detection. We also introduce a new detection method MIN-K PROB based on a simple hypothesis: an unseen example is likely to contain a few outlier words with low probabilities under the LLM, while a seen example is less likely to have words with such low probabilities. MIN-K PROB can be applied without any knowledge about the pretraining corpus or any additional training, departing from previous detection methods that require training a reference model on data that is similar to the pretraining data. Moreover, our experiments demonstrate that MIN-K PROB achieves a 7.4% improvement on WIKIMIA over these previous methods. We apply MIN-K PROB to two real-world scenarios, copyrighted book detection and contaminated downstream example detection, and find it to be a consistently effective solution. | https://openreview.net/pdf/0d224da3c14b884532028169f92fd03868dcd86e.pdf |
Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization | https://openreview.net/forum?id=V5tdi14ple | https://openreview.net/forum?id=V5tdi14ple | Jin Peng Zhou,Charles E Staats,Wenda Li,Christian Szegedy,Kilian Q Weinberger,Yuhuai Wu | ICLR 2024,Poster | Large language models (LLM), such as Google's Minerva and OpenAI's GPT families, are becoming increasingly capable of solving mathematical quantitative reasoning problems. However, they still make unjustified logical and computational errors in their reasoning steps and answers. In this paper, we leverage the fact that if the training corpus of LLMs contained sufficiently many examples of formal mathematics (e.g. in Isabelle, a formal theorem proving environment), they can be prompted to translate, i.e. autoformalize, informal mathematical statements into formal Isabelle code --- which can be verified automatically for internal consistency. This provides a mechanism to automatically reject solutions whose formalized versions are inconsistent within themselves or with the formalized problem statement. We evaluate our method on GSM8K, MATH and MultiArith datasets and demonstrate that our approach provides a consistently better heuristic than vanilla majority voting --- the previously best method to identify correct answers, by more than 12\% on GSM8K. In our experiments it improves results consistently across all datasets and LLM model sizes. The code can be found at https://github.com/jinpz/dtv. | https://openreview.net/pdf/f2feee3c9af794ca03a85d1ab69dd06c8330981e.pdf |
PubDef: Defending Against Transfer Attacks From Public Models | https://openreview.net/forum?id=Tvwf4Vsi5F | https://openreview.net/forum?id=Tvwf4Vsi5F | Chawin Sitawarin,Jaewon Chang,David Huang,Wesson Altoyan,David Wagner | ICLR 2024,Poster | Adversarial attacks have been a looming and unaddressed threat in the industry. However, through a decade-long history of the robustness evaluation literature, we have learned that mounting a strong or optimal attack is challenging. It requires both machine learning and domain expertise. In other words, the white-box threat model, religiously assumed by a large majority of the past literature, is unrealistic. In this paper, we propose a new practical threat model where the adversary relies on transfer attacks through publicly available surrogate models. We argue that this setting will become the most prevalent for security-sensitive applications in the future. We evaluate the transfer attacks in this setting and propose a specialized defense method based on a game-theoretic perspective. The defenses are evaluated under 24 public models and 11 attack algorithms across three datasets (CIFAR-10, CIFAR-100, and ImageNet). Under this threat model, our defense, PubDef, outperforms the state-of-the-art white-box adversarial training by a large margin with almost no loss in the normal accuracy. For instance, on ImageNet, our defense achieves 62% accuracy under the strongest transfer attack vs only 36% of the best adversarially trained model. Its accuracy when not under attack is only 2% lower than that of an undefended model (78% vs 80%). We release our code at https://github.com/wagner-group/pubdef. | https://openreview.net/pdf/1c48a415047176b3fbdb5c594a119648fea8e3d8.pdf |
AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ | https://openreview.net/forum?id=v3K5TVP8kZ | https://openreview.net/forum?id=v3K5TVP8kZ | Jonas Belouadi,Anne Lauscher,Steffen Eger | ICLR 2024,Poster | Generating bitmap graphics from text has gained considerable attention, yet for scientific figures, vector graphics are often preferred. Given that vector graphics are typically encoded using low-level graphics primitives, generating them directly is difficult. To address this, we propose the use of TikZ, a well-known abstract graphics language that can be compiled to vector graphics, as an intermediate representation of scientific figures. TikZ offers human-oriented, high-level commands, thereby facilitating conditional language modeling with any large language model. To this end, we introduce DaTikZ, the first large-scale TikZ dataset, consisting of 120k TikZ drawings aligned with captions. We fine-tune LLaMA on DaTikZ, as well as our new model CLiMA, which augments LLaMA with multimodal CLIP embeddings. In both human and automatic evaluation, CLiMA and LLaMA outperform commercial GPT-4 and Claude 2 in terms of similarity to human-created figures, with CLiMA additionally improving text-image alignment. Our detailed analysis shows that all models generalize well and are not susceptible to memorization. GPT-4 and Claude 2, however, tend to generate more simplistic figures compared to both humans and our models. We make our framework, AutomaTikZ, along with model weights and datasets, publicly available. | https://openreview.net/pdf/eb9ccf8fa1a97129a5a38ac09ddf4f1257daa864.pdf |
Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning | https://openreview.net/forum?id=3xDaj4pRna | https://openreview.net/forum?id=3xDaj4pRna | Jacob Mitchell Springer,Vaishnavh Nagarajan,Aditi Raghunathan | ICLR 2024,Poster | Sharpness-Aware Minimization (SAM) has emerged as a promising alternative optimizer to stochastic gradient descent (SGD). The originally-proposed motivation behind SAM was to bias neural networks towards flatter minima that are believed to generalize better. However, recent studies have shown conflicting evidence on the relationship between flatness and generalization, suggesting that flatness does not fully explain SAM's success. Sidestepping this debate, we identify an orthogonal effect of SAM that is beneficial out-of-distribution: we argue that SAM implicitly balances the quality of diverse features. SAM achieves this effect by adaptively suppressing well-learned features which gives remaining features opportunity to be learned. We show that this mechanism is beneficial in datasets that contain redundant or spurious features where SGD falls for the simplicity bias and would not otherwise learn all available features. Our insights are supported by experiments on real data: we demonstrate that SAM improves the quality of features in datasets containing redundant or spurious features, including CelebA, Waterbirds, CIFAR-MNIST, and DomainBed. | https://openreview.net/pdf/b581b037f272dae403bd0933d1d9c3f7c144a1a9.pdf |
Can LLM-Generated Misinformation Be Detected? | https://openreview.net/forum?id=ccxD4mtkTU | https://openreview.net/forum?id=ccxD4mtkTU | Canyu Chen,Kai Shu | ICLR 2024,Poster | The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures. | https://openreview.net/pdf/4070fd2a7564a91bda1c7908c2650e500a1d18d8.pdf |
A Simple Interpretable Transformer for Fine-Grained Image Classification and Analysis | https://openreview.net/forum?id=bkdWThqE6q | https://openreview.net/forum?id=bkdWThqE6q | DIPANJYOTI PAUL,Arpita Chowdhury,Xinqi Xiong,Feng-Ju Chang,David Edward Carlyn,Samuel Stevens,Kaiya L Provost,Anuj Karpatne,Bryan Carstens,Daniel Rubenstein,Charles Stewart,Tanya Berger-Wolf,Yu Su,Wei-Lun Chao | ICLR 2024,Poster | We present a novel usage of Transformers to make image classification interpretable. Unlike mainstream classifiers that wait until the last fully connected layer to incorporate class information to make predictions, we investigate a proactive approach, asking each class to search for itself in an image. We realize this idea via a Transformer encoder-decoder inspired by DEtection TRansformer (DETR). We learn ''class-specific'' queries (one for each class) as input to the decoder, enabling each class to localize its patterns in an image via cross-attention. We name our approach INterpretable TRansformer (INTR), which is fairly easy to implement and exhibits several compelling properties. We show that INTR intrinsically encourages each class to attend distinctively; the cross-attention weights thus provide a faithful interpretation of the prediction. Interestingly, via ''multi-head'' cross-attention, INTR could identify different ''attributes'' of a class, making it particularly suitable for fine-grained classification and analysis, which we demonstrate on eight datasets. Our code and pre-trained models are publicly accessible at the Imageomics Institute GitHub site: https://github.com/Imageomics/INTR. | https://openreview.net/pdf/edc88ec0b85f5a82a7aed747683ae161b423d1ab.pdf |
One-shot Active Learning Based on Lewis Weight Sampling for Multiple Deep Models | https://openreview.net/forum?id=EDXkkUAIFW | https://openreview.net/forum?id=EDXkkUAIFW | Sheng-Jun Huang,Yi Li,Yiming Sun,Ying-Peng Tang | ICLR 2024,Poster | Active learning (AL) for multiple target models aims to reduce labeled data querying while effectively training multiple models concurrently. Existing AL algorithms often rely on iterative model training, which can be computationally expensive, particularly for deep models. In this paper, we propose a one-shot AL method to address this challenge, which performs all label queries without repeated model training. Specifically, we extract different representations of the same dataset using distinct network backbones, and actively learn the linear prediction layer on each representation via an $\ell_p$-regression formulation. The regression problems are solved approximately by sampling and reweighting the unlabeled instances based on their maximum Lewis weights across the representations. An upper bound on the number of samples needed is provided with a rigorous analysis for $p\in [1, +\infty)$. Experimental results on 11 benchmarks show that our one-shot approach achieves competitive performances with the state-of-the-art AL methods for multiple target models. | https://openreview.net/pdf/151a8c3cbd0951af8b2abf96dd2fee8beaee4ae7.pdf |
Disentangling Time Series Representations via Contrastive Independence-of-Support on l-Variational Inference | https://openreview.net/forum?id=iI7hZSczxE | https://openreview.net/forum?id=iI7hZSczxE | Khalid Oublal,Said Ladjal,David Benhaiem,Emmanuel LE BORGNE,François Roueff | ICLR 2024,Poster | Learning disentangled representations for time series is a promising path to facilitate reliable generalization to in- and out-of distribution (OOD), offering benefits like feature derivation and improved interpretability and fairness, thereby enhancing downstream tasks. We focus on disentangled representation learning for home appliance electricity usage, enabling users to understand and optimize their consumption for a reduced carbon footprint. Our approach frames the problem as disentangling each attribute's role in total consumption. Unlike existing methods assuming attribute independence which leads to non-identifiability, we acknowledge real-world time series attribute correlations, learned up to a smooth bijection using contrastive learning and a single autoencoder. To address this, we propose a Disentanglement under Independence-Of-Support via Contrastive Learning (DIOSC), facilitating representation generalization across diverse correlated scenarios. Our method utilizes innovative \textit{l}-variational inference layers with self-attention, effectively addressing temporal dependencies across bottom-up and top-down networks. We find that DIOSC can enhance the task of representation of time series electricity consumption. We introduce TDS (Time Disentangling Score) to gauge disentanglement quality. TDS reliably reflects disentanglement performance, making it a valuable metric for evaluating time series representations disentanglement. Code available at https://institut-polytechnique-de-paris.github.io/time-disentanglement-lib. | https://openreview.net/pdf/ef5c6cb813437c7c5c77405a079228ac7c8b50bd.pdf |
Improved algorithm and bounds for successive projection | https://openreview.net/forum?id=GlpawHh80l | https://openreview.net/forum?id=GlpawHh80l | Jiashun Jin,Tracy Ke,Gabriel Moryoussef,Jiajun Tang,Jingming Wang | ICLR 2024,Poster | Consider a $K$-vertex simplex in a $d$-dimensional space. We measure $n$ points on the simplex, but due to the measurement noise, some of the observed points fall outside the simplex. The interest is vertex hunting (i.e., estimating the vertices of the simplex). The successive projection algorithm (SPA) is one of the most popular approaches to vertex hunting, but it is vulnerable to noise and outliers, and may perform unsatisfactorily. We propose pseudo-point SPA (pp-SPA) as a new approach to vertex hunting. The approach contains two novel ideas (a projection step and a denoise step) and generates roughly $n$ pseudo-points, which can be fed into SPA for vertex hunting. For theory, we first derive an improved non-asymptotic bound for the orthodox SPA, and then use the result to derive the bounds for pp-SPA. Compared with the orthodox SPA, pp-SPA has a faster rate and more satisfactory numerical performance in a broad setting. The analysis is quite delicate: the non-asymptotic bound is hard to derive, and we need precise results on the extreme values of (possibly) high-dimensional random vectors. | https://openreview.net/pdf/7ebdcea695b8aa3a70d605879db5781a831c5f96.pdf |
Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF | https://openreview.net/forum?id=0tWTxYYPnW | https://openreview.net/forum?id=0tWTxYYPnW | Anand Siththaranjan,Cassidy Laidlaw,Dylan Hadfield-Menell | ICLR 2024,Poster | In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called *Borda count*. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called *distributional preference learning* (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability. | https://openreview.net/pdf/6a63d0436aca59c5082cae0bf2ea3a6c0aea6257.pdf |
Estimating Shape Distances on Neural Representations with Limited Samples | https://openreview.net/forum?id=kvByNnMERu | https://openreview.net/forum?id=kvByNnMERu | Dean A Pospisil,Brett W. Larsen,Sarah E Harvey,Alex H Williams | ICLR 2024,Poster | Measuring geometric similarity between high-dimensional network representations is a topic of longstanding interest to neuroscience and deep learning. Although many methods have been proposed, only a few works have rigorously analyzed their statistical efficiency or quantified estimator uncertainty in data-limited regimes. Here, we derive upper and lower bounds on the worst-case convergence of standard estimators of shape distance—a measure of representational dissimilarity proposed by Williams et al. (2021). These bounds reveal the challenging nature of the problem in high-dimensional feature spaces. To overcome these challenges, we introduce a novel method-of-moments estimator with a tunable bias-variance tradeoff parameterized by an upper bound on bias. We show that this estimator achieves superior performance to standard estimators in simulation and on neural data, particularly in high-dimensional settings. Our theoretical work and estimator thus respectively define and dramatically expand the scope of neural data for which geometric similarity can be accurately measured. | https://openreview.net/pdf/cc1e959c8a6eec0004bd40509131279fa05e6610.pdf |
Learning semilinear neural operators: A unified recursive framework for prediction and data assimilation. | https://openreview.net/forum?id=ZMv6zKYYUs | https://openreview.net/forum?id=ZMv6zKYYUs | Ashutosh Singh,Ricardo Augusto Borsoi,Deniz Erdogmus,Tales Imbiriba | ICLR 2024,Poster | Recent advances in the theory of Neural Operators (NOs) have enabled fast and accurate computation of the solutions to complex systems described by partial differential equations (PDEs). Despite their great success, current NO-based solutions face important challenges when dealing with spatio-temporal PDEs over long time scales. Specifically, the current theory of NOs does not present a systematic framework to perform data assimilation and efficiently correct the evolution of PDE solutions over time based on sparsely sampled noisy measurements. In this paper, we propose a learning-based state-space approach to compute the solution operators to infinite-dimensional semilinear PDEs. Exploiting the structure of semilinear PDEs and the theory of nonlinear observers in function spaces, we develop a flexible recursive method that allows for both prediction and data assimilation by combining prediction and correction operations. The proposed framework is capable of producing fast and accurate predictions over long time horizons, dealing with irregularly sampled noisy measurements to correct the solution, and benefits from the decoupling between the spatial and temporal dynamics of this class of PDEs. We show through experiments on the Kuramoto-Sivashinsky, Navier-Stokes and Korteweg-de Vries equations that the proposed model is robust to noise and can leverage arbitrary amounts of measurements to correct its prediction over a long time horizon with little computational overhead. | https://openreview.net/pdf/1560d1f0bd3603f9f3ab40125342ea9969adfe14.pdf |
Eureka: Human-Level Reward Design via Coding Large Language Models | https://openreview.net/forum?id=IEduRUO55F | https://openreview.net/forum?id=IEduRUO55F | Yecheng Jason Ma,William Liang,Guanzhi Wang,De-An Huang,Osbert Bastani,Dinesh Jayaraman,Yuke Zhu,Linxi Fan,Anima Anandkumar | ICLR 2024,Poster | Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using Eureka rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed. | https://openreview.net/pdf/6c6607629a103a06c7c1b52817845f25aa866b8b.pdf |
f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization | https://openreview.net/forum?id=s90VIdza2K | https://openreview.net/forum?id=s90VIdza2K | Sina Baharlouei,Shivam Patel,Meisam Razaviyayn | ICLR 2024,Poster | Training and deploying machine learning models that meet fairness criteria for protected groups are fundamental in modern artificial intelligence. While numerous constraints and regularization terms have been proposed in the literature to promote fairness in machine learning tasks, most of these approaches are not amenable to stochastic optimization due to the complex and nonlinear structure of constraints and regularizers. Here, the term ``stochastic'' refers to the ability of the algorithm to work with small mini-batches of data. Motivated by the limitations of the existing literature, this paper presents a unified stochastic optimization framework for fair empirical risk minimization based on $f$-divergence measures ($f$-FERM). The proposed stochastic algorithm enjoys theoretical convergence guarantees. In addition, our experiments demonstrate the superiority of fairness-accuracy tradeoffs offered by $f$-FERM for almost all batch sizes (ranging from full-batch to batch size of one). Moreover, we show that our framework can be extended to the case where there is a distribution shift from training to the test data. Our extension is based on a distributionally robust optimization reformulation of the $f$-FERM objective under $\ell_p$ norms as uncertainty sets. Again, in this distributionally robust setting, $f$-FERM not only enjoys theoretical convergence guarantees but also outperforms other baselines in the literature in tasks involving distribution shifts. An efficient stochastic implementation of $f$-FERM is publicly available. | https://openreview.net/pdf/1d2b7d920918fe102d32b04476b29c2c607925bd.pdf |
Source-Free and Image-Only Unsupervised Domain Adaptation for Category Level Object Pose Estimation | https://openreview.net/forum?id=UPvufoBAIs | https://openreview.net/forum?id=UPvufoBAIs | Prakhar Kaushik,Aayush Mishra,Adam Kortylewski,Alan Yuille | ICLR 2024,Poster | We consider the problem of source-free unsupervised category-level 3D pose estimation from only RGB images to a non-annotated and unlabelled target domain, without any access to source domain data or annotations during adaptation. Collecting and annotating real-world 3D data and corresponding images is a laborious, expensive, yet unavoidable process, since even 3D pose domain adaptation methods require 3D data in the target domain. We introduce a method which is capable of adapting to a nuisance-ridden target domain without any 3D data or annotations. We represent object categories as simple cuboid meshes, and harness a generative model of neural feature activations, modeled as a von Mises-Fisher distribution at each mesh vertex, learnt using differentiable rendering. We focus on individual mesh vertex features and iteratively update them based on their proximity to corresponding features in the target domain. Our key insight stems from the observation that specific object subparts remain stable across out-of-domain (OOD) scenarios, enabling strategic utilization of these invariant subcomponents for effective model updates. Our model is then trained in an EM fashion, alternating between updating the vertex features and the feature extractor. We show that our method simulates fine-tuning on a global pseudo-labelled dataset under mild assumptions, which converges to the target domain asymptotically. Through extensive empirical validation, we demonstrate the potency of our simple approach in addressing the domain shift challenge and significantly enhancing pose estimation accuracy. By accentuating robust and less changed object subcomponents, our framework contributes to the evolution of UDA techniques in the context of 3D pose estimation using only images from the target domain. | https://openreview.net/pdf/2713fa61c35aea3d714ed1c74104eb90194b1f28.pdf |
Closing the Curious Case of Neural Text Degeneration | https://openreview.net/forum?id=dONpC9GL1o | https://openreview.net/forum?id=dONpC9GL1o | Matthew Finlayson,John Hewitt,Alexander Koller,Swabha Swayamdipta,Ashish Sabharwal | ICLR 2024,Poster | Despite their ubiquity in language generation, it remains unknown why truncation sampling heuristics like nucleus sampling are so effective. We provide a theoretical explanation for the effectiveness of truncation sampling by proving that truncation methods that discard tokens below some probability threshold (the most common type of truncation) can guarantee that all sampled tokens have nonzero true probability. However, thresholds are a coarse heuristic, and necessarily discard some tokens with nonzero true probability as well. In pursuit of a more precise sampling strategy, we show that we can leverage a known source of model errors, the softmax bottleneck, to prove that certain tokens have nonzero true probability, without relying on a threshold. Based on our findings, we develop an experimental truncation strategy and present pilot studies demonstrating the promise of this type of algorithm. Our evaluations show that our method outperforms its threshold-based counterparts under automatic and human evaluation metrics for low-entropy (i.e., close to greedy) open-ended text generation. Our theoretical findings and pilot experiments provide both insight into why truncation sampling works, and make progress toward more expressive sampling algorithms that better surface the generative capabilities of large language models. | https://openreview.net/pdf/3208517640b1a7c17b6a44cc2d44f67bdc006e33.pdf |
Mediator Interpretation and Faster Learning Algorithms for Linear Correlated Equilibria in General Sequential Games | https://openreview.net/forum?id=bsKMPAFHO7 | https://openreview.net/forum?id=bsKMPAFHO7 | Brian Hu Zhang,Gabriele Farina,Tuomas Sandholm | ICLR 2024,Poster | A recent paper by Farina and Pipis (2023) established the existence of uncoupled no-linear-swap regret dynamics with polynomial-time iterations in extensive-form games. The equilibrium points reached by these dynamics, known as linear correlated equilibria, are currently the tightest known relaxation of correlated equilibrium that can be learned in polynomial time in any finite extensive-form game. However, their properties remain vastly unexplored, and their computation is onerous. In this paper, we provide several contributions shedding light on the fundamental nature of linear-swap regret. First, we show a connection between linear deviations and a generalization of communication deviations in which the player can make queries to a ``mediator'' who replies with action recommendations, and, critically, the player is not constrained to match the timing of the game as would be the case for communication deviations. We coin this latter set the untimed communication (UTC) deviations. We show that the UTC deviations coincide precisely with the linear deviations, and therefore that any player minimizing UTC regret also minimizes linear-swap regret. We then leverage this connection to develop state-of-the-art no-regret algorithms for computing linear correlated equilibria, both in theory and in practice. In theory, our algorithms achieve polynomially better per-iteration runtimes; in practice, our algorithms represent the state of the art by several orders of magnitude. | https://openreview.net/pdf/cc579eed7f8cd682cf460a081609c0c02ed9a3b9.pdf |
3D Feature Prediction for Masked-AutoEncoder-Based Point Cloud Pretraining | https://openreview.net/forum?id=LokR2TTFMs | https://openreview.net/forum?id=LokR2TTFMs | Siming Yan,Yuqi Yang,Yu-Xiao Guo,Hao Pan,Peng-Shuai Wang,Xin Tong,Yang Liu,Qixing Huang | ICLR 2024,Poster | Masked autoencoders (MAE) have recently been introduced to 3D self-supervised pretraining for point clouds due to their great success in NLP and computer vision. Unlike MAEs used in the image domain, where the pretext task is to restore features at the masked pixels, such as colors, the existing 3D MAE works reconstruct the missing geometry only, i.e., the location of the masked points. In contrast to previous studies, we advocate that point location recovery is inessential and restoring intrinsic point features is much superior. To this end, we propose to ignore point position reconstruction and recover high-order features at masked points, including surface normals and surface variations, through a novel attention-based decoder which is independent of the encoder design. We validate the effectiveness of our pretext task and decoder design using different encoder structures for 3D training and demonstrate the advantages of our pretrained networks on various point cloud analysis tasks. | https://openreview.net/pdf/2ecf6db8536bd51a51f39e4d291975d6fdf06840.pdf |
Understanding Catastrophic Forgetting in Language Models via Implicit Inference | https://openreview.net/forum?id=VrHiF2hsrm | https://openreview.net/forum?id=VrHiF2hsrm | Suhas Kotha,Jacob Mitchell Springer,Aditi Raghunathan | ICLR 2024,Poster | We lack a systematic understanding of the effects of fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback), particularly on tasks outside the narrow fine-tuning distribution. In a simplified scenario, we demonstrate that improving performance on tasks within the fine-tuning data distribution comes at the expense of capabilities on other tasks. We hypothesize that language models implicitly infer the task of the prompt and that fine-tuning skews this inference towards tasks in the fine-tuning distribution. To test this, we propose Conjugate Prompting, which artificially makes the task look farther from the fine-tuning distribution while requiring the same capability, and we find that this recovers some of the pretraining capabilities in our synthetic setup. Since real-world fine-tuning distributions are predominantly English, we apply conjugate prompting to recover pretrained capabilities in LLMs by simply translating the prompts to different languages. This allows us to recover in-context learning abilities lost via instruction tuning, natural reasoning capability lost during code fine-tuning, and, more concerningly, harmful content generation suppressed by safety fine-tuning in chatbots like ChatGPT. | https://openreview.net/pdf/fba453c1032d524bc23b1106ab3a598979c31704.pdf |
Efficient Subgraph GNNs by Learning Effective Selection Policies | https://openreview.net/forum?id=gppLqZLQeY | https://openreview.net/forum?id=gppLqZLQeY | Beatrice Bevilacqua,Moshe Eliasof,Eli Meirom,Bruno Ribeiro,Haggai Maron | ICLR 2024,Poster | Subgraph GNNs are provably expressive neural architectures that learn graph representations from sets of subgraphs. Unfortunately, their applicability is hampered by the computational complexity associated with performing message passing on many subgraphs. In this paper, we consider the problem of learning to select a small subset of the large set of possible subgraphs in a data-driven fashion. We first motivate the problem by proving that there are families of WL-indistinguishable graphs for which there exist efficient subgraph selection policies: small subsets of subgraphs that can already identify all the graphs within the family. We then propose a new approach, called _Policy-Learn_, that learns how to select subgraphs in an iterative manner. We prove that, unlike popular random policies and prior work addressing the same problem, our architecture is able to learn the efficient policies mentioned above. Our experimental results demonstrate that _Policy-Learn_ outperforms existing baselines across a wide range of datasets. | https://openreview.net/pdf/bb667e3793f646c96519efa8bde1df46581bd0cc.pdf |
Parsing neural dynamics with infinite recurrent switching linear dynamical systems | https://openreview.net/forum?id=YIls9HEa52 | https://openreview.net/forum?id=YIls9HEa52 | Victor Geadah,International Brain Laboratory,Jonathan W. Pillow | ICLR 2024,Poster | Unsupervised methods for dimensionality reduction of neural activity and behavior have provided unprecedented insights into the underpinnings of neural information processing. One popular approach involves the recurrent switching linear dynamical system (rSLDS) model, which describes the latent dynamics of neural spike train data using discrete switches between a finite number of low-dimensional linear dynamical systems. However, a few properties of the rSLDS model limit its deployability on trial-varying data, such as a fixed number of states across trials and the absence of latent structure or organization of states. Here we overcome these limitations by endowing the rSLDS model with a semi-Markov discrete state process, with latent geometry, that captures key properties of stochastic processes over partitions with flexible state cardinality. We leverage partial differential equations (PDE) theory to derive an efficient, semi-parametric formulation for dynamical sufficient statistics to the discrete states. This process, combined with switching dynamics, defines our infinite recurrent switching linear dynamical system (irSLDS) model class. We first validate and demonstrate the capabilities of our model on synthetic data. Next, we turn to the analysis of mouse electrophysiological data during decision-making, and uncover strong non-stationary processes underlying both within-trial and trial-averaged neural activity. | https://openreview.net/pdf/69049199784ba1584b80a5be627c8e848d397879.pdf |
Active Retrosynthetic Planning Aware of Route Quality | https://openreview.net/forum?id=h7DGnWGeos | https://openreview.net/forum?id=h7DGnWGeos | Luotian Yuan,Yemin Yu,Ying Wei,Yongwei Wang,Zhihua Wang,Fei Wu | ICLR 2024,Poster | Retrosynthetic planning is a sequential decision-making process of identifying synthetic routes from the available building block materials to reach a desired target molecule. Though existing planning approaches show promisingly high solving rates and low costs, the trivial route cost evaluation via pre-trained forward reaction prediction models falls short of real-world chemical practice. An alternative option is to annotate the actual cost of a route, such as yield, through chemical experiments or input from chemists, but this often leads to substantial query costs. To strike a balance between query costs and route quality evaluation, we propose an Active Retrosynthetic Planning (ARP) framework that remains compatible with established retrosynthetic planners. On one hand, the proposed ARP trains an actor that decides whether to query the cost of a reaction; on the other hand, it resorts to a critic to estimate the value of a molecule with its preceding reaction cost as input. Molecules with low reaction costs are preferred for expansion first. We apply our framework to different existing approaches on both the benchmark and an expert dataset and demonstrate that it outperforms the existing state-of-the-art approach by 6.2\% in route quality while reducing the query cost by 12.8\%. In addition, ARP consistently plans high-quality routes with either abundant or sparse annotations. | https://openreview.net/pdf/0eba4d55c6b2d267c53e120ccbcbe1c0a441de5a.pdf |
How do Language Models Bind Entities in Context? | https://openreview.net/forum?id=zb3b6oKO77 | https://openreview.net/forum?id=zb3b6oKO77 | Jiahai Feng,Jacob Steinhardt | ICLR 2024,Poster | Language models (LMs) can recall facts mentioned in context, as shown by their performance on reading comprehension tasks. When the context describes facts about more than one entity, the LM has to correctly bind attributes to their corresponding entity. We show, via causal experiments, that LMs' internal activations represent binding information by exhibiting appropriate binding ID vectors at the entity and attribute positions. We further show that binding ID vectors form a subspace and often transfer across tasks. Our results demonstrate that LMs learn interpretable strategies for representing symbolic knowledge in context, and that studying context activations is a fruitful direction for understanding LM cognition. | https://openreview.net/pdf/28b9cffab0e6b66ae449e4a74e7e3c4798f5be18.pdf |
Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations | https://openreview.net/forum?id=HfXDrAzFvG | https://openreview.net/forum?id=HfXDrAzFvG | Patricia Pauli,Aaron J Havens,Alexandre Araujo,Siddharth Garg,Farshad Khorrami,Frank Allgöwer,Bin Hu | ICLR 2024,Poster | Recently, semidefinite programming (SDP) techniques have shown great promise in providing accurate Lipschitz bounds for neural networks. Specifically, the LipSDP approach (Fazlyab et al., 2019) has received much attention and provides the least conservative Lipschitz upper bounds that can be computed with polynomial time guarantees. However, one main restriction of LipSDP is that its formulation requires the activation functions to be slope-restricted on $[0,1]$, preventing its further use for more general activation functions such as GroupSort, MaxMin, and Householder. One can rewrite MaxMin activations for example as residual ReLU networks. However, a direct application of LipSDP to the resultant residual ReLU networks is conservative and even fails in recovering the well-known fact that the MaxMin activation is 1-Lipschitz. Our paper bridges this gap and extends LipSDP beyond slope-restricted activation functions. To this end, we provide novel quadratic constraints for GroupSort, MaxMin, and Householder activations via leveraging their underlying properties such as sum preservation. Our proposed analysis is general and provides a unified approach for estimating $\ell_2$ and $\ell_\infty$ Lipschitz bounds for a rich class of neural network architectures, including non-residual and residual neural networks and implicit models, with GroupSort, MaxMin, and Householder activations. Finally, we illustrate the utility of our approach with a variety of experiments and show that our proposed SDPs generate less conservative Lipschitz bounds in comparison to existing approaches. | https://openreview.net/pdf/77c8f16bf84c684bfa1e6014821209b562511cc5.pdf |
Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation | https://openreview.net/forum?id=2UnCj3jeao | https://openreview.net/forum?id=2UnCj3jeao | Luca Eyring,Dominik Klein,Théo Uscidda,Giovanni Palla,Niki Kilbertus,Zeynep Akata,Fabian J Theis | ICLR 2024,Poster | In optimal transport (OT), a Monge map is known as a mapping that transports a source distribution to a target distribution in the most cost-efficient way. Recently, multiple neural estimators for Monge maps have been developed and applied in diverse unpaired domain translation tasks, e.g. in single-cell biology and computer vision. However, the classic OT framework enforces mass conservation, which makes it prone to outliers and limits its applicability in real-world scenarios. The latter can be particularly harmful in OT domain translation tasks, where the relative position of a sample within a distribution is explicitly taken into account. While unbalanced OT tackles this challenge in the discrete setting, its integration into neural Monge map estimators has received limited attention. We propose a theoretically grounded method to incorporate unbalancedness into any Monge map estimator. We improve existing estimators to model cell trajectories over time and to predict cellular responses to perturbations. Moreover, our approach seamlessly integrates with the OT flow matching (OT-FM) framework. While we show that OT-FM performs competitively in image translation, we further improve performance by incorporating unbalancedness (UOT-FM), which better preserves relevant features. We hence establish UOT-FM as a principled method for unpaired image translation. | https://openreview.net/pdf/4e007c37023074d72d5fdc49366594785523cb68.pdf |
Implicit Neural Representations and the Algebra of Complex Wavelets | https://openreview.net/forum?id=uZfjFyPAvn | https://openreview.net/forum?id=uZfjFyPAvn | T Mitchell Roddenberry,Vishwanath Saragadam,Maarten V. de Hoop,Richard Baraniuk | ICLR 2024,Poster | Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains. By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively couple spatial and spectral features of the represented signal in a way that is not obvious in the usual discrete representation. Although INRs using sinusoidal activation functions have been studied in terms of Fourier theory, recent works have shown the advantage of using wavelets instead of sinusoids as activation functions, due to their ability to simultaneously localize in both frequency and space. In this work, we approach such INRs and demonstrate how they resolve high-frequency features of signals from coarse approximations performed in the first layer of the MLP. This leads to multiple prescriptions for the design of INR architectures, including the use of progressive wavelets, decoupling of low and high-pass approximations, and initialization schemes based on the singularities of the target signal. | https://openreview.net/pdf/4204c82f9ab502c4460ea36edf09dabf851c3781.pdf |
Fiber Monte Carlo | https://openreview.net/forum?id=sP1tCl2QBk | https://openreview.net/forum?id=sP1tCl2QBk | Nick Richardson,Deniz Oktay,Yaniv Ovadia,James C Bowden,Ryan P Adams | ICLR 2024,Poster | Integrals with discontinuous integrands are ubiquitous, arising from discrete structure in applications like topology optimization, graphics, and computational geometry. These integrals are often part of a forward model in an inverse problem where it is necessary to reason backwards about the parameters, ideally using gradient-based optimization. Monte Carlo methods are widely used to estimate the value of integrals, but this results in a non-differentiable approximation that is amenable to neither conventional automatic differentiation nor reparameterization-based gradient methods. This significantly disrupts efforts to integrate machine learning methods in areas that exhibit these discontinuities: physical simulation and robotics, design, graphics, and computational geometry. Although bespoke domain-specific techniques can handle special cases, a general methodology to wield automatic differentiation in these discrete contexts is wanting. We introduce a differentiable variant of the simple Monte Carlo estimator which samples line segments rather than points from the domain. We justify our estimator analytically as conditional Monte Carlo and demonstrate the diverse functionality of the method as applied to image stylization, topology optimization, and computational geometry. | https://openreview.net/pdf/7d5a83d7a94183d1d9225cc666d97a8d77188d5b.pdf |
Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation | https://openreview.net/forum?id=KQe9tHd0k8 | https://openreview.net/forum?id=KQe9tHd0k8 | Shreyas Havaldar,Navodita Sharma,Shubhi Sareen,Karthikeyan Shanmugam,Aravindan Raghuveer | ICLR 2024,Poster | Learning from Label Proportions (LLP) is a learning problem where only aggregate level labels are available for groups of instances, called bags, during training, and the aim is to get the best performance at the instance-level on the test data. This setting arises in domains like advertising and medicine due to privacy considerations. We propose a novel algorithmic framework for this problem that iteratively performs two main steps. For the first step (Pseudo Labeling) in every iteration, we define a Gibbs distribution over binary instance labels that incorporates a) covariate information through the constraint that instances with similar covariates should have similar labels and b) the bag level aggregated label. We then use Belief Propagation (BP) to marginalize the Gibbs distribution to obtain pseudo labels. In the second step (Embedding Refinement), we use the pseudo labels to provide supervision for a learner that yields a better embedding. Further, we iterate on the two steps again by using the second step's embeddings as new covariates for the next iteration. In the final iteration, a classifier is trained using the pseudo labels. Our algorithm displays strong gains against several SOTA baselines (up to **15%**) for the LLP Binary Classification problem on various dataset types - tabular and Image. We achieve these improvements with minimal computational overhead above standard supervised learning due to Belief Propagation, for large bag sizes, even for a million samples. | https://openreview.net/pdf/fe1295b607d0d088fa07653fbda2b9c5a57e521f.pdf |
Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems | https://openreview.net/forum?id=WQwV7Y8qwa | https://openreview.net/forum?id=WQwV7Y8qwa | Orren Karniol-Tambour,David M. Zoltowski,E. Mika Diamanti,Lucas Pinto,Carlos D Brody,David W. Tank,Jonathan W. Pillow | ICLR 2024,Poster | Understanding how multiple brain regions interact to produce behavior is a major challenge in systems neuroscience, with many regions causally implicated in common tasks such as sensory processing and decision making. A precise description of interactions between regions remains an open problem. Moreover, neural dynamics are nonlinear and non-stationary. Here, we propose MR-SDS, a multiregion, switching nonlinear state space model that decomposes global dynamics into local and cross-communication components in the latent space. MR-SDS includes directed interactions between brain regions, allowing for estimation of state-dependent communication signals, and accounts for sensory inputs effects. We show that our model accurately recovers latent trajectories, vector fields underlying switching nonlinear dynamics, and cross-region communication profiles in three simulations. We then apply our method to two large-scale, multi-region neural datasets involving mouse decision making. The first includes hundreds of neurons per region, recorded simultaneously at single-cell-resolution across 3 distant cortical regions. The second is a mesoscale widefield dataset of 8 adjacent cortical regions imaged across both hemispheres. On these multi-region datasets, our model outperforms existing piece-wise linear multi-region models and reveals multiple distinct dynamical states and a rich set of cross-region communication profiles. | https://openreview.net/pdf/e972a0d03b94f15cfd893473ba7abbb0f7ec0e16.pdf |
Neural Language of Thought Models | https://openreview.net/forum?id=HYyRwm367m | https://openreview.net/forum?id=HYyRwm367m | Yi-Fu Wu,Minseung Lee,Sungjin Ahn | ICLR 2024,Poster | The Language of Thought Hypothesis suggests that human cognition operates on a structured, language-like system of mental representations. While neural language models can naturally benefit from the compositional structure inherently and explicitly expressed in language data, learning such representations from non-linguistic general observations, like images, remains a challenge. In this work, we introduce the Neural Language of Thought Model (NLoTM), a novel approach for unsupervised learning of LoTH-inspired representation and generation. NLoTM comprises two key components: (1) the Semantic Vector-Quantized Variational Autoencoder, which learns hierarchical, composable discrete representations aligned with objects and their properties, and (2) the Autoregressive LoT Prior, an autoregressive transformer that learns to generate semantic concept tokens compositionally, capturing the underlying data distribution. We evaluate NLoTM on several 2D and 3D image datasets, demonstrating superior performance in downstream tasks, out-of-distribution generalization, and image generation quality compared to patch-based VQ-VAE and continuous object-centric representations. Our work presents a significant step towards creating neural networks exhibiting more human-like understanding by developing LoT-like representations and offers insights into the intersection of cognitive science and machine learning. | https://openreview.net/pdf/49107366def27f8426cf313dc870595042c85c07.pdf |
Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization | https://openreview.net/forum?id=0t1O8ziRZp | https://openreview.net/forum?id=0t1O8ziRZp | Animesh Basak Chowdhury,Marco Romanelli,Benjamin Tan,Ramesh Karri,Siddharth Garg | ICLR 2024,Poster | Logic synthesis, a pivotal stage in chip design, entails optimizing chip specifications encoded in hardware description languages like Verilog into highly efficient implementations using Boolean logic gates. The process involves a sequential application of logic minimization heuristics (``synthesis recipe"), with their arrangement significantly impacting crucial metrics such as area and delay. Addressing the challenge posed by the broad spectrum of hardware design complexities — from variations of past designs (e.g., adders and multipliers) to entirely novel configurations (e.g., innovative processor instructions) — requires a nuanced 'synthesis recipe' guided by human expertise and intuition. This study conducts a thorough examination of learning and search techniques for logic synthesis, unearthing a surprising revelation: pre-trained agents, when confronted with entirely novel designs, may veer off course, detrimentally affecting the search trajectory. We present ABC-RL, a meticulously tuned $\alpha$ parameter that adeptly adjusts recommendations from pre-trained agents during the search process. Computed based on similarity scores through nearest neighbor retrieval from the training dataset, ABC-RL yields superior synthesis recipes tailored for a wide array of hardware designs. Our findings showcase substantial enhancements in the Quality of Result (QoR) of synthesized circuits, boasting improvements of up to 24.8\% compared to state-of-the-art techniques. Furthermore, ABC-RL achieves an impressive up to 9x reduction in runtime (iso-QoR) when compared to current state-of-the-art methodologies. | https://openreview.net/pdf/45352e5a0436249e936152d04e6550dff1475ac5.pdf |
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting | https://openreview.net/forum?id=ztpy1gsUpT | https://openreview.net/forum?id=ztpy1gsUpT | Xinlu Zhang,Shiyang Li,Xianjun Yang,Chenxin Tian,Yao Qin,Linda Ruth Petzold | ICLR 2024,Poster | Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under $privacy-restricted$ scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability. | https://openreview.net/pdf/870c2d2b9c207c1295cc917d1fe533ee69e399a9.pdf |
What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity | https://openreview.net/forum?id=jsvvPVVzwf | https://openreview.net/forum?id=jsvvPVVzwf | Gabryel Mason-Williams,Fredrik Dahlqvist | ICLR 2024,Poster | Pruning is an effective method to reduce the size of deep neural network models, maintain accuracy, and, in some cases, improve the network's overall performance. However, the mechanisms underpinning pruning remain unclear. Why can different methods prune by different percentages yet achieve similar performance? Why can we not prune at the start of training? Why are some models more amenable to being pruned than others? Given a model, what is the maximum amount it can be pruned before significantly affecting the performance? This paper explores and answers these questions from the global unstructured magnitude pruning perspective with one epoch of fine-tuning. We develop the idea that cosine similarity is an effective proxy measure for functional similarity between the parent and the pruned network. We prove that the L1 pruning method is optimal when pruning by cosine similarity. We show that the higher the kurtosis of a model's parameter distribution, the more it can be pruned while maintaining performance. Finally, we present a simple method to determine the optimal amount by which a network can be L1-pruned based on its parameter distribution. The code demonstrating the method is available at https://github.com/gmw99/what_makes_a_good_prune | https://openreview.net/pdf/d36cb7ab7a56f325f5bb6a9e1bbccce752db0563.pdf |
Enhancing Human Experience in Human-Agent Collaboration: A Human-Centered Modeling Approach Based on Positive Human Gain | https://openreview.net/forum?id=BqEvdOS1Hs | https://openreview.net/forum?id=BqEvdOS1Hs | Yiming Gao,Feiyu Liu,Liang Wang,Dehua Zheng,Zhenjie Lian,Weixuan Wang,Wenjin Yang,Siqin Li,Xianliang Wang,Wenhui Chen,Jing Dai,QIANG FU,Yang Wei,Lanxiao Huang,Wei Liu | ICLR 2024,Poster | Existing game AI research mainly focuses on enhancing agents' abilities to win games, but this does not inherently make humans have a better experience when collaborating with these agents. For example, agents may dominate the collaboration and exhibit unintended or detrimental behaviors, leading to poor experiences for their human partners. In other words, most game AI agents are modeled in a "self-centered" manner. In this paper, we propose a "human-centered" modeling scheme for collaborative agents that aims to enhance the experience of humans. Specifically, we model the experience of humans as the goals they expect to achieve during the task. We expect that agents should learn to enhance the extent to which humans achieve these goals while maintaining agents' original abilities (e.g., winning games). To achieve this, we propose the Reinforcement Learning from Human Gain (RLHG) approach. The RLHG approach introduces a "baseline", which corresponds to the extent to which humans primitively achieve their goals, and encourages agents to learn behaviors that can effectively enhance humans in achieving their goals better. We evaluate the RLHG agent in the popular Multi-player Online Battle Arena (MOBA) game, Honor of Kings, by conducting real-world human-agent tests. Both objective performance and subjective preference results show that the RLHG agent provides participants better gaming experience. | https://openreview.net/pdf/6ce678ca6108e4a85517e2526c83d7d968f7108b.pdf |
Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis | https://openreview.net/forum?id=vY9nzQmQBw | https://openreview.net/forum?id=vY9nzQmQBw | Hubert Siuzdak | ICLR 2024,Poster | Recent advancements in neural vocoding are predominantly driven by Generative Adversarial Networks (GANs) operating in the time-domain. While effective, this approach neglects the inductive bias offered by time-frequency representations, resulting in redundant and computationally intensive upsampling operations. Fourier-based time-frequency representation is an appealing alternative, aligning more accurately with human auditory perception, and benefiting from well-established fast algorithms for its computation. Nevertheless, direct reconstruction of complex-valued spectrograms has been historically problematic, primarily due to phase recovery issues. This study seeks to close this gap by presenting Vocos, a new model that directly generates Fourier spectral coefficients. Vocos not only matches the state-of-the-art in audio quality, as demonstrated in our evaluations, but it also substantially improves computational efficiency, achieving an order of magnitude increase in speed compared to prevailing time-domain neural vocoding approaches. The source code and model weights have been open-sourced. | https://openreview.net/pdf/d6b6fd2b9464f306e29d42b554de0a493bb52ade.pdf |