title | url | authors | detail_url | tags | Bibtex | Paper | Supplemental | abstract | Errata |
---|---|---|---|---|---|---|---|---|---|
Conformal Prediction for Uncertainty-Aware Planning with Diffusion Dynamics Model | https://papers.nips.cc/paper_files/paper/2023/hash/fe318a2b6c699808019a456b706cd845-Abstract-Conference.html | Jiankai Sun, Yiqi Jiang, Jianing Qiu, Parth Nobel, Mykel J Kochenderfer, Mac Schwager | https://papers.nips.cc/paper_files/paper/2023/hash/fe318a2b6c699808019a456b706cd845-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/20581-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fe318a2b6c699808019a456b706cd845-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/fe318a2b6c699808019a456b706cd845-Supplemental-Conference.pdf | Robotic applications often involve working in environments that are uncertain, dynamic, and partially observable. Recently, diffusion models have been proposed for learning trajectory prediction models trained from expert demonstrations, which can be used for planning in robot tasks. Such models have demonstrated a strong ability to overcome challenges such as multi-modal action distributions, high-dimensional output spaces, and training instability. It is crucial to quantify the uncertainty of these dynamics models when using them for planning. In this paper, we quantify the uncertainty of diffusion dynamics models using Conformal Prediction (CP). Given a finite number of exchangeable expert trajectory examples (called the “calibration set”), we use CP to obtain a set in the trajectory space (called the “coverage region”) that is guaranteed to contain the output of the diffusion model with a user-defined probability (called the “coverage level”). In PlanCP, inspired by concepts from conformal prediction, we modify the loss function for training the diffusion model to include a quantile term to encourage more robust performance across the variety of training examples. At test time, we then calibrate PlanCP with a conformal prediction process to obtain coverage sets for the trajectory prediction with guaranteed coverage level. We evaluate our algorithm on various planning tasks and model-based offline reinforcement learning tasks and show that it reduces the uncertainty of the learned trajectory prediction model. As a by-product, our algorithm PlanCP outperforms prior algorithms on existing offline RL benchmarks and challenging continuous planning tasks. Our method can be combined with most model-based planning approaches to produce uncertainty estimates of the closed-loop system. | null |
Max-Sliced Mutual Information | https://papers.nips.cc/paper_files/paper/2023/hash/fe4da14f07561a232782820d30ea22f3-Abstract-Conference.html | Dor Tsur, Ziv Goldfeld, Kristjan Greenewald | https://papers.nips.cc/paper_files/paper/2023/hash/fe4da14f07561a232782820d30ea22f3-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/22801-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fe4da14f07561a232782820d30ea22f3-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/fe4da14f07561a232782820d30ea22f3-Supplemental-Conference.pdf | Quantifying dependence between high-dimensional random variables is central to statistical learning and inference. Two classical methods are canonical correlation analysis (CCA), which identifies maximally correlated projected versions of the original variables, and Shannon's mutual information, which is a universal dependence measure that also captures high-order dependencies. However, CCA only accounts for linear dependence, which may be insufficient for certain applications, while mutual information is often infeasible to compute/estimate in high dimensions. This work proposes a middle ground in the form of a scalable information-theoretic generalization of CCA, termed max-sliced mutual information (mSMI). mSMI equals the maximal mutual information between low-dimensional projections of the high-dimensional variables, which reduces back to CCA in the Gaussian case. It enjoys the best of both worlds: capturing intricate dependencies in the data while being amenable to fast computation and scalable estimation from samples. We show that mSMI retains favorable structural properties of Shannon's mutual information, like variational forms and identification of independence. We then study statistical estimation of mSMI, propose an efficiently computable neural estimator, and couple it with formal non-asymptotic error bounds. We present experiments that demonstrate the utility of mSMI for several tasks, encompassing independence testing, multi-view representation learning, algorithmic fairness, and generative modeling. We observe that mSMI consistently outperforms competing methods with little-to-no computational overhead. | null |
Neural Data Transformer 2: Multi-context Pretraining for Neural Spiking Activity | https://papers.nips.cc/paper_files/paper/2023/hash/fe51de4e7baf52e743b679e3bdba7905-Abstract-Conference.html | Joel Ye, Jennifer Collinger, Leila Wehbe, Robert Gaunt | https://papers.nips.cc/paper_files/paper/2023/hash/fe51de4e7baf52e743b679e3bdba7905-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/19786-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fe51de4e7baf52e743b679e3bdba7905-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/fe51de4e7baf52e743b679e3bdba7905-Supplemental-Conference.zip | The neural population spiking activity recorded by intracortical brain-computer interfaces (iBCIs) contain rich structure. Current models of such spiking activity are largely prepared for individual experimental contexts, restricting data volume to that collectable within a single session and limiting the effectiveness of deep neural networks (DNNs). The purported challenge in aggregating neural spiking data is the pervasiveness of context-dependent shifts in the neural data distributions. However, large scale unsupervised pretraining by nature spans heterogeneous data, and has proven to be a fundamental recipe for successful representation learning across deep learning. We thus develop Neural Data Transformer 2 (NDT2), a spatiotemporal Transformer for neural spiking activity, and demonstrate that pretraining can leverage motor BCI datasets that span sessions, subjects, and experimental tasks. NDT2 enables rapid adaptation to novel contexts in downstream decoding tasks and opens the path to deployment of pretrained DNNs for iBCI control. Code: https://github.com/joel99/contextgeneralbci | null |
Data Quality in Imitation Learning | https://papers.nips.cc/paper_files/paper/2023/hash/fe692980c5d9732cf153ce27947653a7-Abstract-Conference.html | Suneel Belkhale, Yuchen Cui, Dorsa Sadigh | https://papers.nips.cc/paper_files/paper/2023/hash/fe692980c5d9732cf153ce27947653a7-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/22203-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fe692980c5d9732cf153ce27947653a7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/fe692980c5d9732cf153ce27947653a7-Supplemental-Conference.zip | In supervised learning, the question of data quality and curation has been sidelined in recent years in favor of increasingly more powerful and expressive models that can ingest internet-scale data. However, in offline learning for robotics, we simply lack internet scale data, and so high quality datasets are a necessity. This is especially true in imitation learning (IL), a sample efficient paradigm for robot learning using expert demonstrations. Policies learned through IL suffer from state distribution shift at test time due to compounding errors in action prediction, which leads to unseen states that the policy cannot recover from. Instead of designing new algorithms to address distribution shift, an alternative perspective is to develop new ways of assessing and curating datasets. There is growing evidence that the same IL algorithms can have substantially different performance across different datasets. This calls for a formalism for defining metrics of "data quality" that can further be leveraged for data curation. In this work, we take the first step toward formalizing data quality for imitation learning through the lens of distribution shift: a high quality dataset encourages the policy to stay in distribution at test time. We propose two fundamental properties that are necessary for a high-quality dataset: i) action divergence: the mismatch between the expert and learned policy at certain states; and ii) transition diversity: the noise present in the system for a given state and action. We investigate the combined effect of these two key properties in imitation learning theoretically, and we empirically analyze models trained on a variety of different data sources. We show that state diversity is not always beneficial, and we demonstrate how action divergence and transition diversity interact in practice. | null |
Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization | https://papers.nips.cc/paper_files/paper/2023/hash/fe8debfd5a36ada52e038c8b2078b2ce-Abstract-Conference.html | Jameel Abdul Samadh, Mohammad Hanan Gani, Noor Hussein, Muhammad Uzair Khattak, Muhammad Muzammal Naseer, Fahad Shahbaz Khan, Salman H. Khan | https://papers.nips.cc/paper_files/paper/2023/hash/fe8debfd5a36ada52e038c8b2078b2ce-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/20910-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fe8debfd5a36ada52e038c8b2078b2ce-Paper-Conference.pdf | null | The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains -- distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently across all datasets compared to the existing state-of-the-art. Our source code and models are available at https://jameelhassan.github.io/promptalign | null |
SLM: A Smoothed First-Order Lagrangian Method for Structured Constrained Nonconvex Optimization | https://papers.nips.cc/paper_files/paper/2023/hash/fe90657b12193c7b52a3418bdc351807-Abstract-Conference.html | Songtao Lu | https://papers.nips.cc/paper_files/paper/2023/hash/fe90657b12193c7b52a3418bdc351807-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/21805-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fe90657b12193c7b52a3418bdc351807-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/fe90657b12193c7b52a3418bdc351807-Supplemental-Conference.pdf | Functional constrained optimization (FCO) has emerged as a powerful tool for solving various machine learning problems. However, with the rapid increase in applications of neural networks in recent years, it has become apparent that both the objective and constraints often involve nonconvex functions, which poses significant challenges in obtaining high-quality solutions. In this work, we focus on a class of nonconvex FCO problems with nonconvex constraints, where the two optimization variables are nonlinearly coupled in the inequality constraint. Leveraging the primal-dual optimization framework, we propose a smoothed first-order Lagrangian method (SLM) for solving this class of problems. We establish the theoretical convergence guarantees of SLM to the Karush-Kuhn-Tucker (KKT) solutions through quantifying dual error bounds. By establishing connections between this structured FCO and equilibrium-constrained nonconvex problems (also known as bilevel optimization), we apply the proposed SLM to tackle bilevel optimization oriented problems where the lower-level problem is nonconvex. Numerical results obtained from both toy examples and hyper-data cleaning problems demonstrate the superiority of SLM compared to benchmark methods. | null |
Red Teaming Deep Neural Networks with Feature Synthesis Tools | https://papers.nips.cc/paper_files/paper/2023/hash/febe5c5c6973f713cc43bf0f7c90edbe-Abstract-Conference.html | Stephen Casper, Tong Bu, Yuxiao Li, Jiawei Li, Kevin Zhang, Kaivalya Hariharan, Dylan Hadfield-Menell | https://papers.nips.cc/paper_files/paper/2023/hash/febe5c5c6973f713cc43bf0f7c90edbe-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/22497-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/febe5c5c6973f713cc43bf0f7c90edbe-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/febe5c5c6973f713cc43bf0f7c90edbe-Supplemental-Conference.zip | Interpretable AI tools are often motivated by the goal of understanding model behavior in out-of-distribution (OOD) contexts. Despite the attention this area of study receives, there are comparatively few cases where these tools have identified previously unknown bugs in models. We argue that this is due, in part, to a common feature of many interpretability methods: they analyze model behavior by using a particular dataset. This only allows for the study of the model in the context of features that the user can sample in advance. To address this, a growing body of research involves interpreting models using feature synthesis methods that do not depend on a dataset. In this paper, we benchmark the usefulness of interpretability tools for model debugging. Our key insight is that we can implant human-interpretable trojans into models and then evaluate these tools based on whether they can help humans discover them. This is analogous to finding OOD bugs, except the ground truth is known, allowing us to know when a user's interpretation is correct. We make four contributions. (1) We propose trojan discovery as an evaluation task for interpretability tools and introduce a benchmark with 12 trojans of 3 different types. (2) We demonstrate the difficulty of this benchmark with a preliminary evaluation of 16 state-of-the-art feature attribution/saliency tools. Even under ideal conditions, given direct access to data with the trojan trigger, these methods still often fail to identify bugs. (3) We evaluate 7 feature-synthesis methods on our benchmark. (4) We introduce and evaluate 2 new variants of the best-performing method from the previous evaluation. | null |
Rehearsal Learning for Avoiding Undesired Future | https://papers.nips.cc/paper_files/paper/2023/hash/fed1ea8dcc2a13f3835cc854e8c8294c-Abstract-Conference.html | Tian Qin, Tian-Zuo Wang, Zhi-Hua Zhou | https://papers.nips.cc/paper_files/paper/2023/hash/fed1ea8dcc2a13f3835cc854e8c8294c-Abstract-Conference.html | NIPS 2023 | null | null | null | null | null |
Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization | https://papers.nips.cc/paper_files/paper/2023/hash/fef126561bbf9d4467dbb8d27334b8fe-Abstract-Conference.html | Yan Sun, Li Shen, Dacheng Tao | https://papers.nips.cc/paper_files/paper/2023/hash/fef126561bbf9d4467dbb8d27334b8fe-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/19915-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/fef126561bbf9d4467dbb8d27334b8fe-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/fef126561bbf9d4467dbb8d27334b8fe-Supplemental-Conference.zip | Federated learning (FL) is a distributed paradigm that coordinates massive local clients to collaboratively train a global model via stage-wise local training processes on the heterogeneous dataset. Previous works have implicitly studied that FL suffers from the "client-drift" problem, which is caused by the inconsistent optimum across local clients. However, till now it still lacks solid theoretical analysis to explain the impact of this local inconsistency. To alleviate the negative impact of the "client drift" and explore its substance in FL, in this paper, we first design an efficient FL algorithm FedInit, which allows employing the personalized relaxed initialization state at the beginning of each local training stage. Specifically, FedInit initializes the local state by moving away from the current global state towards the reverse direction of the latest local state. This relaxed initialization helps to revise the local divergence and enhance the local consistency level. Moreover, to further understand how inconsistency disrupts performance in FL, we introduce the excess risk analysis and study the divergence term to investigate the test error of the proposed FedInit method. Our studies show that on the non-convex objectives, optimization error is not sensitive to this local inconsistency, while it mainly affects the generalization error bound in FedInit. Extensive experiments are conducted to validate this conclusion. Our proposed FedInit could achieve state-of-the-art (SOTA) results compared to several advanced benchmarks without any additional costs. Meanwhile, stage-wise relaxed initialization could also be incorporated into the current advanced algorithms to achieve higher performance in the FL paradigm. | null |
Errors-in-variables Fr\'echet Regression with Low-rank Covariate Approximation | https://papers.nips.cc/paper_files/paper/2023/hash/ff06c57ef80625386884906c2d2d2429-Abstract-Conference.html | Dogyoon Song, Kyunghee Han | https://papers.nips.cc/paper_files/paper/2023/hash/ff06c57ef80625386884906c2d2d2429-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/22954-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff06c57ef80625386884906c2d2d2429-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff06c57ef80625386884906c2d2d2429-Supplemental-Conference.zip | Fr\'echet regression has emerged as a promising approach for regression analysis involving non-Euclidean response variables. However, its practical applicability has been hindered by its reliance on ideal scenarios with abundant and noiseless covariate data. In this paper, we present a novel estimation method that tackles these limitations by leveraging the low-rank structure inherent in the covariate matrix. Our proposed framework combines the concepts of global Fr\'echet regression and principal component regression, aiming to improve the efficiency and accuracy of the regression estimator. By incorporating the low-rank structure, our method enables more effective modeling and estimation, particularly in high-dimensional and errors-in-variables regression settings. We provide a theoretical analysis of the proposed estimator's large-sample properties, including a comprehensive rate analysis of bias, variance, and additional variations due to measurement errors. Furthermore, our numerical experiments provide empirical evidence that supports the theoretical findings, demonstrating the superior performance of our approach. Overall, this work introduces a promising framework for regression analysis of non-Euclidean variables, effectively addressing the challenges associated with limited and noisy covariate data, with potential applications in diverse fields. | null |
Coupled Reconstruction of Cortical Surfaces by Diffeomorphic Mesh Deformation | https://papers.nips.cc/paper_files/paper/2023/hash/ff0da832a110c6537e885cdfbac80a94-Abstract-Conference.html | Hao Zheng, Hongming Li, Yong Fan | https://papers.nips.cc/paper_files/paper/2023/hash/ff0da832a110c6537e885cdfbac80a94-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/22325-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff0da832a110c6537e885cdfbac80a94-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff0da832a110c6537e885cdfbac80a94-Supplemental-Conference.pdf | Accurate reconstruction of cortical surfaces from brain magnetic resonance images (MRIs) remains a challenging task due to the notorious partial volume effect in brain MRIs and the cerebral cortex's thin and highly folded patterns. Although many promising deep learning-based cortical surface reconstruction methods have been developed, they typically fail to model the interdependence between inner (white matter) and outer (pial) cortical surfaces, which can help generate cortical surfaces with spherical topology. To robustly reconstruct the cortical surfaces with topological correctness, we develop a new deep learning framework to jointly reconstruct the inner, outer, and their in-between (midthickness) surfaces and estimate cortical thickness directly from 3D MRIs. Our method first estimates the midthickness surface and then learns three diffeomorphic flows jointly to optimize the midthickness surface and deform it inward and outward to the inner and outer cortical surfaces respectively, regularized by topological correctness. Our method also outputs a cortex thickness value for each surface vertex, estimated from its diffeomorphic deformation trajectory. Our method has been evaluated on two large-scale neuroimaging datasets, including ADNI and OASIS, achieving state-of-the-art cortical surface reconstruction performance in terms of accuracy, surface regularity, and computation efficiency. | null |
Active representation learning for general task space with applications in robotics | https://papers.nips.cc/paper_files/paper/2023/hash/ff4039889b7f89635e9cbd5cefffa0d4-Abstract-Conference.html | Yifang Chen, Yingbing Huang, Simon S. Du, Kevin G. Jamieson, Guanya Shi | https://papers.nips.cc/paper_files/paper/2023/hash/ff4039889b7f89635e9cbd5cefffa0d4-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/20224-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff4039889b7f89635e9cbd5cefffa0d4-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff4039889b7f89635e9cbd5cefffa0d4-Supplemental-Conference.pdf | Representation learning based on multi-task pretraining has become a powerful approach in many domains. In particular, task-aware representation learning aims to learn an optimal representation for a specific target task by sampling data from a set of source tasks, while task-agnostic representation learning seeks to learn a universal representation for a class of tasks. In this paper, we propose a general and versatile algorithmic and theoretic framework for \emph{active representation learning}, where the learner optimally chooses which source tasks to sample from. This framework, along with a tractable meta algorithm, allows most arbitrary target and source task spaces (from discrete to continuous), covers both task-aware and task-agnostic settings, and is compatible with deep representation learning practices. We provide several instantiations under this framework, from bilinear and feature-based nonlinear to general nonlinear cases. In the bilinear case, by leveraging the non-uniform spectrum of the task representation and the calibrated source-target relevance, we prove that the sample complexity to achieve $\varepsilon$-excess risk on target scales with $(k^*)^2 ||v^*||_2^2 \varepsilon^{-2}$ where $k^*$ is the effective dimension of the target and $||v^*||_2^2 \in (0,1]$ represents the connection between source and target space. Compared to the passive one, this can save up to $\frac{1}{d_W}$ of sample complexity, where $d_W$ is the task space dimension. Finally, we demonstrate different instantiations of our meta algorithm in synthetic datasets and robotics problems, from pendulum simulations to real-world drone flight datasets. On average, our algorithms outperform baselines by 20%-70%. | null |
Model and Feature Diversity for Bayesian Neural Networks in Mutual Learning | https://papers.nips.cc/paper_files/paper/2023/hash/ff521f7570d6ed23217ba5780753a1f7-Abstract-Conference.html | Van Cuong Pham, Cuong Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do | https://papers.nips.cc/paper_files/paper/2023/hash/ff521f7570d6ed23217ba5780753a1f7-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/21536-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff521f7570d6ed23217ba5780753a1f7-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff521f7570d6ed23217ba5780753a1f7-Supplemental-Conference.pdf | Bayesian Neural Networks (BNNs) offer probability distributions for model parameters, enabling uncertainty quantification in predictions. However, they often underperform compared to deterministic neural networks. Utilizing mutual learning can effectively enhance the performance of peer BNNs. In this paper, we propose a novel approach to improve BNNs performance through deep mutual learning. The proposed approaches aim to increase diversity in both network parameter distributions and feature distributions, promoting peer networks to acquire distinct features that capture different characteristics of the input, which enhances the effectiveness of mutual learning. Experimental results demonstrate significant improvements in the classification accuracy, negative log-likelihood, and expected calibration error when compared to traditional mutual learning for BNNs. | null |
Fair Graph Distillation | https://papers.nips.cc/paper_files/paper/2023/hash/ff6540c54a847ef9114a332c101f5edc-Abstract-Conference.html | Qizhang Feng, Zhimeng (Stephen) Jiang, Ruiquan Li, Yicheng Wang, Na Zou, Jiang Bian, Xia Hu | https://papers.nips.cc/paper_files/paper/2023/hash/ff6540c54a847ef9114a332c101f5edc-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/19939-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff6540c54a847ef9114a332c101f5edc-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff6540c54a847ef9114a332c101f5edc-Supplemental-Conference.zip | As graph neural networks (GNNs) struggle with large-scale graphs due to high computational demands, data distillation for graph data promises to alleviate this issue by distilling a large real graph into a smaller distilled graph while maintaining comparable prediction performance for GNNs trained on both graphs. However, we observe that GNNs trained on distilled graphs may exhibit more severe group fairness problems than those trained on real graphs. Motivated by this observation, we propose \textit{fair graph distillation} (\Algnameabbr), an approach for generating small distilled \textit{fair and informative} graphs based on the graph distillation method. The challenge lies in the deficiency of sensitive attributes for nodes in the distilled graph, making most debiasing methods (e.g., regularization and adversarial debiasing) intractable for distilled graphs. We develop a simple yet effective bias metric, called coherence, for distilled graphs. Based on the proposed coherence metric, we introduce a framework for fair graph distillation using a bi-level optimization algorithm. Extensive experiments demonstrate that the proposed algorithm can achieve better prediction performance-fairness trade-offs across various datasets and GNN architectures. | null |
Optimal testing using combined test statistics across independent studies | https://papers.nips.cc/paper_files/paper/2023/hash/ff703bfaf652f00ae7b609ce0da3fde2-Abstract-Conference.html | Lasse Vuursteen, Botond Szabo, Aad van der Vaart, Harry van Zanten | https://papers.nips.cc/paper_files/paper/2023/hash/ff703bfaf652f00ae7b609ce0da3fde2-Abstract-Conference.html | NIPS 2023 | null | null | null | null | null |
Regret-Optimal Model-Free Reinforcement Learning for Discounted MDPs with Short Burn-In Time | https://papers.nips.cc/paper_files/paper/2023/hash/ff887781480973bd3cb6026feb378d1e-Abstract-Conference.html | Xiang Ji, Gen Li | https://papers.nips.cc/paper_files/paper/2023/hash/ff887781480973bd3cb6026feb378d1e-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/21470-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff887781480973bd3cb6026feb378d1e-Paper-Conference.pdf | null | A crucial problem in reinforcement learning is learning the optimal policy. We study this in tabular infinite-horizon discounted Markov decision processes under the online setting. The existing algorithms either fail to achieve regret optimality or have to incur a high memory and computational cost. In addition, existing optimal algorithms all require a long burn-in time in order to achieve optimal sample efficiency, i.e., their optimality is not guaranteed unless sample size surpasses a high threshold. We address both open problems by introducing a model-free algorithm that employs variance reduction and a novel technique that switches the execution policy in a slow-yet-adaptive manner. This is the first regret-optimal model-free algorithm in the discounted setting, with the additional benefit of a low burn-in time. | null |
Convolutional State Space Models for Long-Range Spatiotemporal Modeling | https://papers.nips.cc/paper_files/paper/2023/hash/ff9783ec29688387d44779d67d06ef66-Abstract-Conference.html | Jimmy Smith, Shalini De Mello, Jan Kautz, Scott Linderman, Wonmin Byeon | https://papers.nips.cc/paper_files/paper/2023/hash/ff9783ec29688387d44779d67d06ef66-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/20319-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff9783ec29688387d44779d67d06ef66-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff9783ec29688387d44779d67d06ef66-Supplemental-Conference.zip | Effectively modeling long spatiotemporal sequences is challenging due to the need to model complex spatial correlations and long-range temporal dependencies simultaneously. ConvLSTMs attempt to address this by updating tensor-valued states with recurrent neural networks, but their sequential computation makes them slow to train. In contrast, Transformers can process an entire spatiotemporal sequence, compressed into tokens, in parallel. However, the cost of attention scales quadratically in length, limiting their scalability to longer sequences. Here, we address the challenges of prior methods and introduce convolutional state space models (ConvSSM) that combine the tensor modeling ideas of ConvLSTM with the long sequence modeling approaches of state space methods such as S4 and S5. First, we demonstrate how parallel scans can be applied to convolutional recurrences to achieve subquadratic parallelization and fast autoregressive generation. We then establish an equivalence between the dynamics of ConvSSMs and SSMs, which motivates parameterization and initialization strategies for modeling long-range dependencies. The result is ConvS5, an efficient ConvSSM variant for long-range spatiotemporal modeling. ConvS5 significantly outperforms Transformers and ConvLSTM on a long horizon Moving-MNIST experiment while training $3\times$ faster than ConvLSTM and generating samples $400\times$ faster than Transformers. In addition, ConvS5 matches or exceeds the performance of state-of-the-art methods on challenging DMLab, Minecraft and Habitat prediction benchmarks and enables new directions for modeling long spatiotemporal sequences. | null |
CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image Steganography | https://papers.nips.cc/paper_files/paper/2023/hash/ff99390b6e942fb1dd7023f787fb0a27-Abstract-Conference.html | Jiwen Yu, Xuanyu Zhang, Youmin Xu, Jian Zhang | https://papers.nips.cc/paper_files/paper/2023/hash/ff99390b6e942fb1dd7023f787fb0a27-Abstract-Conference.html | NIPS 2023 | https://papers.nips.cc/paper_files/paper/22328-/bibtex | https://papers.nips.cc/paper_files/paper/2023/file/ff99390b6e942fb1dd7023f787fb0a27-Paper-Conference.pdf | https://papers.nips.cc/paper_files/paper/2023/file/ff99390b6e942fb1dd7023f787fb0a27-Supplemental-Conference.pdf | Current image steganography techniques are mainly focused on cover-based methods, which commonly have the risk of leaking secret images and poor robustness against degraded container images. Inspired by recent developments in diffusion models, we discovered that two properties of diffusion models, the ability to achieve translation between two images without training, and robustness to noisy data, can be used to improve security and natural robustness in image steganography tasks. For the choice of diffusion model, we selected Stable Diffusion, a type of conditional diffusion model, and fully utilized the latest tools from open-source communities, such as LoRAs and ControlNets, to improve the controllability and diversity of container images. In summary, we propose a novel image steganography framework, named Controllable, Robust and Secure Image Steganography (CRoSS), which has significant advantages in controllability, robustness, and security compared to cover-based image steganography methods. These benefits are obtained without additional training. To our knowledge, this is the first work to introduce diffusion models to the field of image steganography. In the experimental section, we conducted detailed experiments to demonstrate the advantages of our proposed CRoSS framework in controllability, robustness, and security. | null |
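
The rows above form a single pipe-delimited table, one NeurIPS 2023 paper per row, with `null` marking missing fields such as Bibtex, Paper, or abstract. Below is a minimal sketch of how such a dump could be loaded and queried with pandas; the file name `neurips_2023_papers.txt` is a hypothetical placeholder for an export of this table, not a documented file from the original page.

```python
# Hypothetical loading sketch: the path and the " | " delimiter handling are
# assumptions based on the layout of the table above, not an official export format.
import pandas as pd

COLUMNS = [
    "title", "url", "authors", "detail_url", "tags",
    "Bibtex", "Paper", "Supplemental", "abstract", "Errata",
]

rows = []
with open("neurips_2023_papers.txt", encoding="utf-8") as fh:  # assumed export of the table above
    for line in fh:
        line = line.strip()
        if not line or set(line) <= {"-", "|"}:  # skip blank lines and the ---|--- separator row
            continue
        # Cells are joined by " | " and rows may carry a trailing "|"; keep exactly len(COLUMNS) cells.
        cells = [c.strip() for c in line.rstrip("|").split(" | ")]
        if len(cells) != len(COLUMNS) or cells[0] == "title":  # skip the header and malformed rows
            continue
        rows.append([None if c == "null" else c for c in cells])

df = pd.DataFrame(rows, columns=COLUMNS)

# Example query: titles and PDF links of papers whose abstract mentions conformal prediction.
mask = df["abstract"].fillna("").str.contains("conformal prediction", case=False)
print(df.loc[mask, ["title", "Paper"]].to_string(index=False))
```

Splitting on the literal " | " separator, rather than every "|", keeps the trailing pipe and the `---|---` separator row from producing spurious empty columns.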