NIPS 2023 Accepted Paper Meta Info Dataset
This dataset was collected from the NIPS 2023 proceedings website (https://papers.nips.cc/paper_files/paper/2023) as well as the DeepNLP paper index (http://www.deepnlp.org/content/paper/nips2023). Researchers interested in analyzing NIPS 2023 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted at the NIPS 2023 conference (see the loading sketch after the example record below). To explore more AI and robotics papers (NIPS/ICML/ICLR/IROS/ICRA, etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
Meta Information of Each Paper (JSON)
{
  "title": "Scalable Membership Inference Attacks via Quantile Regression",
  "url": "https://papers.nips.cc/paper_files/paper/2023/hash/01328d0767830e73a612f9073e9ff15f-Abstract-Conference.html",
  "authors": "Martin Bertran, Shuai Tang, Aaron Roth, Michael Kearns, Jamie H. Morgenstern, Steven Z. Wu",
  "detail_url": "https://papers.nips.cc/paper_files/paper/2023/hash/01328d0767830e73a612f9073e9ff15f-Abstract-Conference.html",
  "tags": "NIPS 2023",
  "Bibtex": "https://papers.nips.cc/paper_files/paper/20306-/bibtex",
  "Paper": "https://papers.nips.cc/paper_files/paper/2023/file/01328d0767830e73a612f9073e9ff15f-Paper-Conference.pdf",
  "abstract": "Membership inference attacks are designed to determine, using black box access to trained models, whether a particular example was used in training or not. Membership inference can be formalized as a hypothesis testing problem. The most effective existing attacks estimate the distribution of some test statistic (usually the model's confidence on the true label) on points that were (and were not) used in training by training many \\emph{shadow models}---i.e. models of the same architecture as the model being attacked, trained on a random subsample of data. While effective, these attacks are extremely computationally expensive, especially when the model under attack is large. \\footnotetext[0]{Martin and Shuai are the lead authors, and other authors are ordered alphabetically. {maberlop,shuat}@amazon.com}We introduce a new class of attacks based on performing quantile regression on the distribution of confidence scores induced by the model under attack on points that are not used in training. We show that our method is competitive with state-of-the-art shadow model attacks, while requiring substantially less compute because our attack requires training only a single model. Moreover, unlike shadow model attacks, our proposed attack does not require any knowledge of the architecture of the model under attack and is therefore truly ``black-box\". We show the efficacy of this approach in an extensive series of experiments on various datasets and model architectures. Our code is available at \\href{https://github.com/amazon-science/quantile-mia}{github.com/amazon-science/quantile-mia.}"
}
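As a minimal sketch of how the records can be consumed, the snippet below loads a file with the schema shown above and runs a simple keyword count over titles. The filename nips2023.json is a placeholder for whichever cleaned JSON file you download, and the sketch assumes the file is in JSON Lines format (one object per row, as described above); adjust the loading step if your copy is a single JSON array instead.

```python
import json

# "nips2023.json" is a placeholder filename: point it at the cleaned JSON
# file you downloaded. Assumes one JSON object per line (JSON Lines), with
# the fields shown in the example record above.
papers = []
with open("nips2023.json", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            papers.append(json.loads(line))

print(f"Total papers loaded: {len(papers)}")

# Simple trend probe: how many accepted titles mention a given keyword?
keyword = "diffusion"
matching = [p["title"] for p in papers if keyword in p["title"].lower()]
print(f'Titles containing "{keyword}": {len(matching)}')
for title in matching[:5]:
    print(" -", title)
```

The same loop works for any of the other fields (e.g., counting authors or collecting PDF links from the "Paper" field), since every record carries the same flat key-value structure.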
Related
AI Agent Marketplace and Search
Robot Search
Equation and Academic Search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex