---
license: mit
task_categories:
  - text-classification
  - document-question-answering
language:
  - en
tags:
  - academic-papers
  - openreview
  - research
  - pdf
  - machine-learning
size_categories:
  - 1K<n<10K
---

OpenReview PDFs Dataset

🎯 Overview

This dataset contains 7,814 PDF files from OpenReview, covering machine learning and AI research papers. The files are organized into subdirectories by file size for efficient access and processing.

πŸ“ Repository Structure

data/
├── small_papers/     # 1,872 files (< 500KB each)
├── medium_papers/    # 4,605 files (500KB - 5MB each)
└── large_papers/     # 1,337 files (>= 5MB each)
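
If you prefer working with the raw PDF files rather than the datasets loader, the subdirectories above can be listed and downloaded directly with huggingface_hub. A minimal sketch (the individual filenames are not fixed here, so the example simply takes the first small paper it finds):

from huggingface_hub import list_repo_files, hf_hub_download

# List every file in the dataset repository
files = list_repo_files("sumuks/openreview-pdfs", repo_type="dataset")

# Keep only the PDFs in the small_papers bucket
small = [f for f in files if f.startswith("data/small_papers/")]
print(f"Found {len(small)} small papers")

# Download one PDF into the local cache and get its path
local_path = hf_hub_download(
    "sumuks/openreview-pdfs",
    filename=small[0],
    repo_type="dataset",
)
print(local_path)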

🚀 Quick Start

Installation

pip install datasets pdfplumber

Basic Usage

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("sumuks/openreview-pdfs")

# Access a PDF
sample = dataset['train'][0]
pdf_obj = sample['pdf']

# Extract text from the first page (guard against pages with no extractable text)
if pdf_obj.pages:
    text = pdf_obj.pages[0].extract_text() or ""
    print(text[:500])  # First 500 characters

Advanced Usage

import pandas as pd

# Extract metadata from all PDFs
pdf_data = []

for i, sample in enumerate(dataset['train']):
    pdf_obj = sample['pdf']
    
    # Get metadata
    title = "Unknown"
    author = "Unknown"
    if hasattr(pdf_obj, 'metadata') and pdf_obj.metadata:
        title = pdf_obj.metadata.get('Title', 'Unknown')
        author = pdf_obj.metadata.get('Author', 'Unknown')
    
    # Extract first page text
    first_page_text = ""
    if pdf_obj.pages:
        first_page_text = pdf_obj.pages[0].extract_text() or ""
    
    pdf_data.append({
        'index': i,
        'title': title,
        'author': author,
        'num_pages': len(pdf_obj.pages) if hasattr(pdf_obj, 'pages') else 0,
        'first_page_text': first_page_text
    })
    
    # Progress indicator
    if i % 100 == 0:
        print(f"Processed {i} PDFs...")

# Create DataFrame for analysis
df = pd.DataFrame(pdf_data)
print(f"Dataset summary:\n{df.describe()}")

Filtering by Page Count

# Filter papers by number of pages
short_papers = []
long_papers = []

for sample in dataset['train']:
    pdf_obj = sample['pdf']
    if hasattr(pdf_obj, 'pages'):
        num_pages = len(pdf_obj.pages)
        if num_pages <= 10:
            short_papers.append(sample)
        elif num_pages >= 20:
            long_papers.append(sample)

print(f"Short papers (≀10 pages): {len(short_papers)}")
print(f"Long papers (β‰₯20 pages): {len(long_papers)}")

📊 Dataset Statistics

  • Total PDFs: 7,814
  • Small Papers: 1,872 files (< 500KB)
  • Medium Papers: 4,605 files (500KB - 5MB)
  • Large Papers: 1,337 files (≥ 5MB; the bucket rule is sketched after this list)
  • Source: OpenReview platform
  • Domain: Machine Learning, AI, Computer Science
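
The size buckets above follow a simple threshold rule. A minimal sketch of that rule, applied to a hypothetical local file path:

import os

def size_bucket(path):
    # Classify a PDF into the buckets used above by its file size
    size = os.path.getsize(path)
    if size < 500 * 1024:            # < 500KB
        return "small_papers"
    if size < 5 * 1024 * 1024:       # 500KB - 5MB
        return "medium_papers"
    return "large_papers"            # >= 5MB

print(size_bucket("paper.pdf"))  # hypothetical local path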

🔬 Research Applications

Document Understanding

# Extract paper structure
for sample in dataset['train'].select(range(5)):
    pdf_obj = sample['pdf']
    
    print(f"Pages: {len(pdf_obj.pages)}")
    
    # Analyze page structure
    for i, page in enumerate(pdf_obj.pages[:3]):  # First 3 pages
        text = page.extract_text()
        if text:
            lines = text.split('\n')
            print(f"Page {i+1}: {len(lines)} lines")

Academic Text Mining

# Extract research topics and keywords
import re

keywords = {}
for sample in dataset['train'].select(range(100)):  # Sample first 100 papers
    pdf_obj = sample['pdf']
    
    if pdf_obj.pages:
        # Extract abstract (usually on first page)
        first_page = pdf_obj.pages[0].extract_text() or ""
        
        # Simple keyword extraction
        if 'abstract' in first_page.lower():
            # Extract common ML terms
            ml_terms = ['neural', 'learning', 'algorithm', 'model', 'training', 
                       'optimization', 'deep', 'network', 'classification', 'regression']
            
            for term in ml_terms:
                if term in first_page.lower():
                    keywords[term] = keywords.get(term, 0) + 1

print("Most common ML terms:")
for term, count in sorted(keywords.items(), key=lambda x: x[1], reverse=True):
    print(f"{term}: {count}")

Citation Analysis

# Extract citation patterns
import re

citation_patterns = []

for sample in dataset['train'].select(range(50)):
    pdf_obj = sample['pdf']
    
    if pdf_obj.pages:
        # Look for references section
        for page in pdf_obj.pages:
            text = page.extract_text()
            if text and 'references' in text.lower():
                # Simple citation extraction
                citations = re.findall(r'\[\d+\]', text)
                citation_patterns.extend(citations)

print(f"Found {len(citation_patterns)} citation references")

🛠️ Technical Details

PDF Processing

  • Library: Uses pdfplumber for PDF processing
  • Text Extraction: Full-text extraction with layout preservation
  • Metadata Access: Original document metadata when available
  • Image Support: Can extract images and figures (see the sketch below and the pdfplumber docs)
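
As a rough illustration of the image support mentioned above, pdfplumber exposes per-page image metadata through page.images. A minimal sketch that prints bounding boxes without rasterizing anything, reusing the dataset object loaded in Basic Usage:

sample = dataset['train'][0]
pdf_obj = sample['pdf']

for page_number, page in enumerate(pdf_obj.pages, start=1):
    # page.images is a list of dicts with position and size information
    for img in page.images:
        print(f"Page {page_number}: image at ({img['x0']:.0f}, {img['top']:.0f}), "
              f"size {img['width']:.0f} x {img['height']:.0f}")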

Performance Tips

# For large-scale processing, use streaming
dataset_stream = load_dataset("sumuks/openreview-pdfs", streaming=True)

# Process in batches
batch_size = 10
batch = []

for sample in dataset_stream['train']:
    batch.append(sample)
    
    if len(batch) >= batch_size:
        # Process batch
        for item in batch:
            pdf_obj = item['pdf']
            # Your processing here

        batch = []  # Reset batch

# Process any items left over in the final partial batch
for item in batch:
    pdf_obj = item['pdf']
    # Your processing here

Memory Management

# For memory-efficient processing
def process_pdf_efficiently(sample):
    pdf_obj = sample['pdf']
    
    # Extract only what you need
    metadata = {
        'num_pages': len(pdf_obj.pages) if hasattr(pdf_obj, 'pages') else 0,
        'title': pdf_obj.metadata.get('Title', '') if hasattr(pdf_obj, 'metadata') and pdf_obj.metadata else ''
    }
    
    # Extract only the first page's text instead of the whole document
    first_page_text = ""
    if pdf_obj.pages:
        first_page_text = pdf_obj.pages[0].extract_text() or ""
    
    return metadata, first_page_text

# Use generator for memory efficiency
def pdf_generator():
    for sample in dataset['train']:
        yield process_pdf_efficiently(sample)

📈 Use Cases

  1. Large Language Model Training: Academic domain-specific text
  2. Information Retrieval: Document search and recommendation
  3. Research Analytics: Trend analysis and impact prediction
  4. Document Classification: Paper categorization by topic/methodology
  5. Citation Networks: Academic relationship mapping
  6. Text Summarization: Abstract and conclusion extraction (a rough extraction heuristic is sketched after this list)
  7. Knowledge Extraction: Methodology and result mining
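
For the text summarization use case, one rough heuristic for pulling out the abstract is to grab the text between the "Abstract" heading and the first "1 Introduction" heading on the first page. This is only a regex sketch, not a robust parser, and heading conventions vary between papers; it reuses the dataset object from Basic Usage:

import re

sample = dataset['train'][0]
pdf_obj = sample['pdf']
first_page = (pdf_obj.pages[0].extract_text() or "") if pdf_obj.pages else ""

# Text between "Abstract" and "1 Introduction", ignoring case and line breaks
match = re.search(r'abstract\s*(.*?)\n\s*1\s+introduction', first_page,
                  re.IGNORECASE | re.DOTALL)
if match:
    abstract = ' '.join(match.group(1).split())
    print(abstract[:300])
else:
    print("No clearly delimited abstract found on the first page")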

πŸ” Quality Notes

  • All PDFs are verified and accessible
  • Original filenames and metadata preserved where possible
  • Organized structure for efficient browsing and filtering
  • Compatible with standard PDF processing libraries (a pypdf example is sketched below)
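
To illustrate compatibility with other standard PDF libraries, a locally downloaded file (for example, one fetched with hf_hub_download as sketched earlier) can also be opened with pypdf (pip install pypdf). The path below is a hypothetical local file:

from pypdf import PdfReader

reader = PdfReader("paper.pdf")  # hypothetical local path
print(f"{len(reader.pages)} pages")
print(reader.pages[0].extract_text()[:500])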

πŸ“ Citation

If you use this dataset in your research, please cite:

@misc{sanyal2025sparkscientificallycreativeidea,
      title={Spark: A System for Scientifically Creative Idea Generation}, 
      author={Aishik Sanyal and Samuel Schapiro and Sumuk Shashidhar and Royce Moon and Lav R. Varshney and Dilek Hakkani-Tur},
      year={2025},
      eprint={2504.20090},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2504.20090}, 
}

📄 License

MIT License - Please respect the original licensing and copyright of individual papers. This dataset is provided for research and educational purposes.

πŸ™ Acknowledgments

  • OpenReview: For hosting and providing access to academic research
  • Research Community: For contributing valuable academic content
  • HuggingFace: For providing the datasets infrastructure
  • PDF Processing Libraries: pdfplumber and related tools

πŸ› Issues & Support

If you encounter any issues with the dataset:

  1. Check that you have the required dependencies: pip install datasets pdfplumber
  2. Ensure you're using the latest version of the datasets library
  3. For PDF-specific issues, refer to the pdfplumber documentation
  4. Report dataset issues on the HuggingFace discussion page

🔄 Updates

This dataset was created in 2025 and represents a snapshot of OpenReview content. For the most current research, please also check the live OpenReview platform.


Happy Researching! 🚀