Rust-Analyzer Semantic Analysis Dataset - Deployment Summary
🎉 Successfully Created HuggingFace Dataset!
Dataset Statistics
- Total Records: 532,821 semantic analysis events
- Source Files: 1,307 Rust files from rust-analyzer codebase
- Dataset Size: 29MB (compressed Parquet format)
- Processing Phases: 3 major compiler phases captured
Phase Breakdown
Parsing Phase: 440,096 records (9 Parquet files, 24MB)
- Syntax tree generation and tokenization
- Parse error handling and recovery
- Token-level analysis of every line of code
Name Resolution Phase: 43,696 records (1 Parquet file, 2.2MB)
- Symbol binding and scope analysis
- Import resolution patterns
- Function and struct definitions
Type Inference Phase: 49,029 records (1 Parquet file, 2.0MB)
- Type checking and inference decisions
- Variable type assignments
- Return type analysis
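The per-phase record counts above are straightforward to verify once the Parquet files are downloaded. A minimal sketch with pandas, using a tiny synthetic frame in place of the real files (the paths and rows here are illustrative, not part of the dataset):

```python
import pandas as pd

# With the real dataset you would read the Parquet files directly, e.g.
#   df = pd.read_parquet("parsing-phase/")   # path is illustrative
# Here, a tiny synthetic frame with the same `phase` column stands in.
df = pd.DataFrame({
    "phase": ["parsing", "parsing", "name_resolution", "type_inference"],
    "element_type": ["token", "token", "import", "fn"],
})

# Count records per processing phase, largest first.
counts = df.groupby("phase").size().sort_values(ascending=False)
print(counts)
```

Against the full dataset, the same groupby should report 440,096 / 43,696 / 49,029 records for the three phases.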
Technical Implementation
- Format: Parquet files with Snappy compression
- Git LFS: All files under 10MB for optimal Git LFS performance
- Schema: Strongly typed with 20 columns per record
- Chunking: Large files automatically split for size limits
Repository Structure
rust-analyser-hf-dataset/
├── README.md                          # Comprehensive documentation
├── .gitattributes                     # Git LFS configuration
├── .gitignore                         # Standard ignore patterns
├── parsing-phase/
│   ├── data-00000-of-00009.parquet    # 3.1MB, 50,589 records
│   ├── data-00001-of-00009.parquet    # 3.0MB, 50,589 records
│   ├── data-00002-of-00009.parquet    # 2.6MB, 50,589 records
│   ├── data-00003-of-00009.parquet    # 2.4MB, 50,589 records
│   ├── data-00004-of-00009.parquet    # 3.1MB, 50,589 records
│   ├── data-00005-of-00009.parquet    # 2.2MB, 50,589 records
│   ├── data-00006-of-00009.parquet    # 2.6MB, 50,589 records
│   ├── data-00007-of-00009.parquet    # 3.4MB, 50,589 records
│   └── data-00008-of-00009.parquet    # 2.1MB, 35,384 records
├── name_resolution-phase/
│   └── data.parquet                   # 2.2MB, 43,696 records
└── type_inference-phase/
    └── data.parquet                   # 2.0MB, 49,029 records
Data Schema
Each record contains 20 columns:
- Identification: `id`, `file_path`, `line`, `column`
- Phase Info: `phase`, `processing_order`
- Element Info: `element_type`, `element_name`, `element_signature`
- Semantic Data: `syntax_data`, `symbol_data`, `type_data`, `diagnostic_data`
- Metadata: `processing_time_ms`, `timestamp`, `rust_version`, `analyzer_version`
- Context: `source_snippet`, `context_before`, `context_after`
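For a concrete sense of the schema, here is an illustrative record built in pandas. Only the column names come from the schema above; every value (paths, signatures, versions) is made up for the example.

```python
import pandas as pd

# Hypothetical record following the 20-column schema; all values are
# invented placeholders, only the column names are from the schema.
record = {
    "id": "rec-000001",
    "file_path": "crates/example/src/lib.rs",
    "line": 42,
    "column": 8,
    "phase": "parsing",
    "processing_order": 1,
    "element_type": "fn",
    "element_name": "parse",
    "element_signature": "fn parse(text: &str) -> ParseResult",
    "syntax_data": "{}",
    "symbol_data": "{}",
    "type_data": "{}",
    "diagnostic_data": "{}",
    "processing_time_ms": 0.12,
    "timestamp": "2024-01-01T00:00:00Z",
    "rust_version": "1.75.0",
    "analyzer_version": "2024-01-01",
    "source_snippet": "pub fn parse(text: &str) -> ParseResult { /* ... */ }",
    "context_before": "// preceding lines",
    "context_after": "// following lines",
}

df = pd.DataFrame([record])
print(df.shape)  # one row, 20 columns
```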
Deployment Readiness
- ✅ Git Repository: Initialized with proper LFS configuration
- ✅ File Sizes: All files under 10MB for Git LFS compatibility
- ✅ Documentation: Comprehensive README with usage examples
- ✅ Metadata: Proper HuggingFace dataset tags and structure
- ✅ License: AGPL-3.0, consistent with rust-analyzer
- ✅ Quality: All records validated and properly formatted
Next Steps for HuggingFace Hub Deployment
- Create Repository:
https://huggingface.co/datasets/introspector/rust-analyser
- Add Remote:
git remote add origin https://huggingface.co/datasets/introspector/rust-analyser
- Push with LFS:
git push origin main
- Verify Upload: Check that all Parquet files are properly uploaded via LFS
Unique Value Proposition
This dataset is unprecedented in the ML/AI space:
- Self-referential: rust-analyzer analyzing its own codebase
- Multi-phase: Captures 3 distinct compiler processing phases
- Comprehensive: Every line of code analyzed with rich context
- Production-ready: Generated by the most advanced Rust language server
- Research-grade: Suitable for training code understanding models
Use Cases
- AI Model Training: Code completion, type inference, bug detection
- Compiler Research: Understanding semantic analysis patterns
- Educational Tools: Teaching compiler internals and language servers
- Benchmarking: Evaluating code analysis tools and techniques
🚀 Ready for Deployment!
The dataset is now ready to be pushed to the HuggingFace Hub at: https://huggingface.co/datasets/introspector/rust-analyser
This represents a significant contribution to the open-source ML/AI community, providing unprecedented insight into how advanced language servers process code.