---
license: mit
task_categories:
- text-classification
- token-classification
language:
- en
tags:
- security
- rl
- kubernetes
- terraform
- config-verification
- verifiers
- metadata-only
pretty_name: Security Verifiers E2 - Config Verification (Metadata)
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: meta
path: data/meta-*
dataset_info:
features:
- name: section
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: payload_json
dtype: string
- name: version
dtype: string
- name: created_at
dtype: string
splits:
- name: meta
num_bytes: 2380
num_examples: 6
download_size: 5778
dataset_size: 2380
---
# Security Verifiers E2: Security Configuration Verification (Public Metadata)

> ⚠️ **This is a PUBLIC metadata-only repository.** The full datasets are hosted privately to prevent training contamination. See below for access instructions.
## Overview
E2 is a tool-grounded configuration auditing environment for Kubernetes and Terraform. This repository contains only the sampling metadata that describes how the private datasets were constructed.
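The `payload_json` field in the schema above stores each record's parameters as a JSON-encoded string, so decoding that field is the main step when consuming the metadata. A minimal sketch (the row values here are invented for illustration, not taken from the actual dataset):

```python
import json

# An illustrative row matching the card's feature schema; the values are
# made up for this example.
row = {
    "section": "sampling",
    "name": "kubernetes",
    "description": "Sampling parameters for the K8s manifests",
    "payload_json": json.dumps({"seed": 42, "max_files_per_repo": 5}),
    "version": "1.0.0",
    "created_at": "2025-01-01T00:00:00Z",
}

# payload_json is a JSON-encoded string, so decode it to get a dict.
payload = json.loads(row["payload_json"])
print(payload["seed"])  # -> 42
```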
## Why Private Datasets?
Training contamination is a critical concern for benchmark integrity. If datasets leak into public training corpora:
- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured
By keeping evaluation datasets private with gated access, we:
- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research
## Dataset Composition
The private E2 datasets include:
### Kubernetes Configurations
- Source: Real-world K8s manifests from popular open-source projects
- Scans: KubeLinter, Semgrep, OPA/Rego policies
- Violations: Security misconfigurations, best practice violations
- Severity: Categorized (high/medium/low) based on tool outputs
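This card does not publish the exact mapping from tool findings to severity buckets, but conceptually it reduces to a lookup plus an aggregation. A hedged sketch using real KubeLinter check names (the bucket assignments here are assumptions, not the environment's actual mapping):

```python
# Assumed mapping from KubeLinter check names to severity buckets; the real
# categorization is part of the private pipeline and may differ.
SEVERITY_BUCKETS = {
    "privileged-container": "high",
    "run-as-non-root": "medium",
    "unset-cpu-requirements": "low",
}

_ORDER = {"low": 0, "medium": 1, "high": 2}

def categorize(finding_ids):
    """Return the highest severity bucket among a manifest's findings."""
    severities = [SEVERITY_BUCKETS.get(f, "low") for f in finding_ids] or ["low"]
    return max(severities, key=_ORDER.__getitem__)

print(categorize(["run-as-non-root", "privileged-container"]))  # -> high
```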
### Terraform Configurations
- Source: Infrastructure-as-code from real projects
- Scans: Semgrep, OPA/Rego policies, custom rules
- Violations: Security risks, compliance issues
- Severity: Weighted scoring for reward computation
## What's in This Repository?
This public repository contains:
**Sampling Metadata** (`sampling-*.json`):
- Source repository information
- File selection criteria
- Scan configurations
- Label distributions
- Reproducibility parameters
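Reproducibility parameters typically mean that, given the recorded seed, the same file sample can be regenerated. A sketch under assumed field names (`seed` and `sample_size` are placeholders, not the actual `sampling-*.json` schema):

```python
import random

# Assumed parameter names; the real sampling-*.json schema may differ.
params = {"seed": 1234, "sample_size": 3}

candidates = ["deploy.yaml", "svc.yaml", "ingress.yaml", "rbac.yaml", "netpol.yaml"]

# Sorting before sampling makes the draw independent of filesystem discovery
# order, so the same seed always yields the same selection.
rng = random.Random(params["seed"])
sample = rng.sample(sorted(candidates), params["sample_size"])
print(sample)
```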
**Tool Versions** (`tools-versions.json`):
- KubeLinter version (pinned)
- Semgrep version (pinned)
- OPA version (pinned)
- Ensures reproducible scanning
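A consumer can check a local toolchain against the pins before trusting scan results. A sketch with invented version numbers and an assumed flat `{tool: version}` layout (consult the actual `tools-versions.json` for the real schema):

```python
import json

# Invented pins and an assumed flat {tool: version} layout.
pinned = json.loads('{"kube-linter": "0.6.8", "semgrep": "1.50.0", "opa": "0.58.0"}')

def check_pins(installed):
    """Return the tools whose installed version differs from the pin."""
    return sorted(t for t, v in pinned.items() if installed.get(t) != v)

mismatches = check_pins({"kube-linter": "0.6.8", "semgrep": "1.49.0", "opa": "0.58.0"})
print(mismatches)  # -> ['semgrep']
```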
**This README**: Instructions for requesting access
## Reward Components
E2 uses tool-grounded reward functions:
- Detection Precision/Recall/F1: Against ground-truth violations
- Severity Weighting: Higher reward for catching critical issues
- Patch Delta: Reward for proposed fixes that eliminate violations
- Re-scan Verification: Patches must pass tool validation
**Multi-turn performance:** Models achieve ~0.93 reward with tool calling vs. ~0.62 without tools.
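The components above compose into a scalar reward. The sketch below uses illustrative mixing weights and severity multipliers; the environment's actual values are not published in this card:

```python
# Illustrative severity multipliers and mixing weights; not the
# environment's actual values.
SEVERITY_WEIGHT = {"high": 3.0, "medium": 2.0, "low": 1.0}

def f1(predicted, truth):
    """Detection F1 against ground-truth violation IDs."""
    tp = len(predicted & truth)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

def severity_recall(predicted, truth, severity):
    """Weighted recall: catching high-severity violations pays more."""
    total = sum(SEVERITY_WEIGHT[severity[v]] for v in truth)
    caught = sum(SEVERITY_WEIGHT[severity[v]] for v in truth & predicted)
    return caught / total if total else 0.0

def patch_delta(before, after):
    """Fraction of violations a proposed patch eliminates (per re-scan)."""
    return 1 - len(after) / len(before) if before else 0.0

truth = {"v1", "v2", "v3"}
severity = {"v1": "high", "v2": "low", "v3": "medium"}
predicted = {"v1", "v3"}

reward = (0.5 * f1(predicted, truth)
          + 0.3 * severity_recall(predicted, truth, severity)
          + 0.2 * patch_delta(truth, {"v2"}))
```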
## Requesting Access
To access the full private datasets:
- Open an access request issue: Security Verifiers Issues
- Use the title: "Dataset Access Request: E2"
- Include:
- Your name and affiliation
- Research purpose / use case
- HuggingFace username
- Commitment to not redistribute or publish the raw data
Approval criteria:
- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms
We typically respond within 2-3 business days.
## Citation
If you use this environment or metadata in your research:
```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E2: Security Configuration Verification}
}
```
## Related Resources
- GitHub Repository: intertwine/security-verifiers
- Documentation: See `EXECUTIVE_SUMMARY.md` and `PRD.md` in the repo
- Framework: Built on Prime Intellect Verifiers
- Other Environments: E1 (Network Logs), E3-E6 (in development)
## Tools
The following security tools are used for ground-truth generation:
- KubeLinter: Kubernetes YAML linting and security checks
- Semgrep: Pattern-based static analysis for K8s and Terraform
- OPA: Policy-as-code validation with Rego
## License
MIT License - See repository for full terms.
## Contact
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with ❤️ for the AI safety research community